Sample records for simulation-based interval two-stage

  1. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    PubMed

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies whose actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Reported P-values, point estimates, and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness; proper statistical inference procedures should be used.
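
    The "at least as extreme" computation above can be made concrete. The Python sketch below (design parameters and observed counts are invented) computes an exact p-value for a Simon two-stage design under the common ordering by total number of responses; note that the authors' method instead orders sample paths by conditional likelihood, which this sketch does not implement.

        from scipy.stats import binom

        def simon_two_stage_pvalue(x1, x2, n1, r1, n2, p0):
            """Exact p-value for a Simon two-stage design, ordering
            outcomes by total responses (stop after stage 1 if X1 <= r1)."""
            if x1 <= r1:                        # trial stopped after stage 1
                return binom.sf(x1 - 1, n1, p0)       # P(X1 >= x1 | p0)
            total = x1 + x2
            p = 0.0
            for k in range(r1 + 1, n1 + 1):     # stage-1 counts that continue
                # P(X1 = k) * P(X2 >= total - k), both under H0: rate = p0
                p += binom.pmf(k, n1, p0) * binom.sf(total - k - 1, n2, p0)
            return p

        # hypothetical design n1=13, r1=3, n2=30; observed 5 then 12 responses
        print(simon_two_stage_pvalue(5, 12, 13, 3, 30, p0=0.2))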

  2. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

    In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To do so, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines Kendall's tau statistic using the expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to an AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b).
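
    A rough way to see the "expected numbers of concordant and discordant pairs" idea: impute each censored coordinate from a working distribution on its interval and average Kendall's tau over imputations. The sketch below assumes uniform imputation on invented intervals, which is closer in spirit to the multiple-imputation comparator than to the authors' score-based statistic.

        import numpy as np

        rng = np.random.default_rng(0)

        def expected_kendall_tau(LX, RX, LY, RY, n_draws=2000):
            """Average Kendall's tau over uniform imputations of each
            interval-censored coordinate on its observed interval."""
            n, taus = len(LX), []
            for _ in range(n_draws):
                x = rng.uniform(LX, RX)          # impute X within [LX, RX]
                y = rng.uniform(LY, RY)          # impute Y within [LY, RY]
                s = 0
                for i in range(n):               # concordant minus discordant
                    for j in range(i + 1, n):
                        s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
                taus.append(2.0 * s / (n * (n - 1)))
            return float(np.mean(taus))

        # toy intervals for five subjects
        LX = np.array([0.0, 1.0, 2.0, 3.0, 4.0]); RX = LX + 1.5
        LY = np.array([0.5, 1.2, 2.5, 2.8, 4.5]); RY = LY + 1.0
        print(expected_kendall_tau(LX, RX, LY, RY))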

  3. A queuing-theory-based interval-fuzzy robust two-stage programming model for environmental management under uncertainty

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Li, Y. P.; Huang, G. H.

    2012-06-01

    In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed through introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters but also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arriving rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.

  4. Joint modelling compared with two stage methods for analysing longitudinal data and prospective outcomes: A simulation study of childhood growth and BP.

    PubMed

    Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K

    2017-02-01

    There is a growing debate with regard to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.

  5. Using observed postconstruction peak discharges to evaluate a hydrologic and hydraulic design model, Boneyard Creek, Champaign and Urbana, Illinois

    USGS Publications Warehouse

    Over, Thomas M.; Soong, David T.; Holmes, Robert R.

    2011-01-01

    Boneyard Creek—which drains an urbanized watershed in the cities of Champaign and Urbana, Illinois, including part of the University of Illinois at Urbana-Champaign (UIUC) campus—has historically been prone to flooding. Using the Stormwater Management Model (SWMM), a hydrologic and hydraulic model of Boneyard Creek was developed for the design of the projects making up the first phase of a long-term plan for flood control on Boneyard Creek, and the construction of the projects was completed in May 2003. The U.S. Geological Survey, in cooperation with the Cities of Champaign and Urbana and UIUC, installed and operated stream and rain gages in order to obtain data for evaluation of the design-model simulations. In this study, design-model simulations were evaluated by using observed postconstruction precipitation and peak-discharge data. Between May 2003 and September 2008, five high-flow events on Boneyard Creek satisfied the study criterion. The five events were simulated with the design model by using observed precipitation. The simulations were run with two different values of the parameter controlling the soil moisture at the beginning of the storms and two different ways of spatially distributing the precipitation, making a total of four simulation scenarios. The simulated and observed peak discharges and stages were compared at gaged locations along the Creek. The discharge at one of these locations was deemed to be critical for evaluating the design model. The uncertainty of the measured peak discharge was also estimated at the critical location with a method based on linear regression of the stage and discharge relation, an estimate of the uncertainty of the acoustic Doppler velocity meter measurements, and the uncertainty of the stage measurements. For four of the five events, the simulated peak discharges lie within the 95-percent confidence interval of the observed peak discharges at the critical location; the fifth was just outside the upper end of this interval. For two of the four simulation scenarios, the simulation results for one event at the critical location were numerically unstable in the vicinity of the discharge peak. For the remaining scenarios, the simulated peak discharges over the five events at the critical location differ from the observed peak discharges (simulated minus observed) by an average of 7.7 and -1.5 percent, respectively. The simulated peak discharges over the four events for which all scenarios have numerically stable results at the critical location differ from the observed peak discharges (simulated minus observed) by an average of -6.8, 4.0, -5.4, and 1.5 percent for the four scenarios, respectively. Overall, the discharge peaks simulated for this study at the critical location are approximately balanced between overprediction and underprediction and do not indicate significant model bias or inaccuracy. Additional comparisons were made by using peak stages at the critical location and two additional sites and using peak discharges at one additional site. These comparisons showed the same pattern of differences between observed and simulated values across events but varying biases depending on streamgage and measurement type (discharge or stage). Altogether, the results from this study show no clear evidence that the design model is significantly inaccurate or biased and, therefore, no clear evidence that the modeled flood-control projects in Champaign and on the University of Illinois campus have increased flood stages or discharges downstream in Urbana.

  6. An inventory-theory-based interval-parameter two-stage stochastic programming model for water resources management

    NASA Astrophysics Data System (ADS)

    Suo, M. Q.; Li, Y. P.; Huang, G. H.

    2011-09-01

    In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed through integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect the transferring batch (the quantity transferred at one time) and period (the corresponding cycle time) in decision-making problems. A case study of water allocation in water resources management planning demonstrates the applicability of this method. Under different flow levels, different transferring measures are generated by this method when the promised water cannot be delivered. Moreover, interval solutions associated with different transferring costs have also been provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and afford useful information for decision makers under uncertainty.
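
    The recourse structure behind ITSP-type models fits in a few lines. The Python/scipy sketch below (all numbers invented) fixes a first-stage water target, penalizes second-stage shortages under three flow scenarios, and solves the LP at the lower and upper bounds of an interval-valued benefit coefficient, producing an interval solution in the ITSP style.

        import numpy as np
        from scipy.optimize import linprog

        flows = np.array([3.0, 5.0, 8.0])     # low/medium/high flow scenarios
        probs = np.array([0.2, 0.6, 0.2])

        def solve(benefit, penalty, x_max=7.0):
            # variables: [x, s1, s2, s3], s_h = shortage in scenario h;
            # maximize benefit*x - sum_h probs[h]*penalty*s_h
            c = np.concatenate([[-benefit], probs * penalty])  # linprog minimizes
            A_ub = np.hstack([np.ones((3, 1)), -np.eye(3)])    # x - s_h <= flow_h
            bounds = [(0, x_max)] + [(0, None)] * 3
            res = linprog(c, A_ub=A_ub, b_ub=flows, bounds=bounds)
            return res.x[0], -res.fun

        # an interval benefit coefficient [2.0, 5.0] yields an interval target
        for b in (2.0, 5.0):
            print(solve(benefit=b, penalty=6.0))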

  7. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    PubMed

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of a self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene. Reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skill acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2% (95% confidence interval: -18 to 15%). There were also no significant differences between the two-stage and four-stage groups in chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm), or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage teaching technique, although the pass rate was 2 percentage points lower (95% confidence interval: -18 to 15%) with the two-stage technique.
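
    The reported interval can be reproduced approximately with a standard Wald interval for a difference of proportions; the pass counts below (41/72 and 41/70) are inferred from the reported group sizes and pass rates, not taken from the paper.

        from math import sqrt

        def diff_ci(x1, n1, x2, n2, z=1.96):
            """Wald 95% CI for the difference in two proportions."""
            p1, p2 = x1 / n1, x2 / n2
            d = p1 - p2
            se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
            return d, (d - z * se, d + z * se)

        # two-stage: 41/72 ~ 57%; four-stage: 41/70 ~ 59%
        d, (lo, hi) = diff_ci(41, 72, 41, 70)
        print(f"diff = {d:+.0%}, 95% CI ({lo:+.0%}, {hi:+.0%})")
        print("noninferior at the 20% margin:", lo > -0.20)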

  8. Statistical inference for the within-device precision of quantitative measurements in assay validation.

    PubMed

    Liu, Jen-Pei; Lu, Li-Tien; Liao, C T

    2009-09-01

    Intermediate precision is one of the most important characteristics for the evaluation of precision in assay validation. The current methods for evaluation of within-device precision recommended by the Clinical and Laboratory Standards Institute (CLSI) guideline EP5-A2 are based on the point estimator. On the other hand, in addition to point estimators, confidence intervals can provide a range for the within-device precision with a probability statement. Therefore, we suggest a confidence interval approach for assessment of the within-device precision. Furthermore, under the two-stage nested random-effects model recommended by the approved CLSI guideline EP5-A2, in addition to the current Satterthwaite approximation and modified large sample (MLS) methods, we apply the technique of generalized pivotal quantities (GPQ) to derive the confidence interval for the within-device precision. Data from the approved CLSI guideline EP5-A2 illustrate the application of the confidence interval approach and a comparison of results among the three methods. Results of a simulation study on the coverage probability and expected length of the three methods are reported. The proposed GPQ-based confidence interval method is also extended to consider between-laboratory variation for precision assessment.
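
    For a flavor of the GPQ technique under a one-factor nested design (runs within device, replicates within runs), the Monte Carlo sketch below builds a generalized pivotal quantity for the within-device variance from the two observed mean squares. The mean squares, the design dimensions, and this particular GPQ construction are illustrative assumptions, not the guideline's worked example.

        import numpy as np

        rng = np.random.default_rng(1)

        def gpq_within_device_ci(ms_run, ms_err, d, n, n_draws=100_000, alpha=0.05):
            """GPQ-style CI for sqrt(sigma_run^2 + sigma_error^2): substitute
            chi-square pivots for the two independent mean squares."""
            df_run, df_err = d - 1, d * (n - 1)
            u = rng.chisquare(df_run, n_draws)
            w = rng.chisquare(df_err, n_draws)
            r_err = df_err * ms_err / w                 # GPQ for sigma_error^2
            r_run = np.maximum(0.0, (df_run * ms_run / u - r_err) / n)
            lo, hi = np.quantile(r_run + r_err, [alpha / 2, 1 - alpha / 2])
            return np.sqrt([lo, hi])                    # report as SD, as in EP5-A2

        # made-up inputs: 20 runs, 2 replicates per run
        print(gpq_within_device_ci(ms_run=4.0, ms_err=1.5, d=20, n=2))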

  9. Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž

    Over the past few decades, interval arithmetic has been attracting widespread interest from the scientific community. With the expansion of computing power, scientific computing is encountering a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known implementation of interval arithmetic. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We have proposed a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful case of interval arithmetic application in the reduction of interval width size in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic are addressed using two numerical examples.
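
    The idea of "two natural interval extensions" is easy to demonstrate: the logistic map can be written as r*x*(1-x) or as r*x - r*x*x, and each algebraically equivalent form induces its own interval enclosure of the same orbit. The sketch below (mpmath's interval type; map, parameter, and seed chosen arbitrarily) iterates both forms and checks that the enclosures never become disjoint, which is what the fundamental theorem of interval analysis guarantees for a sound implementation.

        from mpmath import iv

        iv.dps = 30                        # working precision of the intervals
        r = iv.mpf('3.8')
        x = y = iv.mpf('0.4')

        for n in range(1, 31):
            x = r * x * (1 - x)            # natural extension 1: r*x*(1-x)
            y = r * y - r * y * y          # natural extension 2: r*x - r*x^2
            disjoint = x.b < y.a or y.b < x.a
            if n % 10 == 0 or disjoint:
                # both widths grow, but the enclosures must keep intersecting
                print(n, x.delta, y.delta, "DISJOINT!" if disjoint else "ok")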

  10. Performance of toxicity probability interval based designs in contrast to the continual reassessment method

    PubMed Central

    Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.

    2016-01-01

    Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), the modified toxicity probability interval (mTPI) design, and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerated dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both mTPI and BOIN. These trends were more pronounced with an increasing number of dose levels. PMID:27435150
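
    One common form of such an accuracy index (this normalization follows Cheung's allocation-based index and is an assumption, not necessarily the paper's exact definition) weights each dose's distance from the target toxicity rate by the fraction of patients treated there; a value of 1 means every patient was treated at the true MTD.

        import numpy as np

        def accuracy_index(p_tox, target, allocation):
            """p_tox: true toxicity probability per dose; allocation:
            fraction of patients treated at each dose (sums to 1)."""
            p_tox, allocation = np.asarray(p_tox), np.asarray(allocation)
            dist = np.abs(p_tox - target)
            return 1.0 - len(p_tox) * np.sum(dist * allocation) / np.sum(dist)

        # invented scenario: dose 3 is the true MTD at a 25% target
        print(accuracy_index([0.05, 0.12, 0.25, 0.40], 0.25,
                             [0.10, 0.20, 0.55, 0.15]))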

  11. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    PubMed

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most statistical methods for diagnostic studies focus on disease processes with a binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the diagnostic abilities for early disease detection between two markers, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out for a variety of settings for the purpose of evaluating and comparing the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches.
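
    As a quick stand-in for the paper's parametric and non-parametric intervals, a paired bootstrap gives a percentile interval for the difference between two markers' early-stage sensitivities; the data below are simulated, and the 0/1 "correctly classified as early stage" encoding is an assumption made for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def boot_ci_diff(hits_a, hits_b, n_boot=10_000, alpha=0.05):
            """Percentile bootstrap CI for sens_A - sens_B, resampling the
            same early-stage subjects for both markers (paired design)."""
            n = len(hits_a)
            idx = rng.integers(0, n, (n_boot, n))
            diffs = hits_a[idx].mean(axis=1) - hits_b[idx].mean(axis=1)
            return np.quantile(diffs, [alpha / 2, 1 - alpha / 2])

        a = rng.binomial(1, 0.70, 120)   # marker A: ~70% early-stage sensitivity
        b = rng.binomial(1, 0.60, 120)   # marker B: ~60%
        print(a.mean() - b.mean(), boot_ci_diff(a, b))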

  12. A coupled surface-water and ground-water flow model (MODBRANCH) for simulation of stream-aquifer interaction

    USGS Publications Warehouse

    Swain, Eric D.; Wexler, Eliezer J.

    1996-01-01

    Ground-water and surface-water flow models traditionally have been developed separately, with interaction between subsurface flow and streamflow either not simulated at all or accounted for by simple formulations. In areas with dynamic and hydraulically well-connected ground-water and surface-water systems, stream-aquifer interaction should be simulated using deterministic responses of both systems coupled at the stream-aquifer interface. Accordingly, a new coupled ground-water and surface-water model was developed by combining the U.S. Geological Survey models MODFLOW and BRANCH; the interfacing code is referred to as MODBRANCH. MODFLOW is the widely used modular three-dimensional, finite-difference ground-water model, and BRANCH is a one-dimensional numerical model commonly used to simulate unsteady flow in open-channel networks. MODFLOW was originally written with the River package, which calculates leakage between the aquifer and stream, assuming that the stream's stage remains constant during one model stress period. A simple streamflow routing model has been added to MODFLOW, but is limited to steady flow in rectangular, prismatic channels. To overcome these limitations, the BRANCH model, which simulates unsteady, nonuniform flow by solving the St. Venant equations, was restructured and incorporated into MODFLOW. Terms that describe leakage between stream and aquifer as a function of streambed conductance and differences in aquifer and stream stage were added to the continuity equation in BRANCH. Thus, leakage between the aquifer and stream can be calculated separately in each model, or leakages calculated in BRANCH can be used in MODFLOW. Total mass in the coupled models is accounted for and conserved. The BRANCH model calculates new stream stages for each time interval in a transient simulation based on upstream boundary conditions, stream properties, and initial estimates of aquifer heads. Next, aquifer heads are calculated in MODFLOW based on stream stages calculated by BRANCH, aquifer properties, and stresses. This process is repeated until convergence criteria are met for head and stage. Because time steps used in ground-water modeling can be much longer than time intervals used in surface-water simulations, provision has been made for handling multiple BRANCH time intervals within one MODFLOW time step. An option was also added to BRANCH to allow the simulation of channel drying and rewetting. The coupled model was verified by using data from previous studies; by comparing results with output from a simpler, four-point implicit, open-channel flow model linked with MODFLOW; and by comparison to field studies of L-31N canal in southern Florida.
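
    The stage-head coupling loop can be sketched in miniature: each pass computes leakage from the current stage and head, re-solves stand-ins for the stream and aquifer relations, and repeats until both converge. The two algebraic "solvers" and all coefficients below are invented placeholders for BRANCH and MODFLOW.

        def coupled_step(stage, head, C=0.5, inflow=10.0, recharge=1.0,
                         tol=1e-9, max_iter=100):
            """Fixed-point iteration on (stage, head) with leakage C*(stage-head)."""
            for it in range(max_iter):
                leak = C * (stage - head)           # stream loses, aquifer gains
                new_stage = (inflow - leak) / 4.0   # stand-in stage relation
                new_head = (recharge + leak) / 2.0  # stand-in head relation
                if abs(new_stage - stage) < tol and abs(new_head - head) < tol:
                    return new_stage, new_head, it  # converged
                stage, head = new_stage, new_head
            raise RuntimeError("no convergence")

        print(coupled_step(stage=2.0, head=1.0))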

  13. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. The results demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties in various system components. Moreover, information about pollutant emissions is obtained, which would help managers to adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.

  14. Sensitivity of diabetic retinopathy associated vision loss to screening interval in an agent-based/discrete event simulation model.

    PubMed

    Day, T Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2014-04-01

    To examine the effect of changes to the screening interval on the incidence of vision loss in a simulated cohort of Veterans with diabetic retinopathy (DR). This simulation allows us to examine potential interventions without putting patients at risk. Simulated randomized controlled trial. We developed a hybrid simulation that combines an agent-based population of simulated Veterans--built using abstracted data from a retrospective cohort of real-world diabetic Veterans--with a discrete event simulation (DES) eye clinic at which they seek treatment for DR. We compared vision loss under varying screening policies in a simulated population of 5000 Veterans over 50 independent ten-year simulation runs for each group. Diabetic retinopathy associated vision loss increased as the screening interval was extended from one to five years (p<0.0001). This increase was concentrated in the third year of the screening interval (p<0.01). There was no increase in vision loss associated with increasing the screening interval from one year to two years (p=0.98). Increasing the screening interval for diabetic patients who have not yet developed diabetic retinopathy from 1 to 2 years appears safe, while increasing the interval to 3 years heightens the risk of vision loss.
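
    The direction of the finding can be reproduced with a model far simpler than the paper's hybrid simulation: below, each simulated patient's disease onset year is random, detection waits for the next scheduled screen, and every unscreened year with disease carries a loss hazard. All rates are invented, so only the qualitative pattern across intervals is meaningful.

        import numpy as np

        rng = np.random.default_rng(3)

        def vision_loss_rate(interval_years, n=5000, horizon=10,
                             p_onset=0.06, p_loss_per_year=0.08):
            onset = rng.geometric(p_onset, n)     # year retinopathy appears
            screens = np.arange(interval_years, horizon + 1, interval_years)
            losses = 0
            for t0 in onset[onset <= horizon]:
                caught = screens[screens >= t0]   # first screen at/after onset
                delay = caught[0] - t0 if caught.size else horizon - t0
                losses += rng.random() < 1 - (1 - p_loss_per_year) ** delay
            return losses / n

        for k in (1, 2, 3, 5):                    # screening interval, years
            print(k, vision_loss_rate(k))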

  15. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials in which one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness.

  16. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Archambault, L.; Starkschall, G.

    2007-11-15

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 ± 11% and 14 ± 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 ± 7% and 8 ± 15% with and without audio-visual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.

  17. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation.

    PubMed

    Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S

    2007-11-01

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation and delivery gate thresholds to within 0.3%. For patient data analysis, differences between simulation and delivery gate thresholds are reported as a fraction of the total respiratory motion range. For the smaller phase interval, the differences between simulation and delivery gate thresholds are 8 +/- 11% and 14 +/- 21% with and without audio-visual biofeedback, respectively, when the simulation gate threshold is determined based on the mean respiratory displacement within the 40%-60% gating phase interval. For the longer phase interval, corresponding differences are 4 +/- 7% and 8 +/- 15% with and without audiovisual biofeedback, respectively. Alternatively, when the simulation gate threshold is determined based on the maximum average respiratory displacement within the gating phase interval, greater differences between simulation and delivery gate thresholds are observed. A relationship between retrospective simulation gate threshold and prospective delivery gate threshold for respiratory gating is established and validated for regular and nonregular respiratory motion. Using this relationship, the delivery gate threshold can be reliably estimated at the time of 4D CT simulation, thereby improving the accuracy and efficiency of respiratory-gated radiation delivery.
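
    The duty-cycle matching step has a compact numerical core: given the duty cycle implied by the phase interval chosen at simulation, the delivery threshold is the displacement below which the external trace spends exactly that fraction of time. The breathing trace below is a synthetic sinusoid, and treating "beam on" as "displacement below threshold" (near exhale) is a simplifying assumption.

        import numpy as np

        t = np.linspace(0, 60, 6000)                   # 60 s trace at 100 Hz
        trace = 0.5 * (1 - np.cos(2 * np.pi * t / 4))  # 4 s period, range 0..1

        def delivery_threshold(trace, duty_cycle):
            """Displacement threshold whose beam-on fraction (samples below
            the threshold) matches the prescribed duty cycle."""
            return np.quantile(trace, duty_cycle)

        duty = 0.40                                    # from the simulation stage
        thr = delivery_threshold(trace, duty)
        print(thr, np.mean(trace <= thr))              # fraction ~= duty cycle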

  18. European vegetation during Marine Oxygen Isotope Stage-3

    NASA Astrophysics Data System (ADS)

    Huntley, Brian; Alfano, Mary J. o.; Allen, Judy R. M.; Pollard, Dave; Tzedakis, Polychronis C.; de Beaulieu, Jacques-Louis; Grüger, Eberhard; Watts, Bill

    2003-03-01

    European vegetation during representative "warm" and "cold" intervals of stage-3 was inferred from pollen analytical data. The inferred vegetation differs in character and spatial pattern from that of both fully glacial and fully interglacial conditions and exhibits contrasts between warm and cold intervals, consistent with other evidence for stage-3 palaeoenvironmental fluctuations. European vegetation thus appears to have been an integral component of millennial environmental fluctuations during stage-3; vegetation responded to this scale of environmental change and through feedback mechanisms may have had effects upon the environment. The pollen-inferred vegetation was compared with vegetation simulated using the BIOME 3.5 vegetation model for climatic conditions simulated using a regional climate model (RegCM2) nested within a coupled global climate and vegetation model (GENESIS-BIOME). Despite some discrepancies in detail, both approaches capture the principal features of the present vegetation of Europe. The simulated vegetation for stage-3 differs markedly from that inferred from pollen analytical data, implying substantial discrepancy between the simulated climate and that actually prevailing. Sensitivity analyses indicate that the simulated climate is too warm and probably has too short a winter season. These discrepancies may reflect incorrect specification of sea surface temperature or sea-ice conditions and may be exacerbated by vegetation-climate feedback in the coupled global model.

  19. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology

    PubMed Central

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-01-01

    Purpose: In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e. when no adaptation is performed, using wrong prior parameters. Methods: We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results: Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with a small first cohort, but not better than the balanced two-stage design. Conclusions: Two-stage ADs are useful when prior parameters are unreliable. In the case of a small first cohort, more adaptations are needed, but these designs are complex to implement. PMID:26123680

  20. Influence of the Size of Cohorts in Adaptive Design for Nonlinear Mixed Effects Models: An Evaluation by Simulation for a Pharmacokinetic and Pharmacodynamic Model for a Biomarker in Oncology.

    PubMed

    Lestini, Giulia; Dumont, Cyrielle; Mentré, France

    2015-10-01

    In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with a small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In the case of a small first cohort, more adaptations are needed, but these designs are complex to implement.
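
    The adaptive loop, stripped to one parameter, looks like the sketch below: after each cohort the parameter is re-estimated and the next design point is chosen to maximize the Fisher information. The one-parameter exponential model, the least-squares refit, and all numbers are toy stand-ins for the nonlinear mixed-effects machinery of PFIM.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(5)
        theta_true, sigma = 0.7, 0.05       # truth (psi*) and residual SD

        def fisher_info(theta, times):
            # model y = exp(-theta*t); dy/dtheta = -t*exp(-theta*t)
            g = -times * np.exp(-theta * times)
            return np.sum(g * g) / sigma**2

        theta_hat = 1.5                     # unreliable prior guess (psi0)
        times = np.array([0.5])             # first-cohort design from the prior
        for cohort in range(3):
            t_next = minimize_scalar(
                lambda t: -fisher_info(theta_hat, np.array([t])),
                bounds=(0.1, 5.0), method="bounded").x
            times = np.append(times, t_next)
            # simulate all observations afresh (a simplification) and refit
            y = np.exp(-theta_true * times) + rng.normal(0, sigma, times.size)
            theta_hat = minimize_scalar(
                lambda th: np.sum((y - np.exp(-th * times)) ** 2),
                bounds=(0.05, 3.0), method="bounded").x
            print(cohort + 1, round(t_next, 3), round(theta_hat, 3))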

  21. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by an on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
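
    The arithmetic of a counter-plus-interpolator instrument is compact enough to show directly; the readings and the sign convention below are illustrative assumptions, and a real device would also apply the nonlinearity correction described above.

        CLOCK_HZ = 250e6
        T_CLK = 1.0 / CLOCK_HZ     # 4 ns coarse quantum from the 250 MHz clock

        def interval(n_clk, frac_start, frac_stop):
            """n_clk: whole clock periods counted between start and stop;
            frac_*: interpolator readings, as fractions of one clock period
            from the event edge to the next clock edge."""
            return (n_clk + frac_start - frac_stop) * T_CLK

        print(interval(n_clk=12, frac_start=0.73, frac_stop=0.21))  # ~50.08 ns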

  22. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    NASA Astrophysics Data System (ADS)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming based on critical machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical, for valid reasons, and such problems are known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded an optimal schedule in all eleven cases. A procedure for identifying the best lot streaming strategy was suggested.

  23. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that an ANN's point predictions lack reliability, since the uncertainty of the predictions is not quantified, and this limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: in stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; in stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble with an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. For a selected hydrograph in the validation data set, most of the observed flows lie within the constructed prediction interval, which therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when the ensemble mean value is used as the forecast, peak flows are predicted with improved accuracy compared to traditional single-point ANN forecasts.
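
    Stage 2's output can be mimicked cheaply: perturb the parameters of any fitted model to build an ensemble, take the ensemble envelope as the prediction interval, and score it against the three objectives listed above. The polynomial below is a stand-in for the ANN, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(4)
        x = np.linspace(0, 1, 200)
        y = np.sin(2 * np.pi * x) + rng.normal(0, 0.1, x.size)  # "observed" flows

        coef = np.polyfit(x, y, 5)                 # stage 1: fit the model
        ens = np.array([np.polyval(coef + rng.normal(0, 0.02, coef.size), x)
                        for _ in range(100)])      # stage 2: parameter ensemble
        lo, hi = ens.min(axis=0), ens.max(axis=0)  # envelope = prediction interval

        coverage = np.mean((y >= lo) & (y <= hi))  # objective (ii)
        width = np.mean(hi - lo)                   # objective (iii)
        resid_var = np.var(y - ens.mean(axis=0))   # objective (i)
        print(f"coverage={coverage:.1%}  width={width:.3f}  var={resid_var:.4f}")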

  24. Visceral Leishmaniasis on the Indian Subcontinent: Modelling the Dynamic Relationship between Vector Control Schemes and Vector Life Cycles.

    PubMed

    Poché, David M; Grant, William E; Wang, Hsiao-Hsuan

    2016-08-01

    Visceral leishmaniasis (VL) is a disease caused by two known vector-borne parasite species (Leishmania donovani, L. infantum), transmitted to man by phlebotomine sand flies (species: Phlebotomus and Lutzomyia), resulting in ≈50,000 human fatalities annually, ≈67% occurring on the Indian subcontinent. Indoor residual spraying is the current method of sand fly control in India, but alternative means of vector control, such as the treatment of livestock with systemic insecticide-based drugs, are being evaluated. We describe an individual-based, stochastic, life-stage-structured model that represents a sand fly vector population within a village in India and simulates the effects of vector control via fipronil-based drugs orally administered to cattle, which target both blood-feeding adults and larvae that feed on host feces. Simulation results indicated efficacy of fipronil-based control schemes in reducing sand fly abundance depended on timing of drug applications relative to seasonality of the sand fly life cycle. Taking into account cost-effectiveness and logistical feasibility, two of the most efficacious treatment schemes reduced population peaks occurring from April through August by ≈90% (applications 3 times per year at 2-month intervals initiated in March) and >95% (applications 6 times per year at 2-month intervals initiated in January) relative to no control, with the cumulative number of sand fly days occurring April-August reduced by ≈83% and ≈97%, respectively, and more specifically during the summer months of peak human exposure (June-August) by ≈85% and ≈97%, respectively. Our model should prove useful in a priori evaluation of the efficacy of fipronil-based drugs in controlling leishmaniasis on the Indian subcontinent and beyond.

  25. Development of gestation-specific reference intervals for thyroid hormones in normal pregnant Northeast Chinese women: What is the rational division of gestation stages for establishing reference intervals for pregnancy women?

    PubMed

    Liu, Jianhua; Yu, Xiaojun; Xia, Meng; Cai, Hong; Cheng, Guixue; Wu, Lina; Li, Qiang; Zhang, Ying; Sheng, Mengyuan; Liu, Yong; Qin, Xiaosong

    2017-04-01

    A laboratory- and region-specific trimester-related reference interval for thyroid hormone assessment of pregnant women has been recommended. Whether division by trimester is suitable requires verification. Here, we tried to establish appropriate reference intervals of thyroid-related hormones and antibodies for normal pregnant women in Northeast China. A total of 947 pregnant women who underwent routine prenatal care were grouped via two methods. The first method entailed division by trimester: stages T1, T2, and T3. The second method entailed dividing the T1, T2, and T3 stages into two stages each: T1-1, T1-2, T2-1, T2-2, T3-1, and T3-2. Serum levels of TSH, FT3, FT4, Anti-TPO, and Anti-TG were measured by three detection systems. No significant differences were found in TSH values between the T1-1 group and the non-pregnant women group. However, the TSH value of the T1-1 group was significantly higher than that of the T1-2 group (P<0.05). The TSH values in stage T3-2 increased significantly compared to those in stage T3-1, as measured by three different assays (P<0.05). FT4 and FT3 values decreased significantly in the T2-1 and T2-2 stages compared to the previous stage (P<0.05). The serum levels of Anti-TPO and Anti-TG did not differ significantly among the six stages. The diagnosis and treatment of thyroid dysfunction during pregnancy should be based on pregnancy- and method-specific reference intervals. More detailed staging is required to assess the thyroid function of pregnant women before 20 gestational weeks.

  26. Cost-effectiveness analysis of population-based screening of hepatocellular carcinoma: Comparing ultrasonography with two-stage screening

    PubMed Central

    Kuo, Ming-Jeng; Chen, Hsiu-Hsi; Chen, Chi-Ling; Fann, Jean Ching-Yuan; Chen, Sam Li-Sheng; Chiu, Sherry Yueh-Hsia; Lin, Yu-Min; Liao, Chao-Sheng; Chang, Hung-Chuen; Lin, Yueh-Shih; Yen, Amy Ming-Fang

    2016-01-01

    AIM: To assess the cost-effectiveness of two population-based hepatocellular carcinoma (HCC) screening programs: a two-stage biomarker-ultrasound method and mass screening using abdominal ultrasonography (AUS). METHODS: In this study, we applied a Markov decision model with a societal perspective and a lifetime horizon for general population-based cohorts in an area with high HCC incidence, such as Taiwan. The accuracy of biomarkers and ultrasonography was estimated from published meta-analyses. The costs of surveillance, diagnosis, and treatment were based on a combination of published literature, Medicare payments, and medical expenditure at the National Taiwan University Hospital. The main outcome measure was cost per life-year gained with a 3% annual discount rate. RESULTS: The results show that mass screening using AUS was associated with an incremental cost-effectiveness ratio of USD 39,825 per life-year gained, whereas two-stage screening was associated with an incremental cost-effectiveness ratio of USD 49,733 per life-year gained, as compared with no screening. Screening programs with an initial screening age of 50 years and a biennial screening interval were the most cost-effective. These findings were sensitive to the costs of the screening tools and the specificity of biomarker screening. CONCLUSION: Mass screening using AUS is more cost-effective than two-stage biomarker-ultrasound screening. The most optimal strategy is an initial screening age of 50 years with a 2-year inter-screening interval. PMID:27022228
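
    The headline numbers are incremental cost-effectiveness ratios (ICERs). The function below is the standard definition; the per-person cost and life-year totals are placeholders chosen only so that the two ratios land near the values quoted above.

        def icer(cost, effect, base_cost, base_effect):
            """Incremental cost per life-year gained versus a comparator."""
            return (cost - base_cost) / (effect - base_effect)

        # hypothetical discounted totals per person: (cost in USD, life-years)
        no_screen = (1000.0, 20.0)
        mass_aus  = (3500.0, 20.0628)   # tuned to give ~USD 39,825/LY
        two_stage = (3200.0, 20.0442)   # tuned to give ~USD 49,733/LY

        print(icer(*mass_aus, *no_screen))
        print(icer(*two_stage, *no_screen))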

  27. Simulation and Analyses of Stage Separation Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.

    2005-01-01

    NASA has initiated the development of methodologies, techniques, and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS® solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  28. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.

    2007-01-01

    NASA has initiated the development of methodologies, techniques, and tools needed for the analysis and simulation of stage separation of next-generation reusable launch vehicles. As part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS® solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  29. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    ERIC Educational Resources Information Center

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an agent-based evaluation approach in the context of multi-agent simulation learning systems. Our evaluation model is based on a two-stage assessment approach: (1) a distributed skill evaluation combining agents and fuzzy set theory; and (2) a negotiation-based evaluation of students' performance during a training…

  30. Different target-discrimination times can be followed by the same saccade-initiation timing in different stimulus conditions during visual searches

    PubMed Central

    Tanaka, Tomohiro; Nishida, Satoshi

    2015-01-01

    The neuronal processes that underlie visual searches can be divided into two stages: target discrimination and saccade preparation/generation. This predicts that the length of time of the prediscrimination stage varies according to the search difficulty across different stimulus conditions, whereas the length of the latter postdiscrimination stage is stimulus invariant. However, recent studies have suggested that the length of the postdiscrimination interval changes with different stimulus conditions. To address whether and how the visual stimulus affects determination of the postdiscrimination interval, we recorded single-neuron activity in the lateral intraparietal area (LIP) when monkeys (Macaca fuscata) performed a color-singleton search involving four stimulus conditions that differed regarding luminance (Bright vs. Dim) and target-distractor color similarity (Easy vs. Difficult). We specifically focused on comparing activities between the Bright-Difficult and Dim-Easy conditions, in which the visual stimuli were considerably different, but the mean reaction times were indistinguishable. This allowed us to examine the neuronal activity when the difference in the degree of search speed between different stimulus conditions was minimal. We found that not only prediscrimination but also postdiscrimination intervals varied across stimulus conditions: the postdiscrimination interval was longer in the Dim-Easy condition than in the Bright-Difficult condition. Further analysis revealed that the postdiscrimination interval might vary with stimulus luminance. A computer simulation using an accumulation-to-threshold model suggested that the luminance-related difference in visual response strength at discrimination time could be the cause of different postdiscrimination intervals. PMID:25995344

  31. Aero-thermo-dynamic analysis of the Spaceliner-7.1 vehicle in high altitude flight

    NASA Astrophysics Data System (ADS)

    Zuppardi, Gennaro; Morsa, Luigi; Sippel, Martin; Schwanekamp, Tobias

    2014-12-01

    SpaceLiner, designed by DLR, is a visionary, extremely fast passenger transportation concept. It consists of two stages: a winged booster and a passenger vehicle. After separation of the two stages, the booster makes a controlled re-entry and returns to the launch site. According to the current project, version 7-1 of SpaceLiner (SpaceLiner-7.1), the vehicle should be brought to an altitude of 75 km and then released, undertaking the descent path. Considering that the SpaceLiner-7.1 vehicle could be brought to altitudes higher than 75 km, e.g. 100 km or above, and also for speculative purposes, in this paper the aerodynamic parameters of the SpaceLiner-7.1 vehicle are calculated in the whole transition regime, from continuum low-density to free molecular flow. Computer simulations have been carried out with three codes: two DSMC codes, DS3V in the altitude interval 100-250 km for the evaluation of the global aerodynamic coefficients and DS2V at an altitude of 60 km for the evaluation of the heat flux and pressure distributions along the vehicle nose, and the DLR HOTSOSE code for the evaluation of the global aerodynamic coefficients in continuum, hypersonic flow at an altitude of 44.6 km. The effectiveness of the flaps with a deflection angle of -35 deg. was evaluated over the above-mentioned altitude interval. The vehicle showed longitudinal stability over the whole altitude interval even with no flap deflection. The global bridging formulae proved suitable for the evaluation of the aerodynamic coefficients in the altitude interval 80-100 km, where the computations cannot be performed either by CFD, because of the failure of the classical equations for computing the transport coefficients, or by DSMC, because of the requirement for very high computer resources, both in terms of core storage (a high number of simulated molecules is needed) and very long processing time.

  12. A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning.

    PubMed

    Wang, S; Huang, G H

    2013-03-15

    Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. The concept of an interval-valued fuzzy membership function is introduced to address the complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning to demonstrate its applicability. Results indicate that reasonable solutions can be generated for both binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, enabling decision makers (DMs) to identify the most desirable one based on their perceptions of and attitudes towards the objective-function value and constraints. Copyright © 2013 Elsevier Ltd. All rights reserved.
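
    Stripped of its fuzzy and integer components, the recourse structure at the heart of such models is a plain two-stage stochastic program: commit to a diversion capacity now, then pay scenario-dependent penalties for whatever flood volume the capacity cannot handle. A minimal sketch of the deterministic equivalent, with hypothetical costs, flood scenarios, and probabilities, solved with scipy:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    flows = np.array([10.0, 25.0, 40.0])   # scenario flood volumes (hypothetical)
    probs = np.array([0.6, 0.3, 0.1])      # scenario probabilities
    c_cap, c_pen = 2.0, 6.0                # unit capacity cost / unit excess penalty

    S = len(flows)
    # Decision vector [x, y_1..y_S]: capacity x, undiverted excess y_s per scenario.
    obj = np.concatenate(([c_cap], c_pen * probs))
    # Recourse constraint x + y_s >= flow_s, written as -x - y_s <= -flow_s.
    A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])
    res = linprog(obj, A_ub=A_ub, b_ub=-flows,
                  bounds=[(0, 50)] + [(0, None)] * S)
    print(f"capacity: {res.x[0]:.1f}, expected total cost: {res.fun:.1f}")
    ```

    Raising the penalty or the probability of extreme floods pushes the first-stage capacity toward the larger scenarios, which is the trade-off the full TMFP-IMF model explores under fuzzy and interval uncertainty.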

  13. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Dang, Haizheng

    2017-03-01

    The two-stage Stirling-type pulse tube cryocooler (SPTC) has the advantage of simultaneously providing cooling power at two different temperatures, and the ability to distribute this cooling capacity between the stages is significant for its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric circuit analogy, taking real gas effects into account, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined jointly by the cold finger cooling efficiency and the PV power into each stage. Design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC was developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. The consistency between the simulated and the experimental results is observed, and the theoretical investigations are thus experimentally verified.
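
    The reported inverse relationship between PV power and acoustic impedance follows directly from the circuit analogy: for two stages driven in parallel by a common pressure-wave amplitude, the acoustic power into a purely resistive stage is W = p²/(2Z). A minimal numerical illustration with hypothetical impedances (the resistive-impedance assumption and the values are mine, not the paper's):

    ```python
    p_amp = 1.0          # normalized pressure-wave amplitude at the junction
    Z1, Z2 = 4.0, 12.0   # hypothetical acoustic impedances of the two stages

    # Acoustic (PV) power into each stage for in-phase pressure and flow.
    W1, W2 = 0.5 * p_amp**2 / Z1, 0.5 * p_amp**2 / Z2
    print(f"PV power ratio W1/W2 = {W1 / W2:.1f} (inverse of Z1/Z2 = {Z1 / Z2:.2f})")
    ```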

  14. Plasma volume losses during simulated weightlessness in women

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drew, H.; Fortney, S.; La France, N.

    Six healthy women not using oral contraceptives underwent two 11-day intervals of complete bedrest (BR), with the BR periods separated by 4 weeks of ambulatory control. Change in plasma volume (PV) was monitored during BR to test the hypothesis that these women would show a smaller decrease in PV than that reported in similarly stressed men, due to the water-retaining effects of the female hormones. Bedrest periods were timed to coincide with opposing stages of the menstrual cycle in each woman. The menstrual cycle was divided into 4 separate stages: early follicular, ovulatory, early luteal, and late luteal phases. The percent decrease of PV was consistent for each woman who began BR while in stage 1, 3 or 4 of the menstrual cycle. However, the women who began in stage 2 showed a transient attenuation in PV loss. Overall, PV changes seen in women during BR were similar to those reported for men. The water-retaining effects of menstrual hormones were evident only during the high-estrogen ovulatory stage. The authors conclude that the protective effects of menstrual hormones on PV losses during simulated weightless conditions appear to be only small and transient.

  15. The impact of secondary-task type on the sensitivity of reaction-time based measurement of cognitive load for novices learning surgical skills using simulation.

    PubMed

    Rojas, David; Haji, Faizal; Shewaga, Rob; Kapralos, Bill; Dubrowski, Adam

    2014-01-01

    Interest in the measurement of cognitive load (CL) in simulation-based education has grown in recent years. In this paper we present two pilot experiments comparing the sensitivity of two reaction-time-based secondary-task measures of CL. The results suggest that simple reaction-time measures are sensitive enough to detect changes in the CL experienced by novice learners in the initial stages of simulation-based surgical skills training.

  16. Characterisation of two-stage ignition in diesel engine-relevant thermochemical conditions using direct numerical simulation

    DOE PAGES

    Krisman, Alex; Hawkes, Evatt R.; Talei, Mohsen; ...

    2016-08-30

    With the goal of providing a more detailed fundamental understanding of ignition processes in diesel engines, this study reports analysis of a direct numerical simulation (DNS) database. In the DNS, a pseudo-turbulent mixing layer of dimethyl ether (DME) at 400 K and air at 900 K is simulated at a pressure of 40 atmospheres. At these conditions, DME exhibits two-stage ignition and resides within the negative temperature coefficient (NTC) regime of ignition delay times, similar to diesel fuel. The analysis reveals a complex ignition process with several novel features. Autoignition occurs as a distributed, two-stage event. The high-temperature stage of ignition establishes edge flames that have a hybrid premixed/autoignition flame structure similar to that previously observed for lifted laminar flames at similar thermochemical conditions. Finally, a combustion mode analysis based on key radical species illustrates the multi-stage and multi-mode nature of the ignition process and highlights the substantial modelling challenge presented by diesel combustion.

  17. Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU

    PubMed Central

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2018-01-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637

  18. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2017-04-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  19. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU.

    PubMed

    Tian, Zhen; Jiang, Steve B; Jia, Xun

    2017-04-21

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  20. Translating Climate Projections for Bridge Engineering

    NASA Astrophysics Data System (ADS)

    Anderson, C.; Takle, E. S.; Krajewski, W.; Mantilla, R.; Quintero, F.

    2015-12-01

    A bridge vulnerability pilot study was conducted by the Iowa Department of Transportation (IADOT) as one of nineteen pilots supported by the Federal Highway Administration Climate Change Resilience Pilots. Our pilot study team consisted of the IADOT senior bridge engineer, who is the preliminary design section leader, as well as climate and hydrological scientists. The pilot project culminated in a visual graphic designed by the bridge engineer (Figure 1) and an evaluation framework for bridge engineering design. The framework has four stages. The first two stages evaluate the spatial and temporal resolution needed in climate projection data for it to be suitable as input to a hydrology model. The framework separates streamflow simulation error into errors from the streamflow model and errors from the coarseness of the input weather data series. In the final two stages, the framework evaluates the credibility of climate-projection streamflow simulations. Using an empirically downscaled data set, projection streamflow is generated. Error is computed in two time frames: the training period of the empirical downscaling methodology, and an out-of-sample period. Large errors in projection streamflow during the training period would indicate low accuracy and, therefore, low credibility. Large errors in streamflow during the out-of-sample period would mean the approach may not capture some causes of change, and the climate projections would therefore have limited credibility for setting expectations of change. We address uncertainty with confidence intervals on quantiles of streamflow discharge. The results show the 95% confidence intervals have significant overlap. Nevertheless, the use of confidence intervals enabled engineering judgement. In our discussions, we noted the consistency in the direction of change across basins, even though the flood mechanism differed among them, and that the upper bound of the bridge-lifetime-period quantiles exceeded that of the historical period. This suggested the change was not isolated and that it systemically altered the risk profile. One suggestion for incorporating engineering judgement was to consider degrees of vulnerability using the median discharge of the historical period and the upper-bound discharge for the bridge lifetime period.

  1. Simulation-based power calculations for planning a two-stage individual participant data meta-analysis.

    PubMed

    Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D

    2018-05-18

    Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
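
    The four-step recipe above is easy to prototype. The sketch below simulates one hypothetical configuration, loosely inspired by the worked example (14 trials of roughly 85 patients, a 1 kg reduction in weight gain per 10-unit BMI increase): stage 1 fits each trial separately by least squares, stage 2 pools the interaction estimates by fixed-effect inverse-variance weighting, and power is the proportion of significant pooled estimates. All parameter values are assumptions for illustration, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def trial_interaction(n, beta_int, sd=4.0):
        """Stage 1: simulate one randomised trial and estimate the
        treatment-by-BMI interaction by OLS, returning (estimate, SE)."""
        bmi = rng.normal(28, 4, n)
        bmi = bmi - bmi.mean()                     # centred covariate
        treat = rng.integers(0, 2, n)
        y = 10.0 - 1.0 * treat + beta_int * treat * bmi + rng.normal(0, sd, n)
        X = np.column_stack([np.ones(n), treat, bmi, treat * bmi])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        cov = rss[0] / (n - 4) * np.linalg.inv(X.T @ X)
        return beta[3], np.sqrt(cov[3, 3])

    def ipd_power(n_trials=14, n_per_trial=85, beta_int=-0.1, n_sims=500):
        hits = 0
        for _ in range(n_sims):
            est, se = zip(*(trial_interaction(n_per_trial, beta_int)
                            for _ in range(n_trials)))
            w = 1 / np.asarray(se) ** 2            # stage 2: fixed-effect pooling
            pooled = np.sum(w * np.asarray(est)) / w.sum()
            hits += abs(pooled) * np.sqrt(w.sum()) > 1.96
        return hits / n_sims

    print("estimated power:", ipd_power())
    ```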

  2. Modeling of a Sequential Two-Stage Combustor

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.

    2005-01-01

    A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.

  3. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence, either of the Nelder-Mead algorithm or of the Newton-Raphson calculation of maximum-likelihood estimates. The methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the likelihood-based limits were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
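
    A profile-likelihood interval of this kind can be sketched compactly for the Gumbel case: reparameterise so that the T-year quantile q_T = μ + β·y_T (with reduced variate y_T = −ln(−ln(1 − 1/T))) is the parameter of interest, maximise the likelihood over β for each fixed q_T, and invert the likelihood-ratio statistic. The synthetic record and optimizer bounds below are assumptions for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar, brentq
    from scipy.stats import gumbel_r, chi2

    rng = np.random.default_rng(2)
    x = gumbel_r.rvs(loc=100, scale=30, size=40, random_state=rng)  # annual maxima

    T = 100
    yT = -np.log(-np.log(1 - 1 / T))        # q_T = mu + beta * yT

    def negll(mu, beta):
        return -gumbel_r.logpdf(x, loc=mu, scale=beta).sum()

    def profile_negll(q):
        # With q fixed, mu = q - beta*yT; maximise the likelihood over beta only.
        return minimize_scalar(lambda b: negll(q - b * yT, b),
                               bounds=(1e-3, 200), method="bounded").fun

    mu_hat, beta_hat = gumbel_r.fit(x)
    q_hat = mu_hat + beta_hat * yT
    cut = chi2.ppf(0.95, df=1) / 2          # likelihood-ratio cutoff
    g = lambda q: profile_negll(q) - negll(mu_hat, beta_hat) - cut

    lo = brentq(g, q_hat - 200, q_hat - 1e-6)   # g changes sign at each endpoint
    hi = brentq(g, q_hat + 1e-6, q_hat + 400)
    print(f"100-yr flood MLE {q_hat:.1f}, 95% profile CI ({lo:.1f}, {hi:.1f})")
    ```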

  4. Ultrasound-based follow-up does not increase survival in early-stage melanoma patients: A comparative cohort study.

    PubMed

    Ribero, S; Podlipnik, S; Osella-Abate, S; Sportoletti-Baduel, E; Manubens, E; Barreiro, A; Caliendo, V; Chavez-Bourgeois, M; Carrera, C; Cassoni, P; Malvehy, J; Fierro, M T; Puig, S

    2017-11-01

    Different protocols have been used to follow up melanoma patients in stage I-II. However, there is no consensus on the complementary tests that should be requested or the appropriate intervals between visits. Our aim is to compare an ultrasound-based follow-up with a clinical follow-up. Analysis of two prospectively collected cohorts of melanoma patients in stage IB-IIA from two tertiary referral centres in Barcelona (clinical-based follow-up [C-FU]) and Turin (ultrasound-based follow-up [US-FU]). Kaplan-Meier curves were used to evaluate distant metastases-free survival (DMFS), disease-free interval (DFI), nodal metastases-free survival (NMFS) and melanoma-specific survival (MSS). A total of 1149 patients in the American Joint Committee on Cancer stage IB and IIA were included in this study, of which 554 subjects (48%) were enrolled for a C-FU, and 595 patients (52%) received a protocolised US-FU. The median age was 53.8 years (interquartile range [IQR] 41.5-65.2) with a median follow-up time of 4.14 years (IQR 1.2-7.6). During follow-up, 69 patients (12.5%) in C-FU and 72 patients (12.1%) in US-FU developed disease progression. Median time to relapse for the first metastatic site was 2.11 years (IQR 1.14-4.04) for skin metastases, 1.32 (IQR 0.57-3.29) for lymph node metastases and 2.84 (IQR 1.32-4.60) for distant metastases. The pattern of progression and the total proportion of metastases were not significantly different (P = .44) in the two centres. No difference in DFI, DMFS, NMFS and MSS was found between the two cohorts. Ultrasound-based follow-up does not increase the survival of melanoma patients in stage IB-IIA. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  6. Risk management for sulfur dioxide abatement under multiple uncertainties

    NASA Astrophysics Data System (ADS)

    Dai, C.; Sun, W.; Tan, Q.; Liu, Y.; Lu, W. T.; Guo, H. C.

    2016-03-01

    In this study, interval-parameter programming, two-stage stochastic programming (TSP), and conditional value-at-risk (CVaR) were incorporated into a general optimization framework, leading to an interval-parameter CVaR-based two-stage programming (ICTP) method. The ICTP method had several advantages: (i) its objective function simultaneously took expected cost and risk cost into consideration, and used discrete random variables and discrete intervals to reflect uncertain properties; (ii) it quantitatively evaluated the right tail of the distributions of the random variables, allowing better estimation of the risk of violating environmental standards; (iii) it was useful for helping decision makers analyze the trade-offs between cost and risk; and (iv) it was effective in penalizing second-stage costs, as well as in capturing the notion of risk in stochastic programming. The developed model was applied to sulfur dioxide abatement in an air quality management system. The results indicated that the ICTP method could be used for generating a series of air quality management schemes under different risk-aversion levels, for identifying desired air quality management strategies for decision makers, and for striking a proper balance between system economy and environmental quality.
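
    The CVaR ingredient is simple to state on its own: at level α it is the expected loss in the worst (1 − α) tail, always at least as large as the α-quantile (VaR). A minimal empirical sketch, with a hypothetical penalty-cost sample standing in for the second-stage costs:

    ```python
    import numpy as np

    def var_cvar(losses, alpha=0.95):
        """Empirical value-at-risk and conditional value-at-risk."""
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)          # alpha-quantile of the loss
        cvar = losses[losses >= var].mean()       # mean loss beyond the quantile
        return var, cvar

    rng = np.random.default_rng(3)
    penalty = rng.lognormal(mean=2.0, sigma=0.8, size=10_000)  # hypothetical costs
    v, c = var_cvar(penalty)
    print(f"VaR95 = {v:.1f}, CVaR95 = {c:.1f}")
    ```

    Embedding the CVaR term in the objective is what lets the ICTP method trade expected cost against the right tail highlighted in item (ii).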

  7. Development of Constraint Force Equation Methodology for Application to Multi-Body Dynamics Including Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.

    2016-01-01

    The objective of this report is to develop and implement a physics-based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS® and Autolev, two industry-standard benchmark codes for multi-body dynamic analysis and simulation. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in the Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and the Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation of its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, the POST2/CFE software can be used for performing conceptual-level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.

  8. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
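
    Why naive estimation fails after interim selection is easy to demonstrate: conditioning on the subpopulation "winning" at interim makes its stage-1 estimate upward biased, and pooling with unbiased stage-2 data only dilutes, not removes, the bias. A toy simulation with normal outcomes, equal true effects, and hypothetical sample sizes (this illustrates the bias, not the authors' corrected estimators):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def naive_trial(theta_full=0.2, theta_sub=0.2, n1=100, n2=100, sd=1.0):
        """Select full population or subpopulation at interim (whichever looks
        better), then naively pool stage-1 and stage-2 estimates.
        Returns the selected population and the estimation error."""
        s1_full = rng.normal(theta_full, sd / np.sqrt(n1))
        s1_sub = rng.normal(theta_sub, sd / np.sqrt(n1))
        if s1_sub > s1_full:                      # enrich to the subpopulation
            s2 = rng.normal(theta_sub, sd / np.sqrt(n2))
            return "sub", (n1 * s1_sub + n2 * s2) / (n1 + n2) - theta_sub
        s2 = rng.normal(theta_full, sd / np.sqrt(n2))
        return "full", (n1 * s1_full + n2 * s2) / (n1 + n2) - theta_full

    runs = [naive_trial() for _ in range(20_000)]
    for pop in ("full", "sub"):
        errs = [e for p, e in runs if p == pop]
        print(f"selected {pop}: mean bias of naive estimate = {np.mean(errs):+.4f}")
    ```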

  9. Development of a spatially-distributed hydroecological model to simulate cottonwood seedling recruitment along rivers.

    PubMed

    Benjankar, Rohan; Burke, Michael; Yager, Elowyn; Tonina, Daniele; Egger, Gregory; Rood, Stewart B; Merz, Norm

    2014-12-01

    Dam operations have altered flood and flow patterns and prevented successful cottonwood seedling recruitment along many rivers. To guide reservoir flow releases to meet cottonwood recruitment needs, we developed a spatially-distributed, GIS-based model that analyzes the hydrophysical requirements for cottonwood recruitment. These requirements are indicated by five physical parameters: (1) annual peak flow timing relative to the interval of seed dispersal, (2) shear stress, which characterizes disturbance, (3) local stage recession after seedling recruitment, (4) recruitment elevation above base flow stage, and (5) duration of winter flooding, which may contribute to seedling mortality. The model categorizes the potential for cottonwood recruitment into four classes and assigns a suitability value to each spatial location. The model accuracy was estimated with an error matrix analysis comparing simulated and field-observed recruitment success. The overall accuracies of this Spatially-Distributed Cottonwood Recruitment model were 47% for a braided reach and 68% for a meander reach along the Kootenai River in Idaho, USA. Model accuracies increased to 64% and 72%, respectively, when fewer favorability classes were considered. The model predicted areas of similarly favorable recruitment potential for 1997 and 2006, two recent years with successful cottonwood recruitment. This model should provide a useful tool to quantify the impacts of human activities and climatic variability on cottonwood recruitment, and to prescribe instream flow regimes for the conservation and restoration of riparian woodlands. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Designing efficient nitrous oxide sampling strategies in agroecosystems using simulation models

    NASA Astrophysics Data System (ADS)

    Saha, Debasish; Kemanian, Armen R.; Rau, Benjamin M.; Adler, Paul R.; Montes, Felipe

    2017-04-01

    Annual cumulative soil nitrous oxide (N2O) emissions calculated from discrete chamber-based flux measurements have unknown uncertainty. We used outputs from simulations obtained with an agroecosystem model to design sampling strategies that yield accurate cumulative N2O flux estimates with a known uncertainty level. Daily soil N2O fluxes were simulated for Ames, IA (corn-soybean rotation), College Station, TX (corn-vetch rotation), Fort Collins, CO (irrigated corn), and Pullman, WA (winter wheat), representing diverse agro-ecoregions of the United States. Fertilization source, rate, and timing were site-specific. These simulated fluxes served as surrogates for daily measurements in the analysis. We "sampled" the fluxes using either a fixed-interval (1-32 days) or a rule-based (decision-tree-based) sampling method. Two types of decision trees were built: a high-input tree (HI) that included soil inorganic nitrogen (SIN) as a predictor variable, and a low-input tree (LI) that excluded SIN. Other predictor variables were identified with Random Forest. The decision trees were inverted to serve as rules for sampling a representative number of members from each terminal node. The uncertainty of the annual N2O flux estimate increased with the length of the fixed interval. A 4- and 8-day fixed sampling interval was required at College Station and Ames, respectively, to yield ±20% accuracy in the flux estimate; a 12-day interval gave the same accuracy at Fort Collins and Pullman. Both the HI and the LI rule-based methods provided the same accuracy as the fixed-interval method with up to a 60% reduction in sampling events, particularly at locations with greater temporal flux variability. For instance, at Ames, the HI rule-based and the fixed-interval methods required 16 and 91 sampling events, respectively, to achieve the same absolute bias of 0.2 kg N ha-1 yr-1 in the estimated cumulative N2O flux. These results suggest that using simulation models along with decision trees can reduce the cost and improve the accuracy of estimates of cumulative N2O fluxes obtained with the discrete chamber-based method.
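
    The core comparison is easy to reproduce on synthetic data: a flux series with episodic pulses is integrated exactly, then re-estimated from sparser fixed-interval visits. With hypothetical flux magnitudes and pulse frequencies (not the paper's model output), longer intervals increasingly miss pulses and bias the cumulative estimate:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    days = np.arange(365)
    base = 0.002 + 0.001 * np.sin(2 * np.pi * days / 365)   # smooth seasonal flux
    pulses = np.zeros(365)                                  # episodic emission events
    pulses[rng.choice(365, 15, replace=False)] = rng.gamma(2.0, 0.05, 15)
    flux = base + pulses                                    # kg N/ha/day (synthetic)
    true_total = flux.sum()

    for interval in (1, 4, 8, 16, 32):
        visits = days[::interval]
        est = np.interp(days, visits, flux[visits]).sum()   # interpolate between visits
        print(f"every {interval:2d} d: {est:6.2f} vs true {true_total:.2f} "
              f"({100 * (est - true_total) / true_total:+.0f}%)")
    ```

    A rule-based scheme in the spirit of the paper would instead concentrate visits on days a fitted decision tree flags as pulse-prone (e.g., after fertilization or rain), recovering accuracy with far fewer sampling events.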

  11. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism.

    PubMed

    Yang, B; Guo, X; Wang, Q H; Lu, C F; Hu, D

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from the conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimension of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of resonance frequency. The sensor fabricated by the standard deep dry silicon on a glass process has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for the hair height of 9 mm and increases by 2.42 times as hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in the micro-autonomous system and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  12. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    NASA Astrophysics Data System (ADS)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from the conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimension of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of resonance frequency. The sensor fabricated by the standard deep dry silicon on a glass process has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for the hair height of 9 mm and increases by 2.42 times as hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in the micro-autonomous system and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  13. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' in the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the Vale and Maurelli (1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals for the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
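
    The default interval the paper benchmarks against is the classical Fisher z' construction, whose coverage is nominal only under bivariate normality. A minimal sketch of that interval plus a Monte Carlo coverage check with one skewed (lognormal) margin; the population correlation ρ/√(e−1) used below follows from Cov(eˣ, Y) = ρ·e^(1/2) for standard bivariate normals:

    ```python
    import numpy as np
    from scipy import stats

    def fisher_z_ci(r, n, conf=0.95):
        """Default Fisher z' interval for a Pearson correlation."""
        z, se = np.arctanh(r), 1 / np.sqrt(n - 3)
        zc = stats.norm.ppf(0.5 + conf / 2)
        return np.tanh(z - zc * se), np.tanh(z + zc * se)

    rng = np.random.default_rng(6)
    rho, n, sims, hits = 0.5, 30, 5_000, 0
    pop_r = rho / np.sqrt(np.e - 1)       # corr(exp(X), Y), X,Y standard normal
    for _ in range(sims):
        xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
        r = np.corrcoef(np.exp(xy[:, 0]), xy[:, 1])[0, 1]
        lo, hi = fisher_z_ci(r, n)
        hits += lo <= pop_r <= hi
    print(f"empirical coverage of the nominal 95% interval: {hits / sims:.3f}")
    ```

    The adjusted methods in the paper replace the fixed variance 1/(n − 3) with one that depends on the sample skewness and kurtosis, which is what makes them usable in meta-analysis when only summary moments are reported.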

  14. An Evaluation of One- and Three-Parameter Logistic Tailored Testing Procedures for Use with Small Item Pools.

    ERIC Educational Resources Information Center

    McKinley, Robert L.; Reckase, Mark D.

    A two-stage study was conducted to compare the ability estimates yielded by tailored testing procedures based on the one-parameter logistic (1PL) and three-parameter logistic (3PL) models. The first stage of the study employed real data, while the second stage employed simulated data. In the first stage, response data for 3,000 examinees were…

  15. Optimal debulking targets in women with advanced stage ovarian cancer: a retrospective study of immediate versus interval debulking surgery.

    PubMed

    Altman, Alon D; Nelson, Gregg; Chu, Pamela; Nation, Jill; Ghatage, Prafull

    2012-06-01

    The objective of this study was to examine both overall and disease-free survival of patients with advanced stage ovarian cancer after immediate or interval debulking surgery based on residual disease. We performed a retrospective chart review at the Tom Baker Cancer Centre in Calgary, Alberta of patients with pathologically confirmed stage III or IV ovarian cancer, fallopian tube cancer, or primary peritoneal cancer between 2003 and 2007. We collected data on the dates of diagnosis, recurrence, and death; cancer stage and grade, patients' age, surgery performed, and residual disease. One hundred ninety-two patients were included in the final analysis. The optimal debulking rate with immediate surgery was 64.8%, and with interval surgery it was 85.9%. There were improved overall and disease-free survival rates for optimally debulked disease (< 1 cm) with both immediate and interval surgery (P < 0.001) compared to suboptimally debulked disease. Overall survival rates for optimally debulked disease were not significantly different in patients having immediate and interval surgery (P = 0.25). In the immediate surgery group, patients with microscopic residual disease had better disease-free survival (P = 0.015) and overall survival (P = 0.005) than patients with < 1 cm residual disease. In patients who had interval surgery, those who had microscopic residual disease had more improved disease-free survival than those with < 1 cm disease (P = 0.05), but they did not have more improved overall survival (P = 0.42). Patients with microscopic residual disease who had immediate surgery had a significantly better overall survival rate than those who had interval surgery (P = 0.034). In women with advanced stage ovarian cancer, the goal of surgery should be resection of disease to microscopic residual at the initial procedure. This results in improved overall survival than lesser degrees of resection. Further studies are required to determine optimal surgical management.

  16. Development of the Patient-specific Cardiovascular Modeling System Using Immersed Boundary Technique

    NASA Astrophysics Data System (ADS)

    Tay, Wee-Beng; Lin, Liang-Yu; Tseng, Wen-Yih; Tseng, Yu-Heng

    2010-05-01

    A computational fluid dynamics (CFD) based, patient-specific cardiovascular modeling system is under development. The system can identify possible diseased conditions and facilitate physicians' diagnosis at an early stage through hybrid CFD simulation and time-resolved magnetic resonance imaging (MRI). The CFD simulation is initially based on the three-dimensional heart model developed by McQueen and Peskin, which can simultaneously compute fluid motions and elastic boundary motions using the immersed boundary method. We extend and improve the three-dimensional heart model for clinical application by including patient-specific hemodynamic information. The flow features in the ventricles and their responses are investigated under different inflow and outflow conditions during the diastole and systole phases based on the quasi-realistic heart model, which takes advantage of the observed flow scenarios. Our results indicate distinct differences between the two groups of participants, including the vortex formation process in the left ventricle (LV), as well as the flow rate distributions at different identified sources such as the aorta, vena cava and pulmonary veins/artery. We further identify some key parameters which may affect vortex formation in the LV. It is thus hypothesized that disease-related dysfunctions in the intervals before complete heart failure can be observed in the dynamics of transmitral blood flow during early LV diastole.

  17. Quantification of gross tumour volume changes between simulation and first day of radiotherapy for patients with locally advanced malignancies of the lung and head/neck.

    PubMed

    Kishan, Amar U; Cui, Jing; Wang, Pin-Chieh; Daly, Megan E; Purdy, James A; Chen, Allen M

    2014-10-01

    To quantify changes in gross tumour volume (GTV) between simulation and initiation of radiotherapy in patients with locally advanced malignancies of the lung and head/neck. Initial cone beam computed tomography (CT) scans from 12 patients with lung cancer and 12 with head/neck cancer (head and neck squamous cell carcinoma (HNSCC)) treated with intensity-modulated radiotherapy with image guidance were rigidly registered to the simulation CT scans. The GTV was demarcated on both scans. The relationship between percent GTV change and variables including time interval between simulation and start, tumour (T) stage, and absolute weight change was assessed. For lung cancer patients, the GTV increased a median of 35.06% (range, -16.63% to 229.97%) over a median interval of 13 days (range, 7-43), while for HNSCC patients, the median GTV increase was 16.04% (range, -8.03% to 47.41%) over 13 days (range, 7-40). These observed changes are statistically significant. The magnitude of this change was inversely associated with the size of the tumour on the simulation scan for lung cancer patients (P < 0.05). However, the observed changes in GTV did not correlate with the duration of the interval for either disease site. Similarly, T stage, absolute weight change and histologic type (the latter for lung cancer cases) did not correlate with degree of GTV change (P > 0.1). While the observed changes in GTV were moderate from the time of simulation to start of radiotherapy, these findings underscore the importance of image guidance for target localisation and verification, particularly for smaller tumours. Minimising the delay between simulation and treatment initiation may also be beneficial. © 2014 The Royal Australian and New Zealand College of Radiologists.

  18. The black soldier-fly, Hermetia illucens (Diptera, Stratiomyidae), used to estimate the postmortem interval in a case in Amapá State, Brazil.

    PubMed

    Pujol-Luz, José R; Francez, Pablo Abdon da Costa; Ururahy-Rodrigues, Alexandre; Constantino, Reginaldo

    2008-03-01

    The black soldier-fly (Hermetia illucens) is a generalist detritivore which is commonly present in corpses in later stages of decomposition and may be useful in forensic entomology. This paper describes the estimation of the postmortem interval (PMI) based on the life cycle of the black soldier-fly in a case in northern Brazil. A male child was abducted from his home and 42 days later his corpse was found in an advanced stage of decay. Two black soldier-fly larvae were found associated with the body. The larvae emerged as adults after 25-26 days. Considering the development cycle of H. illucens, the date of oviposition was estimated as 24-25 days after abduction. Since H. illucens usually (but not always) colonizes corpses in more advanced stages of decay, this estimate is consistent with the hypothesis that the child was killed immediately after abduction.

  19. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

    In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal, we use a cellular automata-based model to simulate the movement of the vector. In the simulation, each individual vector can move to other grid cells via a random walk. Our model is also able to represent an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness. Based on the simulations, we conclude that the model behaves correctly. However, it still needs to be improved with realistic parameters so that it matches real data.
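
    The two-stage loop described here (demography, then cellular-automaton dispersal) fits in a few lines. The survival and birth rates, grid size, and neighbourhood below are toy assumptions, not the authors' calibrated parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    GRID, STEPS = 50, 100
    pos = rng.integers(0, GRID, size=(500, 2))      # initial adult positions
    moves = np.array([[0, 1], [0, -1], [1, 0], [-1, 0], [0, 0]])

    for _ in range(STEPS):
        # Stage 1, demography: daily survival, then reproduction in place.
        pos = pos[rng.random(len(pos)) < 0.95]
        births = pos[rng.random(len(pos)) < 0.06]
        pos = np.vstack([pos, births])
        # Stage 2, dispersal: one random-walk step per adult, clipped at edges.
        pos = np.clip(pos + moves[rng.integers(0, 5, len(pos))], 0, GRID - 1)

    density, _, _ = np.histogram2d(pos[:, 0], pos[:, 1], bins=GRID)
    print(f"population {len(pos)}, occupied cells {np.count_nonzero(density)}")
    ```

    A per-cell immunity factor of the kind the model supports could be added as a GRID×GRID array that scales the local survival or birth probability.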

  20. The impact of individual-level heterogeneity on estimated infectious disease burden: a simulation study.

    PubMed

    McDonald, Scott A; Devleesschauwer, Brecht; Wallinga, Jacco

    2016-12-08

    Disease burden is not evenly distributed within a population; this uneven distribution can be due to individual heterogeneity in progression rates between disease stages. Composite measures of disease burden that are based on disease progression models, such as the disability-adjusted life year (DALY), are widely used to quantify the current and future burden of infectious diseases. Our goal was to investigate to what extent ignoring the presence of heterogeneity could bias DALY computation. Simulations using individual-based models for hypothetical infectious diseases with short and long natural histories were run assuming either "population-averaged" progression probabilities between disease stages, or progression probabilities that were influenced by an a priori defined individual-level frailty (i.e., heterogeneity in disease risk) distribution, and DALYs were calculated. Under the assumption of heterogeneity in transition rates and increasing frailty with age, the short natural history disease model predicted 14% fewer DALYs compared with the homogenous population assumption. Simulations of a long natural history disease indicated that assuming homogeneity in transition rates when heterogeneity was present could overestimate total DALYs, in the present case by 4% (95% quantile interval: 1-8%). The consequences of ignoring population heterogeneity should be considered when defining transition parameters for natural history models and when interpreting the resulting disease burden estimates.
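
    The mechanism is easy to see in a two-transition toy model: if an individual frailty multiplies both progression probabilities, the expected number reaching the final stage scales with E[f²] > 1 even when E[f] = 1, so homogeneous and heterogeneous models disagree. The rates, frailty distribution, and DALY weights below are arbitrary illustration values; the direction and size of the discrepancy depend on the disease structure, as the paper's two case studies show:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    N, p1, p2 = 200_000, 0.10, 0.10     # infection -> severe -> death probabilities
    dw, dur, yll = 0.2, 1.0, 30.0       # disability weight, duration (y), YLL per death

    def total_dalys(frailty):
        """Individual-based run: frailty multiplies both transition probabilities."""
        severe = rng.random(N) < np.clip(p1 * frailty, 0, 1)
        dead = severe & (rng.random(N) < np.clip(p2 * frailty, 0, 1))
        return severe.sum() * dw * dur + dead.sum() * yll

    homog = total_dalys(np.ones(N))                   # population-averaged rates
    heterog = total_dalys(rng.gamma(2.0, 0.5, N))     # frailty: mean 1, variance 0.5
    print(f"homogeneous: {homog:,.0f} DALYs; with frailty: {heterog:,.0f} DALYs")
    ```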

  1. Follow-up of early stage melanoma: specialist clinician perspectives on the functions of follow-up and implications for extending follow-up intervals.

    PubMed

    Rychetnik, Lucie; McCaffery, Kirsten; Morton, Rachael L; Thompson, John F; Menzies, Scott W; Irwig, Les

    2013-04-01

    There is limited evidence on the relative effectiveness of different follow-up schedules for patients with AJCC stage I or II melanoma, but less frequent follow-up than is currently recommended has been proposed. To describe melanoma clinicians' perspectives on the functions of follow-up, factors that influence follow-up intervals, and important considerations for extending intervals. Qualitative interviews with 16 clinicians (surgical oncologists, dermatologists, melanoma unit physicians) who conduct follow-up at two of Australia's largest specialist centers. Follow-up is conducted for early detection of recurrences or new primary melanomas, to manage patient anxiety, support patient self-care, and as part of shared care. Recommended intervals are based on guidelines but account for each patient's clinical risk profile, level of anxiety, patient education requirements, capacity to engage in skin self-examination, and how the clinician prefers to manage any suspicious lesions. To revise guidelines and implement change it is important to understand the rationale underpinning existing practice. Extended follow-up intervals for early stage melanoma are more likely to be adopted after the first year when patients are less anxious and sufficiently prepared to conduct self-examination. Clinicians may retain existing schedules for highly anxious patients or those unable to examine themselves. Copyright © 2012 Wiley Periodicals, Inc.

  2. Laser-driven three-stage heavy-ion acceleration from relativistic laser-plasma interaction.

    PubMed

    Wang, H Y; Lin, C; Liu, B; Sheng, Z M; Lu, H Y; Ma, W J; Bin, J H; Schreiber, J; He, X T; Chen, J E; Zepf, M; Yan, X Q

    2014-01-01

    A three-stage heavy ion acceleration scheme for generation of high-energy quasimonoenergetic heavy ion beams is investigated using two-dimensional particle-in-cell simulation and analytical modeling. The scheme is based on the interaction of an intense linearly polarized laser pulse with a compound two-layer target (a front heavy ion layer + a second light ion layer). We identify that, under appropriate conditions, the heavy ions preaccelerated by a two-stage acceleration process in the front layer can be injected into the light ion shock wave in the second layer for a further third-stage acceleration. These injected heavy ions are not influenced by the screening effect from the light ions, and an isolated high-energy heavy ion beam with relatively low energy spread is thus formed. Two-dimensional particle-in-cell simulations show that ∼100 MeV/u quasimonoenergetic Fe²⁴⁺ beams can be obtained with linearly polarized laser pulses at intensities of 1.1×10²¹ W/cm².

  3. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    PubMed Central

    Calcagno, Cristina; Coppo, Mario

    2014-01-01

    This paper focuses on enabling methodologies for the design of a fully parallel, online, interactive tool to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing the behavior of biological systems, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed. PMID:25050327
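
    FastFlow itself is a C++ framework, but the pipelined simulation-analysis pattern the paper exploits can be sketched in a few lines of Python: one thread streams trajectory endpoints into a bounded queue while another consumes them, maintaining running statistics (Welford's online algorithm) so partial results are available long before the last trajectory finishes. Everything below is an illustrative analogue, not FastFlow code:

    ```python
    import threading, queue, random

    q = queue.Queue(maxsize=64)
    N_TRAJ, SENTINEL = 1000, None

    def simulate():
        """Simulation stage: stream one summary value per stochastic trajectory."""
        rng = random.Random(9)
        for _ in range(N_TRAJ):
            q.put(sum(rng.gauss(0, 1) for _ in range(100)))  # stand-in trajectory
        q.put(SENTINEL)

    def analyse():
        """Analysis stage: online mean/variance over the stream (Welford)."""
        n, mean, m2 = 0, 0.0, 0.0
        while (x := q.get()) is not SENTINEL:
            n += 1
            d = x - mean
            mean += d / n
            m2 += d * (x - mean)
            if n % 250 == 0:
                print(f"after {n} trajectories: mean={mean:.3f}, var={m2/(n-1):.3f}")

    stages = [threading.Thread(target=simulate), threading.Thread(target=analyse)]
    for s in stages: s.start()
    for s in stages: s.join()
    ```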

  4. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    PubMed

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    This paper focuses on enabling methodologies for the design of a fully parallel, online, interactive tool to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, which turn into big data that should be analysed by statistical and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated for performance and for the effectiveness of the online analysis in capturing the behavior of biological systems, on a multicore platform and on representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed.

  5. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China, and the current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on an interval number is uniform, and the neglect of the value of decision makers' (DMs') shared opinions in evaluating criteria information. Secondly, differences in DMs' utility functions have received little attention. An innovative method is proposed in this article to overcome these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify interval numbers with probability distributions. Thirdly, a two-stage method integrating the weighted operator with the stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China demonstrates the effectiveness of the method.
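
    The key quantity, a stochastic dominance degree between two interval numbers carrying probability distributions, can be approximated by Monte Carlo: draw from each interval under its assumed distribution and estimate P(A > B). The intervals and distributions below are hypothetical, and the paper's analytic dominance formula may differ:

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def dominance_degree(a, b, dist_a, dist_b, n=100_000):
        """Monte Carlo estimate of P(A > B) for interval numbers endowed with
        (possibly non-uniform) distributions over [lo, hi]."""
        xa = a[0] + dist_a(rng, n) * (a[1] - a[0])
        xb = b[0] + dist_b(rng, n) * (b[1] - b[0])
        return float(np.mean(xa > xb))

    uniform = lambda rng, n: rng.random(n)
    upper_skewed = lambda rng, n: rng.beta(5, 2, n)   # mass near the upper endpoint

    A, B = (3.0, 7.0), (4.0, 6.0)
    print("uniform on both:     ", dominance_degree(A, B, uniform, uniform))
    print("A skewed toward high:", dominance_degree(A, B, upper_skewed, uniform))
    ```

    The gap between the two printed values illustrates how the assumed distribution changes the dominance degree, which is the information loss the authors attribute to the blanket uniformity assumption.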

  6. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing the equality of hazard functions against a crossing-hazards alternative. However, relatively few approaches are available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.
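
    For intuition, the crossing point and an interval for it can be sketched with a parametric stand-in: two Weibull hazards that cross once, fitted by maximum likelihood, with a bootstrap interval for the crossing time. This is a simplification for illustration only; the paper's procedures are a Cox/Box-Cox approach and a kernel-smoothed hazard-ratio estimate, not this parametric bootstrap:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import weibull_min

    rng = np.random.default_rng(12)

    def hazard(t, k, lam):                    # Weibull hazard (k/lam)(t/lam)^(k-1)
        return (k / lam) * (t / lam) ** (k - 1)

    def first_crossing(p1, p2, t_max=50.0):
        return brentq(lambda t: hazard(t, *p1) - hazard(t, *p2), 1e-6, t_max)

    def fit(x):                               # ML fit with location fixed at zero
        c, _, s = weibull_min.fit(x, floc=0)
        return c, s

    n = 200
    x1 = weibull_min.rvs(0.8, scale=5.0, size=n, random_state=rng)  # decreasing hazard
    x2 = weibull_min.rvs(1.5, scale=8.0, size=n, random_state=rng)  # increasing hazard

    boot = []
    for _ in range(500):
        try:
            boot.append(first_crossing(fit(rng.choice(x1, n)), fit(rng.choice(x2, n))))
        except ValueError:                    # resampled hazards fail to cross
            continue
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"crossing estimate {first_crossing(fit(x1), fit(x2)):.2f}, "
          f"bootstrap 95% CI ({lo:.2f}, {hi:.2f})")
    ```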

  7. Recourse-based facility-location problems in hybrid uncertain environment.

    PubMed

    Wang, Shuming; Watada, Junzo; Pedrycz, Witold

    2010-08-01

    The objective of this paper is to study facility-location problems in the presence of a hybrid uncertain environment involving both randomness and fuzziness. A two-stage fuzzy-random facility-location model with recourse (FR-FLMR) is developed in which both the demands and the costs are assumed to be fuzzy-random variables. Bounds on the optimal objective value of the two-stage FR-FLMR are derived. As, in general, the fuzzy-random parameters of the FR-FLMR can be regarded as continuous fuzzy-random variables with an infinite number of realizations, the computation of the recourse requires solving an infinite number of second-stage programming problems. Owing to this requirement, the recourse function cannot be determined analytically, and, hence, the model cannot benefit from the techniques of classical mathematical programming. In order to solve location problems of this nature, we first develop a fuzzy-random simulation technique to compute the recourse function. The convergence of such simulation scenarios is discussed. We then propose a hybrid mutation-based binary ant-colony optimization (MBACO) approach to the two-stage FR-FLMR, which combines the fuzzy-random simulation and the simplex algorithm. A numerical experiment illustrates the application of the hybrid MBACO algorithm. The comparison shows that the hybrid MBACO finds better solutions than those obtained with other discrete metaheuristic algorithms, such as binary particle-swarm optimization, the genetic algorithm, and tabu search.

  8. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    PubMed

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral: it compares random outcomes (e.g., total benefit) only through their expectations to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss through the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
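
    The risk-aversion ingredient can be illustrated with a hedged sketch: given simulated scenarios of water availability (all numbers below are assumptions, not the paper's data), the conditional value-at-risk of the second-stage penalty is simply the mean of the worst (1 - α) tail of the scenario costs.

```python
# Minimal sketch: expected value vs. conditional value-at-risk (CVaR) of the
# second-stage penalty in a toy two-stage water-allocation problem.
import numpy as np

rng = np.random.default_rng(0)
target = 120.0                            # first-stage promised allocation
benefit, penalty = 50.0, 110.0            # unit net benefit and shortage penalty
water = rng.normal(100.0, 25.0, 10_000)   # scenarios of available water

shortage = np.maximum(target - water, 0.0)
cost = shortage * penalty                 # second-stage recourse cost per scenario

alpha = 0.95
var = np.quantile(cost, alpha)            # value-at-risk
cvar = cost[cost >= var].mean()           # mean of the worst (1 - alpha) tail
expected_net = target * benefit - cost.mean()

print(f"expected net benefit: {expected_net:9.1f}")
print(f"VaR({alpha:.0%}) of penalty:  {var:9.1f}")
print(f"CVaR({alpha:.0%}) of penalty: {cvar:9.1f}")
```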

  9. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster-specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
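
    A hedged sketch of the central comparison (our construction, not the authors' simulation code): under the null, a naive observation-level test that ignores clustering inflates the Type I error with unbalanced clusters, while a simple two-stage test on cluster means stays near the nominal level.

```python
# Simulated Type I error: naive t-test on observations vs. two-stage t-test
# on cluster means, with unbalanced cluster sizes and no treatment effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def type1_error(n_clusters=6, icc=0.05, reps=2000, alpha=0.05):
    naive = two_stage = 0
    for _ in range(reps):
        sizes = rng.integers(5, 50, 2 * n_clusters)   # unbalanced clusters
        obs, obs_arm, means = [], [], []
        for j, m in enumerate(sizes):
            u = rng.normal(0.0, np.sqrt(icc))         # cluster random effect
            y = u + rng.normal(0.0, np.sqrt(1 - icc), m)
            obs.append(y)
            obs_arm += [j % 2] * int(m)               # alternate clusters to arms
            means.append(y.mean())
        obs = np.concatenate(obs)
        obs_arm = np.array(obs_arm)
        means = np.array(means)
        cl_arm = np.arange(2 * n_clusters) % 2
        naive += stats.ttest_ind(obs[obs_arm == 0], obs[obs_arm == 1]).pvalue < alpha
        two_stage += stats.ttest_ind(means[cl_arm == 0], means[cl_arm == 1]).pvalue < alpha
    return naive / reps, two_stage / reps

print(type1_error())   # naive test inflated; cluster-means test near 0.05
```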

  10. Training and certification in endobronchial ultrasound-guided transbronchial needle aspiration

    PubMed Central

    Konge, Lars; Nayahangan, Leizl Joy; Clementsen, Paul Frost

    2017-01-01

    Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) plays a key role in the staging of lung cancer, which is crucial for allocation to surgical treatment. EBUS-TBNA is a complicated procedure, and simulation-based training is helpful in the first part of the long learning curve before performing the procedure on actual patients. New trainees should follow a structured training programme consisting of simulator training to proficiency, as assessed with a validated test, followed by supervised practice on patients. Simulation-based training is superior to the traditional apprenticeship model and is recommended in the newest guidelines. EBUS-TBNA and oesophageal ultrasound-guided fine needle aspiration (EUS-FNA or EUS-B-FNA) are complementary, and the combined techniques are superior to either technique alone. It is logical to learn and perform the two techniques in combination; however, for lung cancer staging only EBUS-TBNA simulators currently exist, though hopefully simulation-based training in EUS will become possible in the future. PMID:28840013

  11. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
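
    The three sampling methods are easy to sketch (hypothetical parameters; the published simulation is far more extensive): partial-interval recording scores an interval if the behavior occurs at any point within it, whole-interval recording only if it occurs throughout, and momentary time sampling only at the interval's final moment.

```python
# Toy version of the simulated comparison of interval sampling methods.
import numpy as np

rng = np.random.default_rng(7)
n_seconds = 600                       # 10-min observation period
p_event = 0.3                         # per-second probability the behavior occurs
stream = rng.random(n_seconds) < p_event   # True = behavior occurring
interval = 10                         # 10-s observation intervals

true_prop = stream.mean()
chunks = stream.reshape(-1, interval)
mts = chunks[:, -1].mean()            # momentary time sampling: last moment only
pir = chunks.any(axis=1).mean()       # partial-interval: anywhere in the interval
wir = chunks.all(axis=1).mean()       # whole-interval: throughout the interval

print(f"true {true_prop:.3f}  MTS {mts:.3f}  "
      f"PIR {pir:.3f} (overestimates)  WIR {wir:.3f} (underestimates)")
```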

  12. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and conventional frequentist analyses of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for Statistical Computing. Bayesian analysis of time-interval information provided a detection probability similar to that of Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for cases in which the source is present only briefly (less than the count time), time-interval information is more sensitive for detecting a change than count information, because the source counts are averaged with the background counts over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
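
    The conjugate structure behind the time-interval analysis can be sketched as follows (a simplified stand-in for the paper's algorithm; rates, prior, and threshold are assumptions). With a Gamma prior on the Poisson rate, each exponential inter-pulse interval updates the posterior in closed form, so a source can be declared as soon as the posterior probability of an elevated rate is high.

```python
# Sequential Bayesian update of a Poisson rate from inter-pulse intervals.
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(3)
bkg_rate, src_rate = 1.0, 4.0             # counts per second (assumed)
intervals = rng.exponential(1.0 / (bkg_rate + src_rate), 50)

a, b = 1.0, 1.0                           # Gamma(a, b) prior on the rate
for k, dt in enumerate(intervals, start=1):
    a += 1.0                              # one pulse observed ...
    b += dt                               # ... after waiting dt seconds
    p_elevated = gamma.sf(bkg_rate, a, scale=1.0 / b)  # P(rate > background)
    if p_elevated > 0.99:
        print(f"source declared after {k} pulses ({b - 1.0:.2f} s elapsed)")
        break
```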

  13. Numerical simulation and analysis of the flow in a two-staged axial fan

    NASA Astrophysics Data System (ADS)

    Xu, J. Q.; Dou, H. S.; Jia, H. X.; Chen, X. P.; Wei, Y. K.; Dong, M. W.

    2016-05-01

    In this paper, numerical simulation was performed for the internal three-dimensional turbulent flow field in a two-stage axial fan, using the steady three-dimensional incompressible Navier-Stokes equations coupled with a realizable turbulence model. Combining the steady simulation results with the flow characteristics of the two-stage axial fan, the influence of the mutual interaction between the blade and the vane on the flow in the two inter-stage regions was analyzed. This paper studied how the flow field distribution in the inter-stage regions is influenced by the wake interaction and the potential flow interaction in the impeller-vane inter-stage and the vane-impeller inter-stage. The results showed that wake interaction dominates over potential flow interaction in the impeller-vane inter-stage, whereas potential flow interaction dominates over wake interaction in the vane-impeller inter-stage. In other words, the flow field distribution in the two inter-stages is determined by the rotating component.

  14. The Impact of Preparation: Conditions for Developing Professional Knowledge through Simulations

    ERIC Educational Resources Information Center

    Sjöberg, David; Karp, Staffan; Söderström, Tor

    2015-01-01

    This article examines simulations of critical incidents in police education by investigating how activities in the preparation phase influence participants' actions and thus the conditions for learning professional knowledge. The study is based on interviews in two stages (traditional and stimulated recall interviews) with six selected students…

  15. A preoperative low cancer antigen 125 level (≤25.8 mg/dl) is a useful criterion to determine the optimal timing of interval debulking surgery following neoadjuvant chemotherapy in epithelial ovarian cancer.

    PubMed

    Morimoto, Akemi; Nagao, Shoji; Kogiku, Ai; Yamamoto, Kasumi; Miwa, Maiko; Wakahashi, Senn; Ichida, Kotaro; Sudo, Tamotsu; Yamaguchi, Satoshi; Fujiwara, Kiyoshi

    2016-06-01

    The purpose of this study is to investigate the clinical characteristics to determine the optimal timing of interval debulking surgery following neoadjuvant chemotherapy in patients with advanced epithelial ovarian cancer. We reviewed the charts of women with advanced epithelial ovarian cancer, fallopian tube cancer or primary peritoneal cancer who underwent interval debulking surgery following neoadjuvant chemotherapy at our cancer center from April 2006 to April 2014. There were 139 patients, including 91 with ovarian cancer [International Federation of Gynecology and Obstetrics (FIGO) Stage IIIc in 56 and IV in 35], two with fallopian tube cancers (FIGO Stage IV, both) and 46 with primary peritoneal cancer (FIGO Stage IIIc in 27 and IV in 19). After 3-6 cycles (median, 4 cycles) of platinum-based chemotherapy, interval debulking surgery was performed. Sixty-seven patients (48.2%) achieved complete resection of all macroscopic disease, while 72 did not. More patients with cancer antigen 125 levels ≤25.8 mg/dl at pre-interval debulking surgery achieved complete resection than those with higher cancer antigen 125 levels (84.7 vs. 21.3%; P < 0.0001). Patients with no ascites at pre-interval debulking surgery also achieved a higher complete resection rate (63.5 vs. 34.1%; P < 0.0001). Moreover, most patients (86.7%) with cancer antigen 125 levels ≤25.8 mg/dl and no ascites at pre-interval debulking surgery achieved complete resection. A low cancer antigen 125 level of ≤25.8 mg/dl and the absence of ascites at pre-interval debulking surgery are major predictive factors for complete resection during interval debulking surgery and present useful criteria to determine the optimal timing of interval debulking surgery. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. Pulse shaping system research of CdZnTe radiation detector for high energy x-ray diagnostic

    NASA Astrophysics Data System (ADS)

    Li, Miao; Zhao, Mingkun; Ding, Keyu; Zhou, Shousen; Zhou, Benjie

    2018-02-01

    As a typical wide band-gap semiconductor material, CdZnTe offers high detection efficiency and excellent energy resolution for hard X-rays and gamma rays. The signal generated by a CdZnTe detector must be transformed into a pseudo-Gaussian pulse of small width to remove noise and improve the energy resolution achievable by the downstream nuclear spectrometry data-acquisition system. In this paper, a multi-stage pseudo-Gaussian shaping filter was investigated based on nuclear electronics principles. Optimized circuit parameters were obtained from an analysis of the filter's characteristics in simulation. The simulation results showed that the falling time of the output pulse decreased, and a faster response was obtained, with decreasing shaping time; the undershoot was removed when the ratio of the input resistors was set to 1:2.5. Moreover, a two-stage Sallen-Key Gaussian shaping filter was designed and fabricated using the low-noise voltage-feedback operational amplifier LMH6628. A detection experiment platform was built using the precision pulse generator CAKE831 to produce pulses equivalent to the signal of a CdZnTe semiconductor detector. Experimental results show that the output pulse of the two-stage pseudo-Gaussian shaping filter has a minimum pulse width (FWHM) of 200 ns, and the output of each stage agrees well with the simulation results. Based on this performance, the multi-stage pseudo-Gaussian shaping filter can reduce the event loss caused by pile-up in CdZnTe semiconductor detectors and effectively improve the energy resolution.
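
    As a rough illustration of the shaping idea (component values are assumptions, not the fabricated circuit), two cascaded second-order low-pass sections approximate a semi-Gaussian impulse response whose FWHM can be read off directly.

```python
# Impulse response of a two-stage second-order (Sallen-Key-like) low-pass
# cascade, a common approximation to a semi-Gaussian nuclear pulse shaper.
import numpy as np
from scipy import signal

f0, Q = 2.0e6, 0.6                         # assumed corner frequency and Q
w0 = 2 * np.pi * f0
stage = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])

# Cascade two identical stages by multiplying numerator/denominator polynomials.
cascade = signal.TransferFunction(np.polymul(stage.num, stage.num),
                                  np.polymul(stage.den, stage.den))

t = np.linspace(0, 2e-6, 2000)
t, h = signal.impulse(cascade, T=t)
mask = h >= h.max() / 2
fwhm = t[mask][-1] - t[mask][0]            # full width at half maximum
print(f"output pulse FWHM ≈ {fwhm * 1e9:.0f} ns")
```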

  17. Possible scenarios for occurrence of M ~ 7 interplate earthquakes prior to and following the 2011 Tohoku-Oki earthquake based on numerical simulation.

    PubMed

    Nakata, Ryoko; Hori, Takane; Hyodo, Mamoru; Ariyoshi, Keisuke

    2016-05-10

    We show possible scenarios for the occurrence of M ~ 7 interplate earthquakes prior to and following the M ~ 9 earthquake along the Japan Trench, such as the 2011 Tohoku-Oki earthquake. One such M ~ 7 earthquake is the so-called Miyagi-ken-Oki earthquake, for which we conducted numerical simulations of earthquake generation cycles by using realistic three-dimensional (3D) geometry of the subducting Pacific Plate. In a number of scenarios, the time interval between the M ~ 9 earthquake and the subsequent Miyagi-ken-Oki earthquake was equal to or shorter than the average recurrence interval during the later stage of the M ~ 9 earthquake cycle. The scenarios successfully reproduced important characteristics such as the recurrence of M ~ 7 earthquakes, coseismic slip distribution, afterslip distribution, the largest foreshock, and the largest aftershock of the 2011 earthquake. Thus, these results suggest that we should prepare for future M ~ 7 earthquakes in the Miyagi-ken-Oki segment even though this segment recently experienced large coseismic slip in 2011.

  18. Effect of Auditory-Perceptual Training With Natural Voice Anchors on Vocal Quality Evaluation.

    PubMed

    Dos Santos, Priscila Campos Martins; Vieira, Maurílio Nunes; Sansão, João Pedro Hallack; Gama, Ana Cristina Côrtes

    2018-01-10

    To analyze the effects of auditory-perceptual training with anchor stimuli of natural voices on inter-rater agreement during the assessment of vocal quality. This is a quantitative study. An auditory-perceptual training site was developed consisting of Programming Interface A, an auditory training activity, and Programming Interface B, a control activity. Each interface had three stages: pre-training/pre-interval evaluation, training/interval, and post-training/post-interval evaluation. Two experienced evaluators classified 381 voices according to the GRBASI scale (G-grade, R-roughness, B-breathiness, A-asthenia, S-strain, I-instability). Voices that received the same evaluation from both evaluators were selected: 57 voices for evaluation and 56 for training, with varying degrees of deviation across parameters. Fifteen inexperienced evaluators were then selected. In the pre-training, post-training, pre-interval, and post-interval stages, evaluators listened to the voices and classified them using the GRBASI scale. In the interval stage, evaluators read a text. In the training stage, each parameter was trained separately: evaluators analyzed the degrees of deviation of the GRBASI parameters based on anchor stimuli, and could only advance after correctly classifying the voices. To quantify inter-rater agreement and provide statistical analyses, the AC1 coefficient, confidence intervals, and percentage variation of agreement were employed. Except for the asthenia parameter, decreased agreement was observed in the control condition. Improved agreement was observed with auditory training, but the improvement did not achieve statistical significance. Training with natural voice anchors suggests increased inter-rater agreement during perceptual voice analysis, potentially indicating that new internal references were established. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    USGS Publications Warehouse

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

    In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the lengths of the incubation and nestling stages and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.

  1. Spectral Bio-indicator Simulations for Tracking Photosynthetic Activities in a Corn Field

    NASA Technical Reports Server (NTRS)

    Cheng, Yen-Ben; Middleton, Elizabeth M.; Huemmrich, K. Fred; Zhang, Qingyuan; Corp, Lawrence; Campbell, Petya; Kustas, William

    2011-01-01

    Accurate assessment of vegetation canopy optical properties plays a critical role in monitoring natural and managed ecosystems under environmental changes. In this context, radiative transfer (RT) models simulating vegetation canopy reflectance have been demonstrated to be a powerful tool for understanding and estimating spectral bio-indicators. In this study, two narrow-band spectroradiometers were utilized to acquire observations over corn canopies for two summers. These in situ spectral data were then used to validate a two-layer Markov chain-based canopy reflectance model for simulating the Photochemical Reflectance Index (PRI), which has been widely used in recent vegetation photosynthetic light use efficiency (LUE) studies. The in situ PRI derived from narrow-band hyperspectral reflectance exhibited clear responses to: 1) viewing geometry, which affects the observed light environment; and 2) seasonal variation corresponding to the growth stage. The RT model (ACRM) successfully simulated the responses to the variable viewing geometry. The best simulations were obtained when the model was run in two-layer mode, using sunlit leaves as the upper layer and shaded leaves as the lower layer. Simulated PRI values yielded much better correlations with in situ observations when the cornfield was dominated by green foliage during the early growth, vegetative, and reproductive stages (r = 0.78 to 0.86) than in the later senescent stage (r = 0.65). Further sensitivity analyses were conducted to show the important influences of leaf area index (LAI) and the sunlit/shaded ratio on PRI observations.
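
    For reference, the index itself is a simple normalized difference of two narrow bands (the reflectance values below are made up):

```python
# Standard definition of the Photochemical Reflectance Index from
# narrow-band reflectance at 531 nm and the 570 nm reference band.
def pri(r531: float, r570: float) -> float:
    """PRI = (R531 - R570) / (R531 + R570)."""
    return (r531 - r570) / (r531 + r570)

print(pri(0.048, 0.052))   # slightly negative, as for a stressed canopy
```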

  2. Evaluation of the Inertial Response of Variable-Speed Wind Turbines Using Advanced Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholbrock, Andrew K; Muljadi, Eduard; Gevorgian, Vahan

    In this paper, we focus on the temporary frequency support effect provided by wind turbine generators (WTGs) through the inertial response. With the implemented inertial control methods, the WTG is capable of increasing its active power output by releasing part of the stored kinetic energy when a frequency excursion occurs. The active power can be boosted temporarily above the maximum power point, but rotor speed deceleration follows, and an active power output deficiency occurs during the restoration of rotor kinetic energy. We evaluate and compare the inertial response induced by two distinct inertial control methods using advanced simulation. In the first stage, the proposed inertial control methods are analyzed in offline simulation. Using an advanced wind turbine simulation program, FAST with TurbSim, the response of the researched wind turbine is comprehensively evaluated under turbulent wind conditions, and the impact on the turbine's mechanical components is assessed. In the second stage, the inertial control is deployed on a real 600 kW wind turbine - the 3-bladed Controls Advanced Research Turbine (CART3) - which further verifies the inertial control through a hardware-in-the-loop (HIL) simulation. Various inertial control methods can be effectively evaluated with the proposed two-stage simulation platform, which combines offline simulation and real-time HIL simulation. The simulation results also provide insights for designing inertial control for WTGs.

  3. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  4. Application of Adaptive Autopilot Designs for an Unmanned Aerial Vehicle

    NASA Technical Reports Server (NTRS)

    Shin, Yoonghyun; Calise, Anthony J.; Motter, Mark A.

    2005-01-01

    This paper summarizes the application of two adaptive approaches to autopilot design, and presents an evaluation and comparison of the two approaches in simulation for an unmanned aerial vehicle. One approach employs two-stage dynamic inversion and the other employs feedback dynamic inversion based on a command augmentation system. Both are augmented with neural-network-based adaptive elements. The approaches permit adaptation to both parametric uncertainty and unmodeled dynamics, and incorporate a method that permits adaptation during periods of control saturation. Simulation results for an FQM-117B radio-controlled miniature aerial vehicle are presented to illustrate the performance of the neural-network-based adaptation.

  5. Results of a space shuttle plume impingement investigation at stage separation in the NASA-MSFC impulse base flow facility

    NASA Technical Reports Server (NTRS)

    Mccanna, R. W.; Sims, W. H.

    1972-01-01

    Results are presented for an experimental space shuttle stage separation plume impingement program conducted in the NASA-Marshall Space Flight Center's impulse base flow facility (IBFF). The major objectives of the investigation were to: (1) determine the degree of dual-engine exhaust plume simulation obtained using the equivalent engine; (2) determine the applicability of the analytical techniques; and (3) obtain data applicable for use in full-scale studies. The IBFF tests determined the orbiter rocket motor plume impingement loads, both pressure and heating, on a 3 percent General Dynamics B-15B booster configuration in a quiescent environment simulating a nominal staging altitude of 73.2 km (240,000 ft). The data included plume surveys of two 3 percent scale orbiter nozzles and a 4.242 percent scaled equivalent nozzle - equivalent in the sense that it was designed to have the same nozzle throat-to-exit area ratio as the two 3 percent nozzles, and, within the tolerances assigned for machining the hardware, this was accomplished.

  6. Imbalance detection in a manufacturing system: An agent-based model usage

    NASA Astrophysics Data System (ADS)

    Shevchuk, G. K.; Zvereva, O. M.; Medvedev, M. A.

    2017-11-01

    This paper delivers the results of research work on communications in a manufacturing system. A computer agent-based model that simulates the functioning of a manufacturing system has been engineered. The system lifecycle consists of two recursively repeated stages: a communication stage and a production stage. Model data sets were estimated using the static Leontief equilibrium equation. In experiments, relationships between the manufacturing system's lifecycle time and the conditions of equilibrium violations were identified. The research results are to be used to propose methods for compensating the negative influence of such violations.

  7. Impact of variable river water stage on the simulation of groundwater-river interactions over the Upper Rhine Graben hydrosystem

    NASA Astrophysics Data System (ADS)

    Habets, F.; Vergnes, J.

    2013-12-01

    The Upper Rhine alluvial aquifer is an important transboundary water resource that is particularly vulnerable to pollution from the rivers due to anthropogenic activities. A realistic simulation of groundwater-river exchanges is therefore of crucial importance for effective management of water resources, and hence is the main topic of the NAPROM project financed by the French Ministry of Ecology. Characterization of these fluxes in terms of quantity and spatio-temporal variability depends on the choice made to represent the river water stage in the model. Recently, a coupled surface-subsurface model has been applied to the whole aquifer basin. The river stage was first taken to be constant over the major part of the basin for the computation of groundwater-river interactions. The present study aims to introduce a variable river water stage to better simulate these interactions and to quantify the impact of this process on the simulated hydrological variables. The general modeling strategy is based on the Eau-Dyssée modeling platform, which couples existing specialized models to address water resources and quality in regional-scale river basins. In this study, Eau-Dyssée includes the RAPID river routing model and the SAM hydrogeological model. The input data consist of runoff and infiltration from a simulation of the ISBA land surface scheme covering the 1986-2003 period. The QtoZ module calculates river stage from simulated river discharges, which is then used to calculate the exchanges between aquifer units and the river. Two approaches are compared. The first uses rating curves derived from observed river discharges and river stages. The second is based on Manning's formula, with Manning's parameters defined from geomorphological parametrizations and topographic data based on a digital elevation model (DEM). First results show relatively good agreement between observed and simulated river water heights. Taking a variable river stage into account seems to increase the amount of water exchanged between groundwater and river. Systematic biases are nevertheless found between simulated and observed mean river stage elevations. They show that the primary source of error when simulating river stage - and hence groundwater-river interactions - is the uncertainty associated with the topographic data used to define the riverbed elevation. Thus, this study confirms the need for more accurate DEMs for estimating riverbed elevation and studying groundwater-river interactions, at least at the regional scale.
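
    The second approach reduces, in its simplest form, to inverting Manning's formula for flow depth; a hedged sketch with a made-up rectangular channel (geometry, slope, and roughness are assumptions) is:

```python
# Invert Manning's formula, Q = (1/n) * A * R^(2/3) * S^(1/2), by root finding
# to recover river stage (flow depth) from a simulated discharge.
from scipy.optimize import brentq

def manning_q(depth, width=50.0, slope=5e-4, n=0.035):
    """Discharge (m3/s) of a rectangular channel at a given flow depth (m)."""
    area = width * depth
    radius = area / (width + 2 * depth)     # hydraulic radius
    return area * radius ** (2.0 / 3.0) * slope ** 0.5 / n

def stage_from_discharge(q):
    return brentq(lambda h: manning_q(h) - q, 1e-3, 50.0)

for q in (10.0, 100.0, 500.0):
    print(f"Q = {q:6.1f} m3/s  ->  stage = {stage_from_discharge(q):.2f} m")
```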

  8. Correcting bias due to missing stage data in the non-parametric estimation of stage-specific net survival for colorectal cancer using multiple imputation.

    PubMed

    Falcaro, Milena; Carpenter, James R

    2017-06-01

    Population-based net survival by tumour stage at diagnosis is a key measure in cancer surveillance. Unfortunately, data on tumour stage are often missing for a non-negligible proportion of patients, and the mechanism giving rise to the missingness is usually anything but completely at random. In this setting, restricting analysis to the subset of complete records typically gives biased results. Multiple imputation is a promising practical approach to the issues raised by the missing data, but its use in conjunction with the Pohar-Perme method for estimating net survival has not been formally evaluated. We performed a resampling study using colorectal cancer population-based registry data to evaluate the ability of multiple imputation, used along with the Pohar-Perme method, to deliver unbiased estimates of stage-specific net survival and recover missing stage information. We created 1000 independent data sets, each containing 5000 patients. Stage data were then made missing at random under two scenarios (30% and 50% missingness). Complete records analysis showed substantial bias and poor confidence interval coverage. Across both scenarios our multiple imputation strategy virtually eliminated the bias and greatly improved confidence interval coverage. In the presence of missing stage data, complete records analysis often gives severely biased results. We showed that combining multiple imputation with the Pohar-Perme estimator provides a valid practical approach for the estimation of stage-specific colorectal cancer net survival. As usual, when the percentage of missing data is high the results should be interpreted cautiously and sensitivity analyses are recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. A new model for simulating spring discharge recession and estimating effective porosity of karst aquifers

    NASA Astrophysics Data System (ADS)

    Xu, Bin; Ye, Ming; Dong, Shuning; Dai, Zhenxue; Pei, Yongzhen

    2018-07-01

    Quantitative analysis of recession curves of karst spring hydrographs is a vital tool for understanding karst hydrology and inferring hydraulic properties of karst aquifers. This paper presents a new model for simulating karst spring recession curves. The new model has the following characteristics: (1) the model considers two separate but hydraulically connected reservoirs: matrix reservoir and conduit reservoir; (2) the model separates karst spring hydrograph recession into three stages: conduit-drainage stage, mixed-drainage stage (with both conduit drainage and matrix drainage), and matrix-drainage stage; and (3) in the mixed-drainage stage, the model uses multiple conduit layers to present different levels of conduit development. The new model outperforms the classical Mangin model and the recently developed Fiorillo model for simulating observed discharge at the Madison Blue Spring located in northern Florida. This is attributed to the latter two characteristics of the new model. Based on the new model, a method is developed for estimating effective porosity of the matrix and conduit reservoirs for the three drainage stages. The estimated porosity values are consistent with measured matrix porosity at the study site and with estimated conduit porosity reported in literature. The new model for simulating karst spring hydrograph recession is mathematically general, and can be applied to a wide range of karst spring hydrographs to understand groundwater flow in karst aquifers. The limitations of the model are discussed at the end of this paper.
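
    The staged behaviour can be illustrated with a deliberately simplified two-reservoir Maillet sketch (our own toy, not the paper's multi-layer conduit model): the log-slope of the combined discharge shifts from the conduit coefficient early in the recession to the matrix coefficient late in it.

```python
# A conduit and a matrix reservoir draining as Maillet exponentials produce
# the characteristic multi-stage recession of a karst spring hydrograph.
import numpy as np

t = np.linspace(0, 60, 601)                 # days since the hydrograph peak
q_conduit = 8.0 * np.exp(-0.45 * t)         # fast-draining conduit reservoir
q_matrix = 2.0 * np.exp(-0.03 * t)          # slow-draining matrix reservoir
q = q_conduit + q_matrix                    # spring discharge (m3/s, assumed)

# The stage dominated by each reservoir shows up in the log-slope of Q(t).
log_slope = np.gradient(np.log(q), t)
print(f"early recession slope ~ {log_slope[5]:.3f} 1/day (conduit drainage)")
print(f"late  recession slope ~ {log_slope[-5]:.3f} 1/day (matrix drainage)")
```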

  10. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford a large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  11. Health care planning and education via gaming-simulation: a two-stage experiment.

    PubMed

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  12. The Cost-Effectiveness of Low-Cost Essential Antihypertensive Medicines for Hypertension Control in China: A Modelling Study.

    PubMed

    Gu, Dongfeng; He, Jiang; Coxson, Pamela G; Rasmussen, Petra W; Huang, Chen; Thanataveerat, Anusorn; Tzong, Keane Y; Xiong, Juyang; Wang, Miao; Zhao, Dong; Goldman, Lee; Moran, Andrew E

    2015-08-01

    Hypertension is China's leading cardiovascular disease risk factor. Improved hypertension control in China would result in enormous health gains in the world's largest population. A computer simulation model projected the cost-effectiveness of hypertension treatment in Chinese adults, assuming a range of essential medicines list drug costs. The Cardiovascular Disease Policy Model-China, a Markov-style computer simulation model, simulated hypertension screening, essential medicines program implementation, hypertension control program administration, drug treatment and monitoring costs, disease-related costs, and quality-adjusted life years (QALYs) gained by preventing cardiovascular disease or lost because of drug side effects in untreated hypertensive adults aged 35-84 y over 2015-2025. Cost-effectiveness was assessed in cardiovascular disease patients (secondary prevention) and for two blood pressure ranges in primary prevention (stage one, 140-159/90-99 mm Hg; stage two, ≥160/≥100 mm Hg). Treatment of isolated systolic hypertension and combined systolic and diastolic hypertension were modeled as a reduction in systolic blood pressure; treatment of isolated diastolic hypertension was modeled as a reduction in diastolic blood pressure. One-way and probabilistic sensitivity analyses explored ranges of antihypertensive drug effectiveness and costs, monitoring frequency, medication adherence, side effect severity, background hypertension prevalence, antihypertensive medication treatment, case fatality, incidence and prevalence, and cardiovascular disease treatment costs. Median antihypertensive costs from Shanghai and Yunnan province were entered into the model in order to estimate the effects of very low and high drug prices. Incremental cost-effectiveness ratios less than the per capita gross domestic product of China (11,900 international dollars [Int$] in 2015) were considered cost-effective. Treating hypertensive adults with prior cardiovascular disease for secondary prevention was projected to be cost saving in the main simulation and 100% of probabilistic simulation results. Treating all hypertension for primary and secondary prevention would prevent about 800,000 cardiovascular disease events annually (95% uncertainty interval, 0.6 to 1.0 million) and was borderline cost-effective incremental to treating only cardiovascular disease and stage two patients (2015 Int$13,000 per QALY gained [95% uncertainty interval, Int$10,000 to Int$18,000]). Of all one-way sensitivity analyses, assuming adherence to taking medications as low as 25%, high Shanghai drug costs, or low medication efficacy led to the most unfavorable results (treating all hypertension, about Int$47,000, Int$37,000, and Int$27,000 per QALY gained, respectively). The strengths of this study were the use of a recent Chinese national health survey, vital statistics, health care costs, and cohort study outcomes data as model inputs and reliance on clinical-trial-based estimates of coronary heart disease and stroke risk reduction due to antihypertensive medication treatment. The limitations of the study were the use of several sources of data, limited clinical trial evidence for medication effectiveness and harms in the youngest and oldest age groups, lack of information about geographic and ethnic subgroups, lack of specific information about indirect costs borne by patients, and uncertainty about the future epidemiology of cardiovascular diseases in China. Expanded hypertension treatment has the potential to prevent about 800,000 cardiovascular disease events annually and be borderline cost-effective in China, provided low-cost essential antihypertensive medicines programs can be implemented.

  13. Boosting flood warning schemes with fast emulator of detailed hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Bellos, V.; Carbajal, J. P.; Leitao, J. P.

    2017-12-01

    Floods are among the most destructive catastrophic events, and their frequency has increased over the last decades. To reduce flood impact and risks, flood warning schemes are installed in flood-prone areas. Frequently, these schemes are based on numerical models that quickly provide predictions of water levels and other relevant observables. However, the high complexity of flood wave propagation in the real world and the need for accurate predictions in urban environments or on floodplains hinder the use of detailed simulators. This is the core difficulty: we need fast predictions that also meet the accuracy requirements. Most physics-based detailed simulators, although accurate, cannot fulfill the speed demand even when high-performance computing techniques are used (the required simulation time is on the order of minutes to hours). As a consequence, most flood warning schemes are based on coarse ad hoc approximations that cannot take advantage of a detailed hydrodynamic simulation. In this work, we present a methodology for developing a flood warning scheme using a Gaussian-process-based emulator of a detailed hydrodynamic model. The methodology consists of two main stages: 1) an offline stage to build the emulator; and 2) an online stage using the emulator to predict and generate warnings. The offline stage consists of the following steps: a) definition of the critical sites of the area under study and specification of the observables to predict at those sites, e.g., water depth, flow velocity, etc.; b) generation of a detailed simulation dataset to train the emulator; and c) calibration of the required parameters (if measurements are available). The online stage is carried out using the emulator to predict the relevant observables quickly, while the detailed simulator runs in parallel to verify key predictions of the emulator. The speed gain provided by the emulator also allows uncertainty in predictions to be quantified using ensemble methods. The methodology is applied in a real-world scenario.
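
    A minimal sketch of the two stages (with a toy function standing in for the detailed hydrodynamic model, and all names and values assumed) using a Gaussian-process regressor:

```python
# Offline: train a Gaussian-process emulator on a few expensive model runs.
# Online: predict water depth with uncertainty almost instantaneously.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def detailed_model(rain_mm):
    """Placeholder for an hours-long hydrodynamic simulation."""
    return 0.02 * rain_mm ** 1.4 + 0.1 * np.sin(rain_mm / 10.0)

# Offline stage: generate the training dataset from the detailed simulator.
x_train = np.linspace(5, 100, 12).reshape(-1, 1)    # rainfall scenarios (mm)
y_train = detailed_model(x_train.ravel())           # simulated depth (m)

gp = GaussianProcessRegressor(ConstantKernel() * RBF(length_scale=20.0),
                              normalize_y=True)
gp.fit(x_train, y_train)

# Online stage: instant predictions with uncertainty for the warning scheme.
x_new = np.array([[37.0], [81.0]])
depth, std = gp.predict(x_new, return_std=True)
for x, d, s in zip(x_new.ravel(), depth, std):
    print(f"rain {x:5.1f} mm -> depth {d:.2f} m ± {2 * s:.2f} m")
```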

  14. Energy-Saving Control of a Novel Hydraulic Drive System for Field Walking Robot

    NASA Astrophysics Data System (ADS)

    Fang, Delei; Shang, Jianzhong; Xue, Yong; Yang, Junhong; Wang, Zhuo

    2018-01-01

    To improve the efficiency of hydraulic drive systems in field walking robots, this paper proposes a novel hydraulic system based on a two-stage pressure source. After analyzing the low efficiency of a robot's single-stage hydraulic system, the paper first introduces the concept and design of the two-stage pressure-source drive system. Then, energy-saving control for the new hydraulic system is planned according to the characteristics of the walking robot. The feasibility of the new hydraulic system is demonstrated by a simulation of the walking robot squatting. Finally, the efficiencies of the two types of hydraulic system are calculated, indicating that the novel hydraulic system can increase efficiency by 41.5%, which contributes to knowledge about hydraulic drive systems for field walking robots.

  15. Structure design of and experimental research on a two-stage laval foam breaker for foam fluid recycling.

    PubMed

    Wang, Jin-song; Cao, Pin-lu; Yin, Kun

    2015-07-01

    Environmentally friendly, economical, and efficient antifoaming technology is the basis for achieving foam drilling fluid recycling. The present study designed a novel two-stage Laval mechanical foam breaker that primarily uses the vacuum generated by the Coanda effect and the Laval principle to break foam. Numerical simulation results showed that the magnitude and extent of the negative pressure of the two-stage Laval foam breaker were larger than those of a normal foam breaker. Experimental results showed that the foam-breaking efficiency of the two-stage Laval foam breaker was higher than that of the normal foam breaker as the gas-to-liquid ratio and liquid flow rate changed. The foam-breaking efficiency of the normal foam breaker decreased rapidly with increasing foam stability, whereas that of the two-stage Laval foam breaker remained unchanged. Foam base fluid can be recycled using the two-stage Laval foam breaker, which would sharply reduce foam drilling costs and the waste disposal that adversely affects the environment.

  16. Temporal Downscaling of Crop Coefficient and Crop Water Requirement from Growing Stage to Substage Scales

    PubMed Central

    Shang, Songhao

    2012-01-01

    Crop water requirement is essential for agricultural water management, and it is usually available for crop growing stages. However, crop water requirement values at monthly or weekly scales are more useful for water management. A method was proposed to downscale crop coefficient and crop water requirement from growing-stage to substage scales, based on interpolation of accumulated crop and reference evapotranspiration calculated from their values for the growing stages. The proposed method was compared with two straightforward methods, that is, direct interpolation of crop evapotranspiration and of crop coefficient, assuming that stage-average values occur in the middle of each stage. These methods were tested against a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable: the downscaled crop evapotranspiration series is very close to the simulated one. PMID:22619572
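
    The accumulation-interpolation idea can be sketched in a few lines (stage boundaries and stage totals below are invented): accumulate the stage values at stage end dates, interpolate the cumulative curve at monthly dates, and difference it.

```python
# Downscale stage-scale crop evapotranspiration (ETc) to monthly values by
# interpolating the accumulated ETc curve and differencing it.
import numpy as np

stage_end_day = np.array([0, 25, 60, 105, 130])     # stage boundaries (days)
stage_etc = np.array([30.0, 80.0, 160.0, 45.0])     # ETc per stage (mm)
cum_etc = np.concatenate([[0.0], np.cumsum(stage_etc)])

month_days = np.arange(0, 131, 30)                  # ~monthly boundaries
cum_monthly = np.interp(month_days, stage_end_day, cum_etc)
monthly_etc = np.diff(cum_monthly)
print(monthly_etc)   # downscaled monthly crop water requirement (mm)
```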

  17. On the Formation of Meridional Overturning Circulation in the Pacific Ocean during the MIS31 Interglacial

    NASA Astrophysics Data System (ADS)

    Justino, F. J.; Lindemann, D.; Kucharski, F.

    2016-02-01

    Earth's climate history has been punctuated by cold (glacial) and warm (interglacial) intervals associated with modifications of the planetary orbit and subsequent changes in paleotopography. During the Pleistocene epoch, the interval between 1.8 million and 11,700 years before present, remarkable episodes of warmer climate such as Marine Isotope Stage (MIS) 1, 5e, 11c, and 31, which occurred at 9, 127, 409, and 1080 ka, led to changes in air temperature in the polar regions and substantial melting of polar glaciers. Based on the first multi-millennium coupled climate simulations of Marine Isotope Stage 31 (MIS31), long-term oceanic conditions characteristic of this interval have been analyzed. Modeling experiments forced by modified West Antarctic Ice Sheet (WAIS) topography and astronomical configuration demonstrated that a substantial increase in the thermohaline flow and its associated northward heat transport in both the Atlantic and Pacific oceans is predicted to occur during the MIS31. In the Atlantic these changes are driven by enhanced oceanic heat loss and increased water density. In the Pacific, anomalous atmospheric circulation leads to an overall increase of the water mass transport in the subtropical gyre and a drastically modified subtropical cell. Additional aspects related to the formation of the Pacific Ocean MOC will be presented. This study is sponsored by the Brazilian Antarctic Program Grant CNPq 407681/2013-2.

  18. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when they provide similar measures notwithstanding random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that (correlated-)errors-in-variables regressions should not be avoided in method-comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method-comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
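
    For orientation, the classical Bland-Altman quantities that the paper builds on are straightforward to compute (simulated paired measurements; the paper's tolerance and predictive intervals additionally account for estimation error):

```python
# Bland-Altman bias and 95% limits of agreement for two measurement methods.
import numpy as np

rng = np.random.default_rng(5)
truth = rng.normal(100, 15, 80)
x = truth + rng.normal(0, 4, 80)          # method X with its own error
y = truth + 1.5 + rng.normal(0, 4, 80)    # method Y, small systematic bias

diff = y - x
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)             # half-width of the agreement interval
print(f"bias {bias:.2f}, 95% agreement interval "
      f"[{bias - loa:.2f}, {bias + loa:.2f}]")
```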

  19. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
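
    A hedged sketch of the estimand (using a bootstrap percentile interval rather than the paper's empirical likelihood method; score distributions are simulated): fix the cut-off at the healthy-sample quantile that yields the desired specificity, then interval-estimate the sensitivity at that cut-off.

```python
# Bootstrap percentile CI for sensitivity at a fixed 90% specificity.
import numpy as np

rng = np.random.default_rng(11)
healthy = rng.normal(0.0, 1.0, 200)
diseased = rng.normal(1.2, 1.0, 150)
spec = 0.90

def sens_at_spec(h, d):
    cutoff = np.quantile(h, spec)     # cut-off yielding 90% specificity
    return (d > cutoff).mean()

boot = [sens_at_spec(rng.choice(healthy, healthy.size),
                     rng.choice(diseased, diseased.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"sensitivity {sens_at_spec(healthy, diseased):.3f}, "
      f"95% CI ({lo:.3f}, {hi:.3f})")
```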

  20. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model parameters).
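
    One of the interval-estimation strategies, residual bootstrapping, can be sketched as follows (the Hill parameterization and all data are assumptions for illustration):

```python
# Residual-bootstrap percentile intervals for Hill model parameters fitted
# to simulated concentration-response data.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50, n):
    """Three-parameter Hill curve (zero baseline assumed)."""
    return top / (1.0 + (ac50 / conc) ** n)

rng = np.random.default_rng(2)
conc = np.geomspace(0.01, 100, 8)
y = hill(conc, top=0.9, ac50=3.0, n=1.2) + rng.normal(0, 0.05, conc.size)

popt, _ = curve_fit(hill, conc, y, p0=(1.0, 1.0, 1.0), maxfev=10_000)
resid = y - hill(conc, *popt)

boot = []
for _ in range(1000):
    y_b = hill(conc, *popt) + rng.choice(resid, resid.size)  # resample residuals
    try:
        pb, _ = curve_fit(hill, conc, y_b, p0=popt, maxfev=10_000)
        boot.append(pb)
    except RuntimeError:          # occasional non-convergence is discarded
        pass

lo, hi = np.percentile(np.array(boot), [2.5, 97.5], axis=0)
for name, l, h in zip(("top", "AC50", "n"), lo, hi):
    print(f"{name:5s} 95% interval: ({l:.2f}, {h:.2f})")
```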

  1. Study of CFB Simulation Model with Coincidence at Multi-Working Condition

    NASA Astrophysics Data System (ADS)

    Wang, Z.; He, F.; Yang, Z. W.; Li, Z.; Ni, W. D.

    A circulating fluidized bed (CFB) two-stage simulation model was developed. To make the model results coincide with design values or actual operating values at specified operating conditions while retaining real-time calculation capability, only the main key processes were taken into account, and the dominant factors were further abstracted from these key processes. The simulation results showed sound agreement across multiple operating conditions and confirmed the advantage of the two-stage model over the original single-stage simulation model. The combustion-supporting effect of the secondary air was investigated using the two-stage model. This model provides a solid platform for investigating the pant-leg structured CFB furnace, which is now under design for a supercritical power plant.

  2. Numerical simulation analysis of four-stage mutation of solid-liquid two-phase grinding

    NASA Astrophysics Data System (ADS)

    Li, Junye; Liu, Yang; Hou, Jikun; Hu, Jinglei; Zhang, Hengfu; Wu, Guiling

    2018-03-01

    In order to explore the numerical simulation of solid-liquid two-phase abrasive flow polishing of abrupt-change tubes, a fourth-order abrupt-change tube was selected as the research object in this paper. Simulations were run with computational fluid dynamics software, based on the theory of solid-liquid two-phase flow dynamics, to study the mechanism by which abrasive flow machining (AFM) micro-machines a workpiece during polishing. The dynamic pressure distribution and turbulence intensity of the abrasive flow field in the fourth-order abrupt-change tube were analyzed at different inlet pressures, and the influence of the inlet pressure on the polishing effect is discussed.

  3. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary outputs of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we find that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.

  4. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.

  5. Numeric pathologic lymph node classification shows prognostic superiority to topographic pN classification in esophageal squamous cell carcinoma.

    PubMed

    Sugawara, Kotaro; Yamashita, Hiroharu; Uemura, Yukari; Mitsui, Takashi; Yagi, Koichi; Nishida, Masato; Aikou, Susumu; Mori, Kazuhiko; Nomura, Sachiyo; Seto, Yasuyuki

    2017-10-01

    The current eighth tumor node metastasis lymph node category pathologic lymph node staging system for esophageal squamous cell carcinoma is based solely on the number of metastatic nodes and does not consider anatomic distribution. We aimed to assess the prognostic capability of the eighth tumor node metastasis pathologic lymph node staging system (numeric-based) compared with the 11th Japan Esophageal Society (topography-based) pathologic lymph node staging system in patients with esophageal squamous cell carcinoma. We retrospectively reviewed the clinical records of 289 patients with esophageal squamous cell carcinoma who underwent esophagectomy with extended lymph node dissection during the period from January 2006 through June 2016. We compared discrimination abilities for overall survival, recurrence-free survival, and cancer-specific survival between these 2 staging systems using C-statistics. The median number of dissected and metastatic nodes was 61 (25% to 75% quartile range, 45 to 79) and 1 (25% to 75% quartile range, 0 to 3), respectively. The eighth tumor node metastasis pathologic lymph node staging system had a greater ability to accurately determine overall survival (C-statistics: tumor node metastasis classification, 0.69, 95% confidence interval, 0.62-0.76; Japan Esophageal Society classification, 0.65, 95% confidence interval, 0.58-0.71; P = .014) and cancer-specific survival (C-statistics: tumor node metastasis classification, 0.78, 95% confidence interval, 0.70-0.87; Japan Esophageal Society classification, 0.72, 95% confidence interval, 0.64-0.80; P = .018). Rates of total recurrence rose as the eighth tumor node metastasis pathologic lymph node stage increased, while stratification of patients according to the topography-based node classification system was not feasible. Numeric nodal staging is an essential tool for stratifying the oncologic outcomes of patients with esophageal squamous cell carcinoma even in the cohort in which adequate numbers of lymph nodes were harvested. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Simulating anchovy's full life cycle in the northern Aegean Sea (eastern Mediterranean): A coupled hydro-biogeochemical-IBM model

    NASA Astrophysics Data System (ADS)

    Politikos, D.; Somarakis, S.; Tsiaras, K. P.; Giannoulaki, M.; Petihakis, G.; Machias, A.; Triantafyllou, G.

    2015-11-01

    A 3-D full life cycle population model for the North Aegean Sea (NAS) anchovy stock is presented. The model is two-way coupled with a hydrodynamic-biogeochemical model (POM-ERSEM). The anchovy life span is divided into seven life stages/age classes. Embryos and early larvae are passive particles, but subsequent stages exhibit active horizontal movements based on specific rules. A bioenergetics model simulates the growth in both the larval and juvenile/adult stages, while the microzooplankton and mesozooplankton fields of the biogeochemical model provide the food for fish consumption. The super-individual approach is adopted for the representation of the anchovy population. A dynamic egg production module, with an energy allocation algorithm, is embedded in the bioenergetics equation and produces eggs based on a new conceptual model for anchovy vitellogenesis. A model simulation for the period 2003-2006 with realistic initial conditions reproduced well the magnitude of population biomass and daily egg production estimated from acoustic and daily egg production method (DEPM) surveys, carried out in the NAS during June 2003-2006. Model simulated adult and egg habitats were also in good agreement with observed spatial distributions of acoustic biomass and egg abundance in June. Sensitivity simulations were performed to investigate the effect of different formulations adopted for key processes, such as reproduction and movement. The effect of the anchovy population on plankton dynamics was also investigated, by comparing simulations adopting a two-way or a one-way coupling of the fish with the biogeochemical model.

  7. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
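
    A minimal sketch of the stratification variant follows, with the Bayesian step replaced by a frequentist point fit for brevity (a fully Bayesian two-step version would instead draw posterior samples of the propensity score coefficients and propagate them to the outcome stage). The data-generating model and all names are invented for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=(n, 2))                        # confounders
    p_treat = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
    z = rng.binomial(1, p_treat)                       # treatment assignment
    y = 2.0 * z + x[:, 0] + rng.normal(size=n)         # true effect = 2.0

    # Step 1: propensity score equation (a Bayesian analysis would draw
    # posterior samples of these coefficients instead of one point fit).
    ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

    # Step 2: outcome equation via stratification on propensity quintiles.
    edges = np.quantile(ps, np.linspace(0, 1, 6))
    strata = np.clip(np.digitize(ps, edges[1:-1]), 0, 4)
    effects, weights = [], []
    for s in range(5):
        treated = (strata == s) & (z == 1)
        control = (strata == s) & (z == 0)
        if treated.any() and control.any():
            effects.append(y[treated].mean() - y[control].mean())
            weights.append(treated.sum() + control.sum())
    print("stratified effect estimate:", np.average(effects, weights=weights))
    ```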

  8. Status of simulation in health care education: an international survey.

    PubMed

    Qayumi, Karim; Pachev, George; Zheng, Bin; Ziv, Amitai; Koval, Valentyna; Badiei, Sadia; Cheng, Adam

    2014-01-01

    Simulation is rapidly penetrating the terrain of health care education and has gained growing acceptance as an educational method and patient safety tool. Despite this, the state of simulation in health care education has not yet been evaluated on a global scale. In this project, we studied the global status of simulation in health care education by determining the degree of financial support, infrastructure, manpower, information technology capabilities, engagement of groups of learners, and research and scholarly activities, as well as the barriers, strengths, opportunities for growth, and other aspects of simulation in health care education. We utilized a two-stage process, including an online survey and a site visit that included interviews and debriefings. Forty-two simulation centers worldwide participated in this study, the results of which show that despite enormous interest and enthusiasm in the health care community, use of simulation in health care education is limited to specific areas and is not a budgeted item in many institutions. The absence of a sustainable business model, as well as of sufficient financial support in terms of budget, infrastructure, manpower, research, and scholarly activities, slows the adoption of simulation. Specific recommendations are made based on current findings to support simulation in the next developmental stages.

  9. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    NASA Astrophysics Data System (ADS)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with the evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a genetic algorithm. The main contribution of the paper is a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
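
    The interval-optimization step can be illustrated with a deliberately simplified availability function and a plain grid search standing in for the paper's genetic algorithm; the downtime model and every number below are toy assumptions, not the paper's semi-Markov model.

    ```python
    import numpy as np

    def availability(t_insp, mtbf=1000.0, insp_dur=2.0, repair_dur=24.0,
                     p_detect=0.9):
        """Toy steady-state availability vs. monitoring interval t_insp (h).

        Frequent inspection costs downtime; infrequent inspection lets more
        degradation escalate into long corrective repairs.
        """
        insp_loss = insp_dur / t_insp                   # inspection downtime rate
        p_miss = 1 - p_detect * np.exp(-t_insp / mtbf)  # degradation escapes detection
        fail_loss = p_miss * repair_dur / mtbf          # corrective downtime rate
        return 1 / (1 + insp_loss + fail_loss)

    intervals = np.linspace(10, 500, 200)
    best = intervals[np.argmax([availability(t) for t in intervals])]
    print(f"optimal monitoring interval ~ {best:.0f} h")
    ```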

  10. Increased oil production and reserves from improved completion techniques in the Bluebell field, Uinta Basin. Quarterly technical report, October 1, 1996--December 31, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morgan, C.D.

    1997-02-01

    The objective of this project is to increase oil production and reserves in the Uinta Basin by demonstrating improved completion techniques. Low productivity of Uinta Basin wells is caused by gross production intervals of several thousand feet that contain perforated thief zones, water-bearing zones, and unperforated oil-bearing intervals. Geologic and engineering characterization and computer simulation of the Green River and Wasatch formations in the Bluebell field will determine reservoir heterogeneities related to fractures and depositional trends. This will be followed by drilling and recompletion of several wells to demonstrate improved completion techniques based on the reservoir characterization. Transfer of the project results will be an ongoing component of the project. The recompletion of the Michelle Ute 7-1 well commenced and is the first step in the three-well demonstration. As part of the recompletion, the gross productive interval was logged, additional beds were perforated, and the entire interval was stimulated with a three-stage acid treatment. The operator attempted to stimulate the well at high pressure (about 10,000 pounds per square inch (psi) [68,950 kPa]) at three separate packer locations. But at each location the pressure would not hold. As a result, all three stages were pumped at a lower pressure (6500 psi maximum [44,820 kPa]) from one packer location. As of December 31, 1996, the operator was tripping in the hole with the production packer and tubing to begin swab testing the well.

  11. Evaluation of the Inertial Response of Variable-Speed Wind Turbines Using Advanced Simulation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholbrock, Andrew K; Muljadi, Eduard; Gevorgian, Vahan

    In this paper, we focus on the temporary frequency support effect provided by wind turbine generators (WTGs) through the inertial response. With the implemented inertial control methods, the WTG is capable of increasing its active power output by releasing part of the stored kinetic energy when a frequency excursion occurs. The active power can be boosted temporarily above the maximum power point, but rotor speed deceleration follows, and an active power output deficiency occurs during the restoration of rotor kinetic energy. In this paper, we evaluate and compare the inertial response induced by two distinct inertial control methods using advanced simulation. In the first stage, the proposed inertial control methods are analyzed in offline simulation. Using an advanced wind turbine simulation program, FAST with TurbSim, the response of the researched wind turbine is comprehensively evaluated under turbulent wind conditions, and the impact on the turbine mechanical components is assessed. In the second stage, the inertial control is deployed on a real 600-kW wind turbine, the three-bladed Controls Advanced Research Turbine, which further verifies the inertial control through a hardware-in-the-loop simulation. Various inertial control methods can be effectively evaluated based on the proposed two-stage simulation platform, which combines offline simulation and real-time hardware-in-the-loop simulation. The simulation results also provide insights for designing inertial control for WTGs.
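
    The physics of the inertial response described above reduces to the rotor swing equation. The toy sketch below (a single-mass rotor with invented parameters; not the FAST model or the paper's controllers) shows the kinetic energy release during a temporary boost and the deceleration that follows.

    ```python
    import numpy as np

    # Toy single-mass rotor model of a WTG providing inertial response:
    # J * w * dw/dt = P_mech - P_elec. During the boost, electrical output
    # is held 10% above the available mechanical power, so kinetic energy
    # is released and the rotor decelerates. Numbers are illustrative only.
    J, w = 4e6, 1.2          # inertia (kg m^2) and rotor speed (rad/s)
    P_mech = 1.5e6           # available aerodynamic power (W)
    dt, t_boost = 0.01, 10.0
    for step in range(int(30.0 / dt)):
        t = step * dt
        # boost first, then under-produce to restore kinetic energy
        P_elec = 1.10 * P_mech if t < t_boost else 0.95 * P_mech
        w += (P_mech - P_elec) / (J * w) * dt
    print(f"rotor speed after boost and recovery: {w:.3f} rad/s")
    ```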

  12. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    PubMed

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent the optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. Level of evidence: III.

  13. A modified varying-stage adaptive phase II/III clinical trial design.

    PubMed

    Dong, Gaohong; Vandemeulebroecke, Marc

    2016-07-01

    Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. The number of further investigational stages is therefore determined based upon data accumulated up to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched in as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. Therefore, we modify this design to consider one study endpoint only, so that it may be more readily applicable in real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio between the two-stage and three-stage settings have a great impact on the design characteristics. However, the modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.

  14. The Inverse Problem for Confined Aquifer Flow: Identification and Estimation With Extensions

    NASA Astrophysics Data System (ADS)

    Loaiciga, Hugo A.; Mariño, Miguel A.

    1987-01-01

    The contributions of this work are twofold. First, a methodology for estimating the elements of parameter matrices in the governing equation of flow in a confined aquifer is developed. The estimation techniques for the distributed-parameter inverse problem pertain to linear least squares and generalized least squares methods. The linear relationship among the known heads and unknown parameters of the flow equation provides the background for developing criteria for determining the identifiability status of unknown parameters. Under conditions of exact or overidentification it is possible to develop statistically consistent parameter estimators and their asymptotic distributions. The estimation techniques, namely two-stage least squares and three-stage least squares, are applied to a specific groundwater inverse problem and compared between themselves and with an ordinary least squares estimator. The three-stage estimator provides the closest approximation to the actual parameter values, but it also shows relatively large standard errors compared with the ordinary and two-stage estimators. The estimation techniques provide the parameter matrices required to simulate the unsteady groundwater flow equation. Second, a nonlinear maximum likelihood estimation approach to the inverse problem is presented. The statistical properties of maximum likelihood estimators are derived, and a procedure to construct confidence intervals and do hypothesis testing is given. The relative merits of the linear and maximum likelihood estimators are analyzed. Other topics relevant to the identification and estimation methodologies, i.e., a continuous-time solution to the flow equation, coping with noise-corrupted head measurements, and extension of the developed theory to nonlinear cases, are also discussed. A simulation study is used to evaluate the methods developed in this study.
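
    The two-stage least squares step generalizes beyond the aquifer setting. A generic numpy sketch follows, with a synthetic endogenous regressor standing in for the paper's flow-equation matrices:

    ```python
    import numpy as np

    def two_stage_least_squares(y, X, Z):
        """Generic 2SLS: response y (n,), regressors X (n, k), instruments Z (n, m).

        Stage 1 projects X onto the instrument space; stage 2 regresses y
        on the fitted values.
        """
        X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # stage 1 fitted values
        beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]    # stage 2 estimate
        return beta

    # Tiny synthetic check: one endogenous regressor, one valid instrument.
    rng = np.random.default_rng(2)
    n = 5000
    z = rng.normal(size=(n, 1))
    u = rng.normal(size=n)                       # structural error
    x = z[:, 0] + 0.8 * u + rng.normal(size=n)   # endogenous: correlated with u
    y = 1.5 * x + u
    print(two_stage_least_squares(y, x.reshape(-1, 1), z))  # ~ [1.5]; OLS is biased
    ```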

  15. Observer-Pattern Modeling and Slow-Scale Bifurcation Analysis of Two-Stage Boost Inverters

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Wan, Xiaojin; Li, Weijie; Ding, Honghui; Yi, Chuanzhi

    2017-06-01

    This paper deals with the modeling and bifurcation analysis of two-stage Boost inverters. Since the nonlinear interaction between the source-stage converter and the load-stage inverter causes a “hidden” second-harmonic current at the input of the downstream H-bridge inverter, an observer-pattern modeling method is proposed that removes the time variance originating from both the fundamental frequency and the hidden second harmonics in the derived averaged equations. Based on the proposed observer-pattern model, the underlying mechanism of the slow-scale instability behavior is uncovered with the help of eigenvalue analysis. Eigenvalue sensitivity analysis is then used to select key system parameters of the two-stage Boost inverter, and behavior boundaries are given to provide design-oriented information for optimizing the circuit. Finally, these theoretical results are verified by numerical simulations and circuit experiment.
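
    The eigenvalue step can be sketched generically: linearize the averaged model numerically at an equilibrium and check that all eigenvalues have negative real parts. The two-state model below is invented for illustration and is not the paper's observer-pattern Boost-inverter model.

    ```python
    import numpy as np

    def jacobian(f, x0, eps=1e-6):
        """Numerical Jacobian of an averaged state-space model x' = f(x)."""
        n = len(x0)
        J = np.empty((n, n))
        f0 = f(x0)
        for i in range(n):
            x = x0.copy()
            x[i] += eps
            J[:, i] = (f(x) - f0) / eps
        return J

    # Illustrative 2-state averaged model (invented dynamics and numbers).
    def f(x):
        v, i = x
        return np.array([10.0 * (i - 0.5 * v), 20.0 * (1.0 - v - 0.1 * i)])

    eq = np.array([0.952, 0.476])          # approximate equilibrium, found offline
    eigs = np.linalg.eigvals(jacobian(f, eq))
    print("slow-scale stable:", np.all(eigs.real < 0), eigs)
    ```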

  16. An adaptive modeling and simulation environment for combined-cycle data reconciliation and degradation estimation

    NASA Astrophysics Data System (ADS)

    Lin, Tsungpo

    Performance engineers face a major challenge in modeling and simulating after-market power systems due to system degradation and measurement errors. Currently, most of the power generation industry utilizes deterministic data matching to calibrate the model and cascade system degradation, which causes significant calibration uncertainty and increases the risk of providing performance guarantees. In this research work, a maximum-likelihood-based simultaneous data reconciliation and model calibration (SDRMC) is used for power system modeling and simulation. By replacing the current deterministic data matching with SDRMC one can reduce the calibration uncertainty and mitigate the error propagation to the performance simulation. A modeling and simulation environment for a complex power system with certain degradation has been developed. In this environment multiple data sets are imported when carrying out simultaneous data reconciliation and model calibration. Calibration uncertainties are estimated through error analyses and propagated to the performance simulation by using the principle of error propagation. System degradation is then quantified by performance comparison between the calibrated model and its expected new & clean status. To mitigate smearing effects caused by gross errors, gross error detection (GED) is carried out in two stages. The first stage is a screening stage, in which serious gross errors are eliminated in advance. The GED techniques used in the screening stage are based on multivariate data analysis (MDA), including multivariate data visualization and principal component analysis (PCA). Subtle gross errors are treated in the second stage, in which serial bias compensation or a robust M-estimator is engaged. To achieve a better efficiency in the combined scheme of least-squares-based data reconciliation and the GED technique based on hypothesis testing, the Levenberg-Marquardt (LM) algorithm is utilized as the optimizer. To reduce the computation time and stabilize the problem solving for a complex power system such as a combined cycle power plant, meta-modeling using response surface equations (RSEs) and system/process decomposition are incorporated into the simultaneous scheme of SDRMC. The goal of this research work is to reduce the calibration uncertainties and, thus, the risks of providing performance guarantees arising from uncertainties in performance simulation.
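
    The core of the reconciliation step is a weighted least-squares problem solved by the LM algorithm. A toy sketch with scipy follows: three flow measurements around a splitter are reconciled against a mass balance, with the balance enforced through a stiff penalty residual. All numbers are invented, and the thesis's RSE meta-models and decomposition are omitted.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Toy reconciliation: three flows around a splitter must satisfy
    # f1 = f2 + f3. Minimize sd-weighted measurement residuals plus a
    # stiff penalty term enforcing the balance.
    meas = np.array([100.0, 61.0, 42.0])     # measured f1, f2, f3
    sd = np.array([2.0, 1.5, 1.5])           # measurement standard deviations

    def residuals(f):
        balance = f[0] - f[1] - f[2]
        return np.concatenate([(f - meas) / sd, [1e3 * balance]])

    sol = least_squares(residuals, meas, method="lm")   # Levenberg-Marquardt
    print("reconciled flows:", sol.x)                   # satisfies f1 ~= f2 + f3
    ```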

  17. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    PubMed

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9 $, was obtained, together with the corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management, as it not only allows local decision makers to optimize land use allocation but can also help to answer how to accomplish land use changes.

  18. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10^9 $, was obtained, together with the corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management, as it not only allows local decision makers to optimize land use allocation but can also help to answer how to accomplish land use changes.
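
    The interval-parameter idea is commonly operationalized by solving two deterministic submodels, one with the favourable bounds of the interval coefficients and one with the unfavourable bounds, to bracket the objective. The toy land-allocation LP below illustrates that recipe only; it is not the paper's fuzzy two-stage stochastic model, and all coefficients are invented.

    ```python
    from scipy.optimize import linprog

    # Toy interval-parameter LP: maximize net benefit b1*x1 + b2*x2 subject
    # to an erosion budget e1*x1 + e2*x2 <= E and x1 + x2 <= total area.
    # Benefits are intervals [b-, b+]; solve two submodels for the bounds.
    def solve(b1, b2, E=100.0, area=60.0, e1=2.0, e2=1.0):
        res = linprog(c=[-b1, -b2],                 # linprog minimizes
                      A_ub=[[e1, e2], [1.0, 1.0]],
                      b_ub=[E, area], bounds=[(0, None)] * 2)
        return -res.fun

    f_plus = solve(b1=6.0, b2=4.0)                  # optimistic: upper benefits
    f_minus = solve(b1=4.0, b2=3.0)                 # pessimistic: lower benefits
    print(f"net benefit interval: [{f_minus:.0f}, {f_plus:.0f}]")
    ```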

  19. Temperature-assisted solute focusing with sequential trap/release zones in isocratic and gradient capillary liquid chromatography: Simulation and experiment

    PubMed Central

    Groskreutz, Stephen R.; Weber, Stephen G.

    2016-01-01

    In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A’s temperature rise, TEC B’s temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation, well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25–75 °C) at each of twelve mobile phase compositions (0.05–0.60 acetonitrile/water) for homologs of n-alkyl hydroxybenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four times the column volume. TASF improved resolution and increased peak capacity; for a 12-minute separation peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and 185 for two-stage TASF. PMID:27836226

  20. Temperature-assisted solute focusing with sequential trap/release zones in isocratic and gradient capillary liquid chromatography: Simulation and experiment.

    PubMed

    Groskreutz, Stephen R; Weber, Stephen G

    2016-11-25

    In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A's temperature rise, TEC B's temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation, well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25-75°C) at each of twelve mobile phase compositions (0.05-0.60 acetonitrile/water) for homologs of n-alkyl hydroxybenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four times the column volume. TASF improved resolution and increased peak capacity; for a 12-min separation peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and 185 for two-stage TASF. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and the interval estimators based on the Mantel-Haenszel (MH) approach can perform well, and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.
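
    For the interval-estimation part, a standard log-scale Wald interval for the ratio of two Poisson rates is the simplest illustration (shown below with invented counts); the paper's WLS and MH estimators refine this idea for the three-period crossover layout.

    ```python
    import numpy as np
    from scipy.stats import norm

    def poisson_ratio_ci(x, n_x, y, n_y, level=0.95):
        """Wald CI for the ratio of two Poisson rates, on the log scale.

        x, y: total event counts; n_x, n_y: exposure (patients or
        patient-periods). Standard result: Var(log(x/y)) ~ 1/x + 1/y.
        """
        ratio = (x / n_x) / (y / n_y)
        se = np.sqrt(1.0 / x + 1.0 / y)
        z = norm.ppf(0.5 + level / 2)
        return ratio * np.exp(-z * se), ratio * np.exp(z * se)

    # e.g. 30 exacerbations over 120 patient-periods on active treatment
    # versus 45 over 118 on placebo (numbers invented for illustration)
    print(poisson_ratio_ci(30, 120, 45, 118))
    ```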

  2. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. The existence of multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could result in deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in adequately predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise in the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations reduces. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)

  3. A temporal discriminability account of children's eyewitness suggestibility.

    PubMed

    Bright-Paul, Alexandra; Jarrold, Christopher

    2009-07-01

    Children's suggestibility is typically measured using a three-stage 'event-misinformation-test' procedure. We examined whether suggestibility is influenced by the time delays imposed between these stages, and in particular whether the temporal discriminability of sources (event and misinformation) predicts performance. In a novel approach, the degree of source discriminability was calculated as the relative magnitude of two intervals (the ratio of the event-misinformation and misinformation-test intervals), based on an adaptation of existing 'ratio-rule' accounts of memory. Five-year-olds (n = 150) watched an event, and were exposed to misinformation, before memory for source was tested. The absolute event-test delay (12 versus 24 days) and the 'ratio' of event-misinformation/misinformation-test intervals (11:1, 3:1, 1:1, 1:3 and 1:11) were manipulated across participants. The temporal discriminability of sources, measured by the ratio, was indeed a strong predictor of suggestibility. Most importantly, if the ratio was constant (e.g. 18/6 versus 9/3 days), performance was remarkably similar despite variations in absolute delay (e.g. 24 versus 12 days). This intriguing finding not only extends the ratio-rule of distinctiveness to misinformation paradigms, but also serves to illustrate a new empirical means of differentiating between explanations of suggestibility based on interference between sources and disintegration of source information over time.

  4. CLVTOPS Liftoff and Separation Analysis Validation Using Ares I-X Flight Data

    NASA Technical Reports Server (NTRS)

    Burger, Ben; Schwarz, Kristina; Kim, Young

    2011-01-01

    CLVTOPS is a multi-body time domain flight dynamics simulation tool developed by NASA's Marshall Space Flight Center (MSFC) for space launch vehicles and is based on the TREETOPS simulation tool. CLVTOPS is currently used to simulate the flight dynamics and separation/jettison events of the Ares I launch vehicle, including liftoff and staging separation. In order for CLVTOPS to become an accredited tool, validation against other independent simulations and real world data is needed. The launch of the Ares I-X vehicle (the first Ares I test flight) on October 28, 2009 presented a great opportunity to provide validation evidence for CLVTOPS. In order to simulate the Ares I-X flight, specific models were implemented into CLVTOPS. These models include the flight day environment, reconstructed thrust, reconstructed mass properties, aerodynamics, and the Ares I-X guidance, navigation and control models. The resulting simulation output was compared to Ares I-X flight data. During the liftoff region of flight, trajectory states from the simulation and flight data were compared. The CLVTOPS results were used to make a semi-transparent animation of the vehicle that was overlaid directly on top of the flight video to provide a qualitative measure of the agreement between the simulation and the actual flight. During ascent, the trajectory states of the vehicle were compared with flight data. For the stage separation event, the trajectory states of the two stages were compared to available flight data. Since no quantitative rotational state data for the upper stage were available, the CLVTOPS results were used to make an animation of the two stages to show a side-by-side comparison with flight video. All of the comparisons between CLVTOPS and the flight data show good agreement. This paper documents comparisons between CLVTOPS and Ares I-X flight data which serve as validation evidence for the eventual accreditation of CLVTOPS.

  5. Cellular Spacing Selection During the Directional Solidification of Binary Alloys. A Numerical Approach

    NASA Technical Reports Server (NTRS)

    Catalina, Adrian V.; Sen, S.; Rose, M. Franklin (Technical Monitor)

    2001-01-01

    The evolution of cellular solid/liquid interfaces from an initially unstable planar front was studied by means of a two-dimensional computer simulation. The numerical model makes use of an interface tracking procedure and has the capability to describe the dynamics of the interface morphology based on local changes of the thermodynamic conditions. The fundamental physics of this formulation was validated against experimental microgravity results and the predictions of the analytical linear stability theory. The simulations revealed that, under certain conditions and through a competitive growth mechanism, an interface can become unstable to random perturbations of infinitesimal amplitude even at wavelengths smaller than the neutral wavelength, lambda(sub c), predicted by the linear stability theory. Furthermore, two main stages of spacing selection have been identified. In the first stage, at low perturbation amplitudes, the selection mechanism is driven by the maximum growth rate of instabilities, while in the second stage the selection is influenced by nonlinear phenomena caused by interactions between neighboring cells. Comparison of these predictions with other existing theories of pattern formation and with experimental results is discussed.

  6. Interleukin-1β gene variants are associated with QTc interval prolongation following cardiac surgery: a prospective observational study.

    PubMed

    Kertai, Miklos D; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P; Daubert, James P; Podgoreanu, Mihai V

    2016-04-01

    We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery during August 1999 to April 2002. We defined a prolonged QTc interval as > 440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes involved in modulating arrhythmia susceptibility pathways with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β gene (IL1B), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk of developing prolonged postoperative QTc was superior to that of a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). These results suggest a contribution of IL1B to modulating susceptibility to postoperative QTc prolongation after cardiac surgery.

  7. Using a fuzzy comprehensive evaluation method to determine product usability: A test case

    PubMed Central

    Zhou, Ronggang; Chan, Alan H. S.

    2016-01-01

    BACKGROUND: In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. OBJECTIVE AND METHODS: In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. RESULTS AND CONCLUSIONS: This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. The greater differences in confidence interval widths between the equally weighted percentage-averaging method and the weighted evaluation methods, including the method of weighted percentage averages, verified the strength of the fuzzy method. PMID:28035942

  8. Using a fuzzy comprehensive evaluation method to determine product usability: A test case.

    PubMed

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to take into account the inherent uncertainties during product usability evaluation, Zhou and Chan [1] proposed a comprehensive method of usability evaluation for products by combining the analytic hierarchy process (AHP) and fuzzy evaluation methods for synthesizing performance data and subjective response data. This method was designed to provide an integrated framework combining the inevitable vague judgments from the multiple stages of the product evaluation process. In order to illustrate the effectiveness of the model, this study used a summative usability test case to assess the application and strength of the general fuzzy usability framework. To test the proposed fuzzy usability evaluation framework [1], a standard summative usability test was conducted to benchmark the overall usability of a specific network management software. Based on the test data, the fuzzy method was applied to incorporate both the usability scores and uncertainties involved in the multiple components of the evaluation. Then, with Monte Carlo simulation procedures, confidence intervals were used to compare the reliabilities among the fuzzy approach and two typical conventional methods combining metrics based on percentages. This case study showed that the fuzzy evaluation technique can be applied successfully for combining summative usability testing data to achieve an overall usability quality for the network software evaluated. The greater differences in confidence interval widths between the equally weighted percentage-averaging method and the weighted evaluation methods, including the method of weighted percentage averages, verified the strength of the fuzzy method.

  9. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) of the kind commonly used in routine clinical practice, and the measured value was compared with the true value (the known density of the object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied by resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
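
    A one-dimensional analogue of the simulation is easy to reproduce: blur a nodule profile with a Gaussian stand-in for the PSF, resample it on a coarse grid at several nodule-to-voxel offsets, and measure a central ROI. The Gaussian PSF, sizes, and spacings below are illustrative assumptions, not the measured PSF of the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    dx = 0.05                                 # fine grid spacing (mm)
    xs = np.arange(-20, 20, dx)
    true_density = 100.0                      # nominal nodule value
    diameter = 6.0                            # nodule diameter (mm)
    nodule = np.where(np.abs(xs) < diameter / 2, true_density, 0.0)

    psf_fwhm = 1.5                            # illustrative PSF width (mm)
    blurred = gaussian_filter1d(nodule, sigma=psf_fwhm / 2.355 / dx)

    for offset in (0.0, 0.25, 0.5):           # nodule centre vs voxel-grid offset
        sample_x = np.arange(-20 + offset, 20, 0.5)       # 0.5 mm "pixels"
        samples = np.interp(sample_x, xs, blurred)
        roi = samples[np.abs(sample_x) < diameter / 4]    # central ROI
        print(f"offset {offset:.2f} mm -> measured density {roi.mean():.1f}")
    ```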

  10. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

    We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day Omni data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, thus enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions. The switch from the synthetic input to the real Omni data was instantaneous. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is especially evident in the inner parts of the lobe.

  11. Two-stage damage diagnosis based on the distance between ARMA models and pre-whitening filters

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Mita, A.

    2007-10-01

    This paper presents a two-stage damage diagnosis strategy for damage detection and localization. Auto-regressive moving-average (ARMA) models are fitted to time series of vibration signals recorded by sensors. In the first stage, a novel damage indicator, which is defined as the distance between ARMA models, is applied to damage detection. This stage can determine the existence of damage in the structure. Such an algorithm uses output only and does not require operator intervention. Therefore it can be embedded in the sensor board of a monitoring network. In the second stage, a pre-whitening filter is used to minimize the cross-correlation of multiple excitations. With this technique, the damage indicator can further identify the damage location and severity when the damage has been detected in the first stage. The proposed methodology is tested using simulation and experimental data. The analysis results clearly illustrate the feasibility of the proposed two-stage damage diagnosis methodology.
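
    A minimal version of the first-stage indicator can be sketched with AR models as a simpler stand-in for the full ARMA fits, using the Euclidean distance between coefficient vectors (the paper's distance measure may differ). The signals below are synthetic, with damage emulated as a downward shift of the dominant mode.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def ar_distance(signal_ref, signal_test, order=8):
        """Damage indicator: distance between AR(order) coefficient vectors
        fitted to a reference (healthy) record and a test record."""
        a_ref = AutoReg(signal_ref, lags=order).fit().params[1:]  # skip intercept
        a_tst = AutoReg(signal_test, lags=order).fit().params[1:]
        return np.linalg.norm(a_ref - a_tst)

    rng = np.random.default_rng(3)
    t = np.arange(5000) / 500.0

    def record(freq):  # noisy response dominated by one structural mode
        return np.sin(2 * np.pi * freq * t) + 0.3 * rng.normal(size=t.size)

    healthy_a, healthy_b = record(3.0), record(3.0)
    damaged = record(2.7)                # stiffness loss shifts the mode down
    print("healthy vs healthy:", ar_distance(healthy_a, healthy_b))
    print("healthy vs damaged:", ar_distance(healthy_a, damaged))
    ```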

  12. Application of Kalman filter in frequency offset estimation for coherent optical quadrature phase-shift keying communication system

    NASA Astrophysics Data System (ADS)

    Jiang, Wen; Yang, Yanfu; Zhang, Qun; Sun, Yunxu; Zhong, Kangping; Zhou, Xian; Yao, Yong

    2016-09-01

    Frequency offset estimation (FOE) schemes based on Kalman filtering are proposed and investigated in detail via numerical simulation and experiment. The schemes consist of a modulation-phase removal stage and a Kalman filter estimation stage. In the second stage, the Kalman filters are employed to track either the differential angles or the differential data between two successive symbols. Several implementations of the proposed FOE scheme are compared by employing different modulation removal methods and two Kalman algorithms. The optimal FOE implementation is suggested for different operating conditions, including the optical signal-to-noise ratio and the number of available data symbols.
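
    The differential-angle variant can be sketched in a few lines: remove the QPSK modulation with the fourth-power operation, then track the per-symbol phase increment with a scalar Kalman filter. The random-walk state model and all noise settings below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n, baud = 4000, 28e9
    f_off = 1.2e9                              # true frequency offset (Hz)
    dphi_true = 2 * np.pi * f_off / baud       # phase increment per symbol

    # QPSK symbols with offset rotation and additive noise.
    data = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
    rx = data * np.exp(1j * dphi_true * np.arange(n)) + 0.05 * (
        rng.normal(size=n) + 1j * rng.normal(size=n))

    # Remove the QPSK modulation with the 4th-power operation, then track
    # the differential phase with a scalar Kalman filter.
    diff4 = (rx[1:] * np.conj(rx[:-1])) ** 4
    z_meas = np.angle(diff4) / 4               # noisy measurements of dphi
    x, P, Q, R = 0.0, 1.0, 1e-8, 0.05          # state, covariance, noise params
    for z in z_meas:
        P += Q                                 # predict (random-walk state model)
        K = P / (P + R)                        # Kalman gain
        x += K * (z - x)                       # update
        P *= 1 - K
    print(f"estimated offset: {x * baud / (2 * np.pi) / 1e9:.3f} GHz")
    ```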

  13. Influence of dispatching rules on average production lead time for multi-stage production systems.

    PubMed

    Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus

    2013-08-01

    In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems analytically. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant to the applied dispatching rule, whereas for single-stage production systems it can be used as a dispatching-rule-independent indicator.
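
    The simulation-study flavour is easy to reproduce with a toy single-machine model comparing FIFO against shortest-processing-time (SPT) dispatching; all distributions and parameters below are invented for illustration.

    ```python
    import numpy as np

    def mean_lead_time(rule, n_jobs=20000, util=0.9, seed=5):
        """Toy single-machine simulation of average production lead time.

        Jobs arrive by a Poisson process; 'rule' picks the next job from
        the queue: FIFO (earliest arrival) or SPT (shortest processing).
        """
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(1.0, n_jobs))
        proc = rng.exponential(util, n_jobs)          # mean service = util
        queue, clock, done, lead, next_arr = [], 0.0, 0, [], 0
        while done < n_jobs:
            while next_arr < n_jobs and arrivals[next_arr] <= clock:
                queue.append(next_arr)
                next_arr += 1
            if not queue:                             # idle until next arrival
                clock = arrivals[next_arr]
                continue
            key = (lambda j: arrivals[j]) if rule == "FIFO" else (lambda j: proc[j])
            j = min(queue, key=key)
            queue.remove(j)
            clock += proc[j]
            lead.append(clock - arrivals[j])
            done += 1
        return np.mean(lead)

    print("FIFO:", mean_lead_time("FIFO"), "SPT:", mean_lead_time("SPT"))
    ```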

  14. Diagnosis of persistent infection in prosthetic two-stage exchange: PCR analysis of sonication fluid from bone cement spacers.

    PubMed

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and agreement of the patient to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs and Unyvero multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. Two persistent infections and four recurrences were observed, with bacteria different from those of the initial infection in three cases. Conclusion: The three types of PCR did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve bacterial detection and did not help to predict whether the patient would present a persistent or recurrent infection. Two-stage prosthetic exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment for eradicating infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.

  15. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.

  16. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.

  17. Multiple imputation methods for nonparametric inference on cumulative incidence with missing cause of failure

    PubMed Central

    Lee, Minjung; Dignam, James J.; Han, Junhee

    2014-01-01

    We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107

  18. Correcting for bias in the selection and validation of informative diagnostic tests.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2015-04-15

    When developing a new diagnostic test for a disease, there are often multiple candidate classifiers to choose from, and it is unclear if any will offer an improvement in performance compared with current technology. A two-stage design can be used to select a promising classifier (if one exists) in stage one for definitive validation in stage two. However, estimating the true properties of the chosen classifier is complicated by the first stage selection rules. In particular, the usual maximum likelihood estimator (MLE) that combines data from both stages will be biased high. Consequently, confidence intervals and p-values flowing from the MLE will also be incorrect. Building on the results of Pepe et al. (SIM 28:762-779), we derive the most efficient conditionally unbiased estimator and exact confidence intervals for a classifier's sensitivity in a two-stage design with arbitrary selection rules; the condition being that the trial proceeds to the validation stage. We apply our estimation strategy to data from a recent family history screening tool validation study by Walter et al. (BJGP 63:393-400) and are able to identify and successfully adjust for bias in the tool's estimated sensitivity to detect those at higher risk of breast cancer. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

  19. Cure modeling in real-time prediction: How much does it help?

    PubMed

    Ying, Gui-Shuang; Zhang, Qiang; Lan, Yu; Li, Yimei; Heitjan, Daniel F

    2017-08-01

    Various parametric and nonparametric modeling approaches exist for real-time prediction in time-to-event clinical trials. Recently, Chen (2016 BMC Medical Research Methodology 16) proposed a prediction method based on parametric cure-mixture modeling, intended to cover those situations where it appears that a non-negligible fraction of subjects is cured. In this article we apply a Weibull cure-mixture model to create predictions, demonstrating the approach in RTOG 0129, a randomized trial in head-and-neck cancer. We compare the ultimate realized data in RTOG 0129 to interim predictions from a Weibull cure-mixture model, a standard Weibull model without a cure component, and a nonparametric model based on the Bayesian bootstrap. The standard Weibull model predicted that events would occur earlier than the Weibull cure-mixture model, but the difference was unremarkable until late in the trial, when evidence for a cure became clear. Nonparametric predictions often gave undefined predictions or infinite prediction intervals, particularly at early stages of the trial. Simulations suggest that cure modeling can yield better-calibrated prediction intervals when there is a cured component, or the appearance of a cured component, but at a substantial cost in the average width of the intervals. Copyright © 2017 Elsevier Inc. All rights reserved.
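
    For reference, the cure-mixture structure at the heart of such predictions is a Weibull survival curve for the susceptible fraction plus a cured mass. A small sketch with illustrative parameter values (not fitted to RTOG 0129; subjects are assumed followed from time zero):

```python
import numpy as np

def cure_mixture_survival(t, pi, shape, scale):
    """S(t) = pi + (1 - pi) * exp(-(t/scale)^shape); a fraction pi never fails."""
    return pi + (1.0 - pi) * np.exp(-(t / scale) ** shape)

def expected_events(n_subjects, t_horizon, pi, shape, scale):
    """Predicted event count by t_horizon among n_subjects followed from time zero."""
    return n_subjects * (1.0 - cure_mixture_survival(t_horizon, pi, shape, scale))

# e.g. 40% cured, Weibull(shape=1.2, scale=3 years) for the uncured group:
print(expected_events(n_subjects=200, t_horizon=2.0, pi=0.4, shape=1.2, scale=3.0))
```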

  20. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

    Estimation of the Pearson correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in climate science. Various methods are used to estimate a confidence interval to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, resampling from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. A pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with that of the confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models that simulate the climate system. The method is applied to model data from the high-resolution ocean model INALT01, where the relationship between the Agulhas Leakage and the North Brazil Current is evaluated. Preliminary results show significant correlation between the two variables at a lag of 10 years, which is roughly the time it takes Agulhas Leakage water to reach the North Brazil Current. Mudelsee, M., 2003. Estimating Pearson's correlation coefficient with bootstrap confidence interval from serially dependent time series. Mathematical Geology 35, 651-665.
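
    A compact Python sketch of the calibration idea follows (the published tool is Fortran 90; the block length, candidate levels, and loop sizes below are illustrative choices, not PearsonT3's defaults):

```python
import numpy as np
from scipy.stats import t as t_dist

rng = np.random.default_rng(1)

def block_resample(x, y, blk):
    """Pairwise moving-block resampling: aligned blocks preserve serial correlation."""
    n = len(x)
    starts = rng.integers(0, n - blk + 1, size=int(np.ceil(n / blk)))
    idx = np.concatenate([np.arange(s, s + blk) for s in starts])[:n]
    return x[idx], y[idx]

def t_interval(x, y, blk, n_boot, alpha):
    """Bootstrap-standard-error Student's t interval for Pearson's r."""
    r = np.corrcoef(x, y)[0, 1]
    r_star = [np.corrcoef(*block_resample(x, y, blk))[0, 1] for _ in range(n_boot)]
    half = t_dist.ppf(1 - alpha / 2, df=len(x) - 2) * np.std(r_star, ddof=1)
    return r - half, r + half

def calibrated_interval(x, y, blk, target=0.95, n_outer=200, n_inner=100):
    """Second bootstrap loop: pick the nominal level whose estimated coverage
    of the sample r is closest to the target, then build the final interval."""
    r = np.corrcoef(x, y)[0, 1]
    alphas = np.array([0.01, 0.025, 0.05, 0.10])
    hits = np.zeros(len(alphas))
    for _ in range(n_outer):
        xb, yb = block_resample(x, y, blk)
        for j, a in enumerate(alphas):
            lo, hi = t_interval(xb, yb, blk, n_inner, a)
            hits[j] += lo <= r <= hi
    a_cal = alphas[np.argmin(np.abs(hits / n_outer - target))]
    return t_interval(x, y, blk, 1000, a_cal)
```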

  1. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

    Simulation and positioning are very important aspects of computer-aided engineering. To perform these two tasks, we can apply traditional methods or intelligent techniques. The difference between them lies in the way they process information. In the first case, to simulate an object in a particular state of action, we need to run the entire process to read the parameter values. This is inconvenient for objects for which simulation takes a long time, i.e., when the mathematical calculations are complicated. In the second case, an intelligent solution can support a dedicated mode of simulation, which enables us to simulate the object only in the situations necessary for the development process. We present research results on a developed intelligent simulation and control model of an electric-drive engine vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a devoted neural network is introduced to control co-working modules while the motion is within a time interval. The presented experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may lead to destruction of the load. The applied neural network controller therefore prevents the load from destruction by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb the adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for the mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of the proposed confidence intervals with those of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performance than the existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
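
    As a reminder of the jackknife ingredient used here, the leave-one-out jackknife standard error of any statistic can be computed generically (a sketch on hypothetical cost data; the paper's influence-function-based empirical likelihood construction for censored costs is considerably more involved):

```python
import numpy as np

def jackknife_se(x, stat):
    """Leave-one-out jackknife standard error of stat(x)."""
    n = len(x)
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

costs = np.array([1200.0, 340.0, 5600.0, 980.0, 2300.0, 760.0])  # hypothetical
print(jackknife_se(costs, np.mean))
```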

  3. Status of simulation in health care education: an international survey

    PubMed Central

    Qayumi, Karim; Pachev, George; Zheng, Bin; Ziv, Amitai; Koval, Valentyna; Badiei, Sadia; Cheng, Adam

    2014-01-01

    Simulation is rapidly penetrating the terrain of health care education and has gained growing acceptance as an educational method and patient safety tool. Despite this, the state of simulation in health care education has not yet been evaluated on a global scale. In this project, we studied the global status of simulation in health care education by determining the degree of financial support, infrastructure, manpower, information technology capabilities, engagement of groups of learners, and research and scholarly activities, as well as the barriers, strengths, opportunities for growth, and other aspects of simulation in health care education. We utilized a two-stage process, including an online survey and a site visit that included interviews and debriefings. Forty-two simulation centers worldwide participated in this study, the results of which show that despite enormous interest and enthusiasm in the health care community, use of simulation in health care education is limited to specific areas and is not a budgeted item in many institutions. Absence of a sustainable business model, as well as sufficient financial support in terms of budget, infrastructure, manpower, research, and scholarly activities, slows down the movement of simulation. Specific recommendations are made based on current findings to support simulation in the next developmental stages. PMID:25489254

  4. Identifying Issues and Concerns with the Use of Interval-Based Systems in Single Case Research Using a Pilot Simulation Study

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Ayres, Kevin M.; Lane, Justin D.; Lam, Man Fung

    2015-01-01

    Momentary time sampling (MTS), whole interval recording (WIR), and partial interval recording (PIR) are commonly used in applied research. We discuss potential difficulties with analyzing data when these systems are used and present results from a pilot simulation study designed to determine the extent to which these issues are likely to be…
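
    A pilot-style simulation of the three systems is easy to reproduce in outline (all settings below are hypothetical): a 1-second-resolution behavior stream with exponential bout and gap lengths is scored with MTS, PIR, and WIR, and the estimates are compared with the true prevalence. PIR typically overestimates and WIR underestimates, while MTS is approximately unbiased:

```python
import numpy as np

rng = np.random.default_rng(2)

def behavior_stream(n_seconds, mean_bout=15, mean_gap=30):
    """Alternating off/on runs with exponential mean durations (in seconds)."""
    out, on = [], False
    while len(out) < n_seconds:
        mean = mean_bout if on else mean_gap
        out += [on] * (1 + int(rng.exponential(mean)))
        on = not on
    return np.array(out[:n_seconds], dtype=bool)

def score(stream, interval_len):
    k = len(stream) // interval_len
    chunks = stream[: k * interval_len].reshape(k, interval_len)
    return {"true": stream.mean(),
            "MTS": chunks[:, -1].mean(),        # status at the final moment
            "PIR": chunks.any(axis=1).mean(),   # any occurrence in interval
            "WIR": chunks.all(axis=1).mean()}   # occupied the whole interval

print(score(behavior_stream(3600), interval_len=10))
```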

  5. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

    Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full three-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. There have been community efforts for these purposes, supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction, preparing a possible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest generation of empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  6. Launch Condition Deviations of Reusable Launch Vehicle Simulations in Exo-Atmospheric Zoom Climbs

    NASA Technical Reports Server (NTRS)

    Urschel, Peter H.; Cox, Timothy H.

    2003-01-01

    The Defense Advanced Research Projects Agency has proposed a two-stage system to deliver a small payload to orbit. The proposal calls for an airplane to perform an exo-atmospheric zoom climb maneuver, from which a second-stage rocket is launched carrying the payload into orbit. The NASA Dryden Flight Research Center has conducted an in-house generic simulation study to determine how accurately a human-piloted airplane can deliver a second-stage rocket to a desired exo-atmospheric launch condition. A high-performance, fighter-type, fixed-base, real-time, pilot-in-the-loop airplane simulation has been modified to perform exo-atmospheric zoom climb maneuvers. Four research pilots tracked a reference trajectory in the presence of winds, initial offsets, and degraded engine thrust to a second-stage launch condition. These launch conditions have been compared to the reference launch condition to characterize the expected deviation. At each launch condition, a speed change was applied to the second-stage rocket to insert the payload onto a transfer orbit to the desired operational orbit. The most sensitive of the test cases was the degraded thrust case, yielding second-stage launch energies that were too low to achieve the radius of the desired operational orbit. The handling qualities of the airplane, as a first-stage vehicle, have also been investigated.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Xiaoyao; Hall, Randall W.; Löffler, Frank

    The Sign Learning Kink (SiLK) based Quantum Monte Carlo (QMC) method is used to calculate the ab initio ground state energies for multiple geometries of the H2O, N2, and F2 molecules. The method is based on Feynman's path integral formulation of quantum mechanics and has two stages. The first stage is called the learning stage and reduces the well-known QMC minus sign problem by optimizing the linear combinations of Slater determinants which are used in the second stage, a conventional QMC simulation. The method is tested using different vector spaces and compared to the results of other quantum chemical methods and to exact diagonalization. Our findings demonstrate that the SiLK method is accurate and reduces or eliminates the minus sign problem.

  8. Effect of the size of nanoparticles on their dissolution within metal-glass nanocomposites under sustained irradiation

    NASA Astrophysics Data System (ADS)

    Vu, T. H. Y.; Ramjauny, Y.; Rizza, G.; Hayoun, M.

    2016-01-01

    We investigate the dissolution law of metallic nanoparticles (NPs) under sustained irradiation. The system is composed of isolated spherical gold NPs (4-100 nm) embedded in an amorphous silica host matrix. Samples are irradiated at room temperature in the nuclear stopping power regime with 4 MeV Au ions at fluences up to 8 × 10^16 cm^-2. Experimentally, the dependence of the dissolution kinetics on the irradiation fluence is linear for large NPs (45-100 nm) and exponential for small NPs (4-25 nm). A lattice-based kinetic Monte Carlo (KMC) code, which includes atomic diffusion and ballistic displacement events, is used to simulate the dynamical competition between irradiation effects and thermal healing. The KMC simulations provide a qualitative description of the NP dissolution in two main stages, in good agreement with the experiment. Moreover, the perfect correlation obtained between the evolution of the simulated flux of ejected atoms and the two-stage dissolution rate implies that the NP size affects the dissolution and that there is a critical size for the transition between the two stages. The Frost-Russell model, which provides an analytical solution for the dissolution rate, accounts well for the first dissolution stage but fails to reproduce the data for the second stage. An improved model obtained by including a size-dependent recoil generation rate fully describes the dissolution for any NP size. This proves, in particular, that the size effect on the generation rate is the principal reason for the existence of the two regimes. Finally, our results also demonstrate that it is justified to use a unidirectional approximation to describe NP dissolution under irradiation, because the solute concentration is particularly low in metal-glass nanocomposites.

  9. The causes of recurrent geomagnetic storms

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Lepping, R. P.

    1976-01-01

    The causes of recurrent geomagnetic activity were studied by analyzing interplanetary magnetic field and plasma data from earth-orbiting spacecraft in the interval from November 1973 to February 1974. This interval included the start of two long sequences of geomagnetic activity and two corresponding corotating interplanetary streams. In general, the geomagnetic activity was related to an electric field which was due to two factors: (1) the ordered, mesoscale pattern of the stream itself, and (2) random, smaller-scale fluctuations in the southward component of the interplanetary magnetic field Bz. The geomagnetic activity in each recurrent sequence consisted of two successive stages. The first stage was usually the most intense, and it occurred during the passage of the interaction region at the front of a stream. These large amplitudes of Bz were primarily produced in the interplanetary medium by compression of ambient fluctuations as the stream steepened in transit to 1 A.U. The second stage of geomagnetic activity immediately following the first was associated with the highest speeds in the stream.

  10. Flood-inundation maps for the Tippecanoe River at Winamac, Indiana

    USGS Publications Warehouse

    Menke, Chad D.; Bunch, Aubrey R.

    2015-09-25

    For this study, flood profiles were computed for the Tippecanoe River reach by means of a one-dimensional step-backwater model. The hydraulic model was calibrated by using the most current stage-discharge relation at the Tippecanoe River streamgage, in combination with the current (2014) Federal Emergency Management Agency flood-insurance study for Pulaski County. The calibrated hydraulic model was then used to determine nine water-surface profiles for flood stages at 1-foot intervals referenced to the streamgage datum, ranging from bankfull to the highest stage of the current stage-discharge rating curve. The 1-percent annual exceedance probability (AEP) flood stage (the flood with a 100-year recurrence interval) has not yet been determined for this streamgage location; the rating has not been developed for the 1-percent AEP because the streamgage dates only to 2001. The simulated water-surface profiles were then used with a geographic information system (GIS) digital elevation model (DEM, derived from light detection and ranging [lidar] data) to delineate the area flooded at each water level. The availability of these maps, along with Internet information regarding current stage from the USGS streamgage 03331753, Tippecanoe River at Winamac, Ind., and forecast stream stages from the NWS AHPS, provides emergency management personnel and residents with information that is critical for flood-response activities such as evacuations and road closures, as well as for post-flood recovery efforts.

  11. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design; however, when the half-life of a drug is long, a parallel design may be preferred. In this study, a two-sided interval estimation - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated from the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
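
    The interval-based power calculation lends itself to a simple simulation check. A hedged sketch for the parallel design (Satterthwaite-style limits; the variance, effect size, and simulation settings are illustrative, and the paper's asymptotic formula replaces this brute-force search):

```python
import numpy as np
from scipy import stats

def abe_power(n_per_arm, sd=0.25, true_diff=0.0, level=0.90, n_sim=2000, seed=3):
    """Empirical probability that the two-sided CI falls inside (-0.223, 0.223)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        test = rng.normal(true_diff, sd, n_per_arm)   # log-scale PK responses
        ref = rng.normal(0.0, sd, n_per_arm)
        d = test.mean() - ref.mean()
        v1 = test.var(ddof=1) / n_per_arm
        v2 = ref.var(ddof=1) / n_per_arm
        df = (v1 + v2) ** 2 / (v1 ** 2 + v2 ** 2) * (n_per_arm - 1)  # Satterthwaite
        half = stats.t.ppf(1 - (1 - level) / 2, df) * np.sqrt(v1 + v2)
        hits += (-0.223 < d - half) and (d + half < 0.223)
    return hits / n_sim

# Smallest n per arm reaching 80% empirical power under these assumptions:
print(next(n for n in range(6, 200) if abe_power(n) >= 0.80))
```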

  12. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  13. Epistemic uncertainty propagation in energy flows between structural vibrating systems

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong

    2016-03-01

    A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on a Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters obtained as the α=0 cuts of the fuzzy parameters, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method as well as two current methods is also applied. Comparisons among the results of the different methods are carried out on two numerical examples, and the accuracy of all methods is verified by Monte Carlo simulation.

  14. New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration

    NASA Astrophysics Data System (ADS)

    Keshavarz, Kasra; Alizadeh, Hossein

    2017-04-01

    Water resource models are powerful tools to support the water management decision-making process and are developed to deal with a broad range of issues, including land use and climate change impact analysis, water allocation, system design and operation, waste load control and allocation, etc. These models are divided into the two categories of simulation and optimization models, whose calibration has been addressed extensively in the literature: efforts in recent decades have led to two main families of auto-calibration methods, namely uncertainty-based algorithms such as GLUE, MCMC and PEST, and optimization-based algorithms, including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms that benefit from the capabilities of both types, such as SUFI-2, have been developed, this paper proposes a new auto-calibration algorithm that is capable of both finding optimal parameter values with respect to multiple objectives, like optimization-based algorithms, and providing interval estimations of parameters, like uncertainty-based algorithms. The algorithm is developed to improve the quality of SUFI-2 results. Based on a single objective, e.g. NSE or RMSE, SUFI-2 provides a routine to find the best point and interval estimation of parameters and the corresponding prediction intervals (95PPU) of the time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures: the p-factor, quantifying the percentage of observations covered by the 95PPU, and the r-factor, quantifying the degree of uncertainty; the analyst has to select the point and interval estimation of parameters that are non-dominated with respect to both uncertainty measures. These properties of SUFI-2 raise two important questions, answering which is our research motivation: Given that the final selection in SUFI-2 is based on the two measures, and knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm is proposed in which the uncertainty measures are treated as two objectives, and non-dominated interval estimations of parameters are found by coupling Monte Carlo simulation with Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate the parameters of a water resources planning model of the Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in previous research using the WEAP software to analyze the impacts of different water resources management strategies, including dam construction, increasing cultivation area, more efficient irrigation technologies, changing crop patterns, etc. Comparing the Pareto frontier resulting from the proposed auto-calibration algorithm with the SUFI-2 results reveals that the new algorithm leads to a better and continuous Pareto frontier, even though it is more computationally expensive. Finally, the Nash and Kalai-Smorodinsky bargaining methods were used to choose a compromise interval estimation on the Pareto frontier.
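
    The two uncertainty measures treated as objectives have compact standard definitions; a minimal sketch (variable names are ours, and the 95PPU is taken as the 2.5th-97.5th percentile band of the simulated ensemble):

```python
import numpy as np

def p_factor(obs, ensemble):
    """Share of observations covered by the 95PPU band."""
    lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
    return np.mean((obs >= lo) & (obs <= hi))

def r_factor(obs, ensemble):
    """Mean 95PPU width relative to the standard deviation of the observations."""
    lo, hi = np.percentile(ensemble, [2.5, 97.5], axis=0)
    return np.mean(hi - lo) / np.std(obs, ddof=1)

# ensemble: (n_parameter_sets, n_timesteps) simulated series; obs: (n_timesteps,)
```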

  15. Two-stage crossed beam cooling with ⁶Li and ¹³³Cs atoms in microgravity.

    PubMed

    Luan, Tian; Yao, Hepeng; Wang, Lu; Li, Chen; Yang, Shifeng; Chen, Xuzong; Ma, Zhaoyuan

    2015-05-04

    Applying the direct simulation Monte Carlo (DSMC) method developed for research on ultracold Bose-Fermi mixture gases, we study the sympathetic cooling process of 6Li and 133Cs atoms in a crossed optical dipole trap. The obstacles to producing a 6Li Fermi degenerate gas via direct sympathetic cooling with 133Cs are also analyzed, and we find that the side effect of gravity is one of the main obstacles. Based on the dynamic nature of 6Li and 133Cs atoms, we suggest a two-stage cooling process with two pairs of crossed beams in a microgravity environment. According to our simulations, the 6Li atoms can be cooled to T = 29.5 pK and T/TF = 0.59 with several thousand atoms, which proposes a novel way to obtain ultracold fermionic atoms with quantum degeneracy near the pico-Kelvin level.

  16. Emergency department injury surveillance and aetiological research: bridging the gap with the two-stage case-control study design.

    PubMed

    Hagel, Brent E

    2011-04-01

    To provide an overview of the two-stage case-control study design and its potential application to ED injury surveillance data and to apply this approach to published ED data on the relation between brain injury and bicycle helmet use. Relevant background is presented on injury aetiology and case-control methodology with extension to the two-stage case-control design in the context of ED injury surveillance. The design is then applied to data from a published case-control study of the relation between brain injury and bicycle helmet use with motor vehicle involvement considered as a potential confounder. Taking into account the additional sampling at the second stage, the adjusted and corrected odds ratio and 95% confidence interval for the brain injury-helmet use relation is presented and compared with the estimate from the entire original dataset. Contexts where the two-stage case-control study design might be most appropriately applied to ED injury surveillance data are suggested. The adjusted odds ratio for the relation between brain injury and bicycle helmet use based on all data (n = 2833) from the original study was 0.34 (95% CI 0.25 to 0.46) compared with an estimate from a two-stage case-control design of 0.35 (95% CI 0.25 to 0.48) using only a fraction of the original subjects (n = 480). Application of the two-stage case-control study design to ED injury surveillance data has the potential to dramatically reduce study time and resource costs with acceptable losses in statistical efficiency.
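
    As a rough numerical illustration of the second-stage correction (all counts below are hypothetical, not the study's data): stage-two records are weighted by the inverse of their case-by-exposure sampling fractions, and a Mantel-Haenszel-type odds ratio is then computed across confounder strata on the weighted pseudo-counts:

```python
# Stage-1 counts and stage-2 subsample sizes per (case, exposed) cell.
N = {(1, 1): 300, (1, 0): 600, (0, 1): 700, (0, 0): 1200}   # hypothetical
n = {(1, 1): 80,  (1, 0): 80,  (0, 1): 80,  (0, 0): 80}
w = {cell: N[cell] / n[cell] for cell in N}   # inverse sampling fractions

def weighted_table(counts2):
    """counts2[(case, exposed)]: stage-two counts within one confounder stratum."""
    return [[counts2[(1, 1)] * w[(1, 1)], counts2[(1, 0)] * w[(1, 0)]],
            [counts2[(0, 1)] * w[(0, 1)], counts2[(0, 0)] * w[(0, 0)]]]

# e.g. two strata of motor-vehicle involvement (counts invented):
strata = [weighted_table({(1, 1): 10, (1, 0): 35, (0, 1): 30, (0, 0): 25}),
          weighted_table({(1, 1): 15, (1, 0): 20, (0, 1): 40, (0, 0): 30})]

# Mantel-Haenszel OR on the weighted pseudo-counts: sum(a*d/n) / sum(b*c/n)
num = sum(t[0][0] * t[1][1] / (t[0][0] + t[0][1] + t[1][0] + t[1][1]) for t in strata)
den = sum(t[0][1] * t[1][0] / (t[0][0] + t[0][1] + t[1][0] + t[1][1]) for t in strata)
print("corrected OR:", num / den)
```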

  17. Interval cancers in a population-based screening program for colorectal cancer in Catalonia, Spain.

    PubMed

    Garcia, M; Domènech, X; Vidal, C; Torné, E; Milà, N; Binefa, G; Benito, L; Moreno, V

    2015-01-01

    Objective. To analyze interval cancers among participants in a screening program for colorectal cancer (CRC) over four screening rounds. Methods. The study population consisted of participants in a fecal occult blood test-based screening program from February 2000 to September 2010, with a 30-month follow-up (n = 30,480). We used hospital administration data to identify CRC. An interval cancer was defined as an invasive cancer diagnosed within 30 months of a negative screening result and before the next recommended examination. Gender, age, stage, and site distribution of interval cancers were compared with those of the screen-detected group. Results. Within the study period, 97 tumors were screen-detected and 74 tumors were diagnosed after a negative screening. In addition, 17 CRC (18.3%) were found after an inconclusive result and 2 cases were diagnosed within the surveillance interval (2.1%). There was an increase in interval cancers over the four rounds (from 32.4% to 46.0%). Compared with screen-detected cancers, interval cancers were found predominantly in the rectum (OR: 3.66; 95% CI: 1.51-8.88) and at more advanced stages (P = 0.025). Conclusion. A large number of cancers are not detected through fecal occult blood test-based screening. This low sensitivity should be emphasized to ensure that individuals with symptoms are not falsely reassured.

  18. Stochastic simulation and analysis of biomolecular reaction networks

    PubMed Central

    Frazier, John M; Chushak, Yaroslav; Foy, Brent

    2009-01-01

    Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not received much attention. In order to investigate these issues, we employed a software package developed in our group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two-gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) the effect of the time-averaging interval on reaction rate analysis, (3) the effect of the number of simulations on the precision of model predictions, and (4) the implications of stochastic simulations for optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
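
    A minimal Gillespie direct-method sketch (not the BNS package itself; the birth-death network and rate constants are illustrative) shows the time-interval issue from point (1): because the state persists between events, averages should be time-weighted, which sampling the trajectory on a uniform grid approximates:

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie(x0, stoich, prop, t_max):
    """Gillespie direct method for a one-species reaction network."""
    t, x = 0.0, float(x0)
    ts, xs = [t], [x]
    while True:
        a = np.array(prop(x))        # reaction propensities at current state
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)
        if t > t_max:
            break
        x += stoich[rng.choice(len(a), p=a / a0)]
        ts.append(t)
        xs.append(x)
    return np.array(ts), np.array(xs)

# Birth-death example: 0 -> X at rate 5.0, X -> 0 at rate 0.1 * x
ts, xs = gillespie(x0=10, stoich=[+1, -1],
                   prop=lambda x: [5.0, 0.1 * x], t_max=100.0)

# Time-weighted averaging: each state holds until the next event, so sample
# the piecewise-constant trajectory on a uniform grid before averaging.
grid = np.linspace(0.0, 100.0, 1001)
x_grid = xs[np.searchsorted(ts, grid, side="right") - 1]
print("time-weighted mean copy number:", x_grid.mean())
```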

  19. Selection of the initial design for the two-stage continual reassessment method.

    PubMed

    Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M

    2017-01-01

    In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework for building the two-stage CRM has been proposed, the selection of the initial dose-escalation sequence, generally referred to as the initial design, remains arbitrary, done either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by an ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than a cohort-of-three initial design with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as a reference for planning a two-stage CRM are provided.

  20. Incomplete fuzzy data processing systems using artificial neural network

    NASA Technical Reports Server (NTRS)

    Patyra, Marek J.

    1992-01-01

    In this paper, the implementation of a fuzzy data processing system using an artificial neural network (ANN) is discussed. A binary representation of fuzzy data is assumed, where the universe of discourse is discretized into n equal intervals. The value of the membership function is represented by a binary number. It is proposed that incomplete fuzzy data processing be performed in two stages. The first stage performs the 'retrieval' of incomplete fuzzy data, and the second stage performs the desired operation on the retrieved data. The method of incomplete fuzzy data retrieval is based on linear approximation of the missing values of the membership function. The ANN implementation of the proposed system is presented. The system was computationally verified and showed a relatively small total error.
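
    The first-stage "retrieval" step reduces, in essence, to linear interpolation of missing membership grades over the discretized universe. A numpy sketch of the idea (not the paper's ANN implementation; missing grades are marked with np.nan):

```python
import numpy as np

def retrieve(membership):
    """Fill missing membership grades by linear approximation from known ones."""
    grid = np.arange(len(membership))       # n equal intervals of the universe
    known = ~np.isnan(membership)
    return np.interp(grid, grid[known], membership[known])

incomplete = np.array([0.0, 0.2, np.nan, np.nan, 1.0, 0.6, np.nan, 0.1])
print(retrieve(incomplete))   # linearly interpolated membership function
```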

  1. Multi-decadal evolution characteristics of global surface temperature anomaly data shown by observation and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Zhu, X.

    2017-12-01

    Based on statistical analysis methods, the time series of global surface air temperature (SAT) anomalies from 1860-2014 has been divided into stages characterized by three types of phase change. The ability of CMIP5 models to simulate the three types of phase change was evaluated. The conclusions are as follows: the SAT from 1860-2014 can be divided into six stages according to trend differences, and this subdivision is statistically significant. Based on trend analysis and the distribution of slopes between any two points (two-point slopes) in every stage, the six stages can be summarized as three types of phase: warming, cooling, and hiatus. Between 1860 and 2014, the world experienced three warming phases (1860-1878, 1909-1942, 1975-2004), one cooling phase (1878-1909), and two hiatus phases (1942-1975, 2004-2014). Using the definition method, whether the next year belongs to the previous phase can be estimated; the temperature in 2015 was used as an example to validate the feasibility of this method. The CMIP5 models simulate the warming periods well; however, the characteristics shown by SAT during the cooling and hiatus periods are not reproduced by the models. As such, projections of future warming phases using the CMIP5 models are credible, but projections of cooling and hiatus events are unreliable.

  2. Solar thermal upper stage technology demonstrator liquid hydrogen storage and feed system test program

    NASA Astrophysics Data System (ADS)

    Cady, E. C.

    1997-01-01

    The Solar Thermal Upper Stage Technology Demonstrator (STUSTD) Liquid Hydrogen Storage and Feed System (LHSFS) Test Program is described. The test program consists of two principal phases. First, an engineering characterization phase includes tests performed to demonstrate and understand the expected tank performance. This includes fill and drain; baseline heat leak; active Thermodynamic Vent System (TVS); and flow tests. After the LHSFS performance is understood and performance characteristics are determined, a 30 day mission simulation test will be conducted. This test will simulate a 30 day transfer mission from low earth orbit (LEO) to geosynchronous equatorial orbit (GEO). Mission performance predictions, based on the results of the engineering characterization tests, will be used to correlate the results of the 30 day mission simulation.

  3. [Development and evaluation of a small group-based cardiocerebrovascular disease prevention education program for male bus drivers].

    PubMed

    Kim, Eun Young; Hwang, Seon Young

    2012-06-01

    This study was conducted to examine effects of a small group-based cardiocerebrovascular disease (CVD) prevention education program on knowledge, stage of change and health behavior among male bus drivers with CVD risk factors. A non-equivalent control group pretest-posttest design was used. Participants were 68 male bus drivers recruited from two urban bus companies. Participants from the two groups were selected by matching age, education and risk factors. Experimental group (n=34) received a small group-based CVD prevention education program 8 times over 6 weeks and 3 times through telephone interviews at 2-week intervals. Data were collected between December, 2010 and March, 2011, and were analyzed using chi-square test, t-test, and repeated measure analysis of variance with SPSS/Win18.0. Experimental group showed significantly higher scores in CVD prevention knowledge (p<.001) and health behavior (p<.001) at 6 and 12 weeks after intervention. Participants in pre-contemplation and contemplation stages made progress to contemplation and action. This was significantly better at 6 and 12 weeks after intervention (p<.001). Results suggest that small group-based education programs for CVD prevention are effective in increasing knowledge, stage of change, and health behavior to prevent CVD among male bus drivers with CVD risk.

  4. A novel hybrid actuation mechanism based XY nanopositioning stage with totally decoupled kinematics

    NASA Astrophysics Data System (ADS)

    Zhu, Wu-Le; Zhu, Zhiwei; Guo, Ping; Ju, Bing-Feng

    2018-01-01

    This paper reports the design, analysis and testing of a parallel two degree-of-freedom piezo-actuated compliant stage for XY nanopositioning by introducing an innovative hybrid actuation mechanism. It mainly features the combination of two Scott-Russell and a half-bridge mechanisms for double-stage displacement amplification as well as moving direction modulation. By adopting the leaf-type double parallelogram (LTDP) structures at both input and output ends of the hybrid mechanism, the lateral stiffness and dynamic characteristics are significantly improved while the parasitic motions are greatly eliminated. The XY nanopositioning stage is constructed with two orthogonally configured hybrid mechanisms along with the LTDP mechanisms for totally decoupled kinematics at both input and output ends. An analytical model was established to describe the complete elastic deformation behavior of the stage, with further verification through the finite element simulation. Finally, experiments were implemented to comprehensively evaluate both the static and dynamic performances of the proposed stage. Closed-loop control of the piezoelectric actuators (PEA) by integrating strain gauges was also conducted to effectively eliminate the nonlinear hysteresis of the stage.

  5. A Novel Finite-Sum Inequality-Based Method for Robust H∞ Control of Uncertain Discrete-Time Takagi-Sugeno Fuzzy Systems With Interval-Like Time-Varying Delays.

    PubMed

    Zhang, Xian-Ming; Han, Qing-Long; Ge, Xiaohua

    2017-09-22

    This paper is concerned with the problem of robust H∞ control of an uncertain discrete-time Takagi-Sugeno fuzzy system with an interval-like time-varying delay. A novel finite-sum inequality-based method is proposed to provide a tighter estimation on the forward difference of certain Lyapunov functional, leading to a less conservative result. First, an auxiliary vector function is used to establish two finite-sum inequalities, which can produce tighter bounds for the finite-sum terms appearing in the forward difference of the Lyapunov functional. Second, a matrix-based quadratic convex approach is employed to equivalently convert the original matrix inequality including a quadratic polynomial on the time-varying delay into two boundary matrix inequalities, which delivers a less conservative bounded real lemma (BRL) for the resultant closed-loop system. Third, based on the BRL, a novel sufficient condition on the existence of suitable robust H∞ fuzzy controllers is derived. Finally, two numerical examples and a computer-simulated truck-trailer system are provided to show the effectiveness of the obtained results.

  6. Estimating degradation in real time and accelerated stability tests with random lot-to-lot variation: a simulation study.

    PubMed

    Magari, Robert T

    2002-03-01

    The effect of different levels of lot-to-lot variability on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When the lot-to-lot degradation rate variability is relatively large (CV ≥ 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
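
    A small simulation in the spirit of the study (all parameters are hypothetical, and per-lot least-squares fits stand in for the paper's maximum likelihood estimation) shows how the spread of fitted degradation rates grows with the lot-to-lot CV:

```python
import numpy as np

rng = np.random.default_rng(5)
months = np.arange(0.0, 25.0, 3.0)

def simulate_rates(cv_rate, n_lots=8, mean_rate=-0.8, intercept=100.0, noise=0.5):
    """Random intercept and degradation rate per lot; return fitted rates."""
    rates = []
    for _ in range(n_lots):
        slope = mean_rate * (1.0 + cv_rate * rng.standard_normal())
        y = (intercept + rng.normal(0, 1.0)         # lot-specific time-zero level
             + slope * months                        # lot-specific degradation
             + rng.normal(0, noise, months.size))    # assay noise
        rates.append(np.polyfit(months, y, 1)[0])    # fitted degradation rate
    return np.array(rates)

for cv in (0.02, 0.08, 0.15):
    r = simulate_rates(cv)
    print(f"CV={cv:.2f}: mean rate {r.mean():.3f}, spread {r.std(ddof=1):.3f}")
```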

  8. Computer simulation of heterogeneous polymer photovoltaic devices

    NASA Astrophysics Data System (ADS)

    Kodali, Hari K.; Ganapathysubramanian, Baskar

    2012-04-01

    Polymer-based photovoltaic devices have the potential for widespread usage due to their low cost per watt and mechanical flexibility. Efficiencies close to 9.0% have been achieved recently in conjugated polymer based organic solar cells (OSCs). These devices were fabricated using solvent-based processing of electron-donating and electron-accepting materials into the so-called bulk heterojunction (BHJ) architecture. Experimental evidence suggests that a key property determining the power-conversion efficiency of such devices is the final morphological distribution of the donor and acceptor constituents. In order to understand the role of morphology on device performance, we develop a scalable computational framework that efficiently interrogates OSCs to investigate relationships between the nano-scale morphology and the device performance. In this work, we extend the Buxton and Clarke model (2007 Modelling Simul. Mater. Sci. Eng. 15 13-26) to simulate realistic devices with complex active layer morphologies using a dimensionally independent, scalable, finite-element method. We incorporate all stages involved in current generation, namely (1) exciton generation and diffusion, (2) charge generation and (3) charge transport, in a modular fashion. The numerical challenges encountered during interrogation of realistic microstructures are detailed. We compare each stage of the photovoltaic process for two microstructures: a BHJ morphology and an idealized sawtooth morphology. The results are presented for both two- and three-dimensional structures.

  9. Periodicity in extinction and the problem of catastrophism in the history of life

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J., Jr. (Principal Investigator)

    1989-01-01

    The hypothesis that extinction events have recurred periodically over the last quarter billion years is greatly strengthened by new data on the stratigraphic ranges of marine animal genera. In the interval from the Permian to Recent, these data encompass some 13,000 generic extinctions, providing a more sensitive indicator of species-level extinctions than previously used familial data. Extinction time series computed from the generic data display nine strong peaks that are nearly uniformly spaced at 26 Ma intervals over the last 270 Ma. Most of these peaks correspond to extinction events recognized in more detailed, if limited, biostratigraphic studies. These new data weaken or negate most arguments against periodicity, which have involved criticisms of the taxonomic data base, sampling intervals, chronometric time scales, and statistical methods used in previous analyses. The criticisms are reviewed in some detail, and various new calculations and simulations, including one assessing the effects of paraphyletic taxa, are presented. Although the new data strengthen the case for periodicity, they offer little new insight into the driving mechanism behind the pattern. However, they do suggest that many of the periodic events may not have been catastrophic, occurring instead over several stratigraphic stages or substages.

  10. Interleukin-1β gene variants are associated with QTc interval prolongation following cardiac surgery: a prospective observational study

    PubMed Central

    Kertai, Miklos D.; Ji, Yunqi; Li, Yi-Ju; Mathew, Joseph P.; Daubert, James P.; Podgoreanu, Mihai V.

    2016-01-01

    Background We characterized cardiac surgery-induced dynamic changes of the corrected QT (QTc) interval and tested the hypothesis that genetic factors are associated with perioperative QTc prolongation independent of clinical and procedural factors. Methods All study subjects were ascertained from a prospective study of patients who underwent elective cardiac surgery between August 1999 and April 2002. We defined a prolonged QTc interval as >440 msec, measured from 24-hr pre- and postoperative 12-lead electrocardiograms. The association of 37 single nucleotide polymorphisms (SNPs) in 21 candidate genes (involved in modulating arrhythmia susceptibility pathways) with postoperative QTc changes was investigated in a two-stage design with a stage I cohort (n = 497) nested within a stage II cohort (n = 957). Empirical P values (Pemp) were obtained by permutation tests with 10,000 repeats. Results After adjusting for clinical and procedural risk factors, we selected four SNPs (P value range, 0.03-0.1) in stage I, which we then tested in the stage II cohort. Two functional SNPs in the pro-inflammatory cytokine interleukin-1β gene (IL1B), rs1143633 (odds ratio [OR], 0.71; 95% confidence interval [CI], 0.53 to 0.95; Pemp = 0.02) and rs16944 (OR, 1.31; 95% CI, 1.01 to 1.70; Pemp = 0.04), remained independent predictors of postoperative QTc prolongation. The ability of a clinico-genetic model incorporating the two IL1B polymorphisms to classify patients at risk of developing prolonged postoperative QTc was superior to that of a clinical model alone, with a net reclassification improvement of 0.308 (P = 0.0003) and an integrated discrimination improvement of 0.02 (P = 0.000024). Conclusion The results suggest a contribution of IL1B in modulating susceptibility to postoperative QTc prolongation after cardiac surgery. PMID:26858093

  11. Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2O emissions.

    PubMed

    Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing

    2018-02-01

    Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation within multi-species agricultural contexts. We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed by the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced the prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%), and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to predict productivity and N2O emissions jointly at field scale is discussed. © 2017 John Wiley & Sons Ltd.

  12. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. The phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to obtain in general. To reflect the actual situation, we consider the condition in which the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method outperforms the MUSIC algorithm alone and the cross-correlation algorithm applied over the whole interval.
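
    The cross-correlation-over-an-interval step can be illustrated at baseband (a sketch under invented signal parameters; the coarse estimate around which the interval is centered is assumed given, e.g. from a MUSIC stage, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(6)
n, true_delay = 4096, 137                      # samples
s = rng.standard_normal(n)                     # known transmitted waveform
r = np.concatenate([np.zeros(true_delay), s])[:n] + 0.7 * rng.standard_normal(n)

xc = np.correlate(r, s, mode="full")           # lags -(n-1) .. (n-1)
lags = np.arange(-(n - 1), n)

# Restrict the peak search to an appropriate interval around a coarse TOA
# estimate instead of scanning the whole lag axis.
coarse, half_window = 130, 20
mask = (lags >= coarse - half_window) & (lags <= coarse + half_window)
toa_samples = lags[mask][np.argmax(xc[mask])]
print(toa_samples)                             # close to 137
```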

  13. Simulation of energy spectrum of GEM detector from an x-ray quantum

    NASA Astrophysics Data System (ADS)

    Malinowski, K.; Chernyshova, M.; Czarski, T.; Kowalska-Strzęciwilk, E.; Linczuk, P.; Wojeński, A.; Krawczyk, R.; Gąska, M.

    2018-01-01

    This paper presents the results of an energy resolution simulation for a triple-GEM-based detector for an x-ray quantum of 5.9 keV. Photons of this energy are emitted by a 55Fe source, which is a standard calibration marker for this type of detector. The calculations were made in Garfield++ in two stages. In the first stage, the distribution of the number of primary electrons generated in the drift volume by the x-ray quantum was simulated using the Heed program. In the second stage, the primary electrons of the resulting quantitative distribution were treated as a source of electron avalanches propagated through the whole volume of the triple-GEM-based detector. The distribution of the obtained signals created a spectrum corresponding to the peak at 5.9 keV, which allowed us to determine the theoretical energy resolution of the detector. Knowledge of this resolution allows observing and mitigating the experimental deterioration of the energy resolution that inevitably accompanies the registration and processing of the signals.

  14. An enhanced export coefficient based optimization model for supporting agricultural nonpoint source pollution mitigation under uncertainty.

    PubMed

    Rong, Qiangqiang; Cai, Yanpeng; Chen, Bing; Yue, Wencong; Yin, Xin'an; Tan, Qian

    2017-02-15

    In this research, an export coefficient based dual inexact two-stage stochastic credibility constrained programming (ECDITSCCP) model was developed through integrating an improved export coefficient model (ECM), interval linear programming (ILP), fuzzy credibility constrained programming (FCCP) and a fuzzy expected value equation within a general two-stage programming (TSP) framework. The proposed ECDITSCCP model can effectively address multiple uncertainties expressed as random variables, fuzzy numbers, and pure and dual intervals. Also, the model can provide a direct linkage between pre-regulated management policies and the associated economic implications. Moreover, solutions under multiple credibility levels can be obtained, providing potential decision alternatives for decision makers. The proposed model was then applied to identify optimal land use structures for agricultural NPS pollution mitigation in a representative upstream subcatchment of the Miyun Reservoir watershed in north China. Optimal solutions of the model were successfully obtained, indicating desired land use patterns and nutrient discharge schemes that maximize agricultural system benefits under a limited discharge permit. Also, the numerous results under multiple credibility levels could provide policy makers with several options, which could help strike an appropriate balance between system benefits and pollution mitigation. The developed ECDITSCCP model can be effectively applied to addressing the uncertain information in agricultural systems and shows great applicability to land use adjustment for agricultural NPS pollution mitigation. Copyright © 2016 Elsevier B.V. All rights reserved.
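
    The export-coefficient backbone of such a model fits in a few lines: the nutrient load is the sum of land-use areas times their export coefficients, and interval-valued coefficients yield an interval load. A sketch with hypothetical coefficients and areas (the full ECDITSCCP adds the stochastic, fuzzy, and two-stage layers on top):

```python
# Interval export coefficients (kg N per ha per year) and land-use areas (ha).
coeff = {"cropland": (8.0, 12.0), "orchard": (5.0, 9.0), "forest": (1.0, 2.0)}
area = {"cropland": 450.0, "orchard": 120.0, "forest": 1300.0}

# Interval load: L = sum_i E_i * A_i evaluated at lower and upper coefficients.
load_lo = sum(coeff[u][0] * area[u] for u in area)
load_hi = sum(coeff[u][1] * area[u] for u in area)
print(f"annual N export in [{load_lo:.0f}, {load_hi:.0f}] kg")
```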

  15. Inhibitor design strategy based on an enzyme structural flexibility: a case of bacterial MurD ligase.

    PubMed

    Perdih, Andrej; Hrast, Martina; Barreteau, Hélène; Gobec, Stanislav; Wolber, Gerhard; Solmajer, Tom

    2014-05-27

    Increasing bacterial resistance to available antibiotics has stimulated the search for novel efficacious antibacterial agents. The biosynthesis of the bacterial peptidoglycan, where the MurD enzyme is involved in the intracellular phase of the UDP-MurNAc-pentapeptide formation, represents a collection of highly selective targets for novel antibacterial drug design. In our previous computational studies, the C-terminal domain motion of the MurD ligase was investigated using Targeted Molecular Dynamics (TMD) simulation and the Off-Path Simulation (OPS) technique. In this study, we present a drug design strategy using multiple protein structures for the identification of novel MurD ligase inhibitors. Our main focus was the ATP-binding site of the MurD enzyme. In the first stage, three MurD protein conformations were selected based on the obtained OPS/TMD data as the initial criterion. Subsequently, a two-stage virtual screening approach was utilized, combining derived structure-based pharmacophores with molecular docking calculations. Selected compounds were then assayed in the established enzyme binding assays, and compound 3 from the aminothiazole class was discovered to act as a dual MurC/MurD inhibitor in the micromolar range. A steady-state kinetic study was performed on the MurD enzyme to provide further information about the mechanistic aspects of its inhibition. In the final stage, all used conformations of the MurD enzyme with compound 3 were simulated in classical molecular dynamics (MD) simulations, providing atomistic insight into the experimental results. Overall, the study highlights several challenges that need to be addressed when trying to hit a flexible, moving target such as the bacterial MurD enzyme studied here, and shows how computational tools can be used proficiently at all stages of the drug discovery process.

  16. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model

    PubMed Central

    2010-01-01

    Background Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. Methods A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. Results The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90,0.90). Conclusions This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials. PMID:20433705

  17. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model.

    PubMed

    Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M

    2010-04-30

    Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90, 0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.
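
    A deliberately simplified sketch of this time-to-event logic is given below: constant (exponential) hazards stand in for the literature-based age-dependent incidences, AD and death times are assumed independent, and false-positive treatment is ignored since it affects treatment burden rather than case counts.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000                       # screened asymptomatic patients, age >= 55
SENS = 0.87                    # screening sensitivity (from the abstract)
SLOW = 0.5                     # treatment slows AD progression by 50%

# Hypothetical constant hazards per year; the study instead scheduled
# events from literature-based, age-dependent incidences of AD and death.
H_AD, H_DEATH = 0.02, 0.04

t_death = rng.exponential(1.0 / H_DEATH, N)
t_ad = rng.exponential(1.0 / H_AD, N)           # untreated time to AD onset
treated = rng.random(N) < SENS                  # true positives get treatment
t_ad_rx = np.where(treated, t_ad / SLOW, t_ad)  # 50% slowing doubles onset time

avoided = np.sum(t_ad < t_death) - np.sum(t_ad_rx < t_death)
ad_free_years_gained = np.sum(np.minimum(t_ad_rx, t_death)
                              - np.minimum(t_ad, t_death))
print(f"AD cases avoided per {N} screened: {avoided}")
print(f"incremental AD-free years: {ad_free_years_gained:.0f}")
```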

  18. Conventional 3D staging PET/CT in CT simulation for lung cancer: impact of rigid and deformable target volume alignments for radiotherapy treatment planning.

    PubMed

    Hanna, G G; Van Sörnsen De Koste, J R; Carson, K J; O'Sullivan, J M; Hounsell, A R; Senan, S

    2011-10-01

    Positron emission tomography (PET)/CT scans can improve target definition in radiotherapy for non-small cell lung cancer (NSCLC). As staging PET/CT scans are increasingly available, we evaluated different methods for co-registration of staging PET/CT data to radiotherapy simulation (RTP) scans. 10 patients underwent staging PET/CT followed by RTP PET/CT. On both scans, gross tumour volumes (GTVs) were delineated using CT (GTV(CT)) and PET display settings. Four PET-based contours (manual delineation, two threshold methods and a source-to-background ratio method) were delineated. The CT component of the staging scan was co-registered using both rigid and deformable techniques to the CT component of RTP PET/CT. Subsequently, rigid registration and deformation warps were used to transfer PET and CT contours from the staging scan to the RTP scan. Dice's similarity coefficient (DSC) was used to assess the registration accuracy of staging-based GTVs following both registration methods with the GTVs delineated on the RTP PET/CT scan. When the GTV(CT) delineated on the staging scan after both rigid registration and deformation was compared with the GTV(CT) on the RTP scan, a significant improvement in overlap (registration) using deformation was observed (mean DSC 0.66 for rigid registration and 0.82 for deformable registration, p = 0.008). A similar comparison for PET contours revealed no significant improvement in overlap with the use of deformable registration. No consistent improvements in similarity measures were observed when deformable registration was used for transferring PET-based contours from a staging PET/CT. This suggests that currently the use of rigid registration remains the most appropriate method for RTP in NSCLC.
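
    Dice's similarity coefficient itself is straightforward to compute from two binary contour masks; a minimal sketch (toy masks, numpy only):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice's similarity coefficient between two boolean volume masks:
    DSC = 2|A intersect B| / (|A| + |B|); 1 = perfect overlap, 0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping "GTV" masks on a 3D voxel grid.
g1 = np.zeros((50, 50, 50), bool); g1[10:30, 10:30, 10:30] = True
g2 = np.zeros((50, 50, 50), bool); g2[15:35, 12:32, 10:30] = True
print(round(dice(g1, g2), 3))
```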

  19. A hierarchical fire frequency model to simulate temporal patterns of fire regimes in LANDIS

    Treesearch

    Jian Yang; Hong S. He; Eric J. Gustafson

    2004-01-01

    Fire disturbance has important ecological effects in many forest landscapes. Existing statistically based approaches can be used to examine the effects of a fire regime on forest landscape dynamics. Most examples of statistically based fire models divide a fire occurrence into two stages--fire ignition and fire initiation. However, the exponential and Weibull fire-...

  20. Flood-inundation maps for the Withlacoochee River From Skipper Bridge Road to St. Augustine Road, within the City of Valdosta, Georgia, and Lowndes County, Georgia

    USGS Publications Warehouse

    Musser, Jonathan W.

    2018-01-31

    Digital flood-inundation maps for a 12.6-mile reach of the Withlacoochee River from Skipper Bridge Road to St. Augustine Road (Georgia State Route 133) were developed to depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the U.S. Geological Survey (USGS) streamgage at Withlacoochee River at Skipper Bridge Road, near Bemiss, Ga. (023177483). Real-time stage information from this streamgage can be used with these maps to estimate near real-time areas of inundation. The forecasted peak-stage information for the USGS streamgage at Withlacoochee River at Skipper Bridge Road, near Bemiss, Ga. (023177483), can be used in conjunction with the maps developed for this study to show predicted areas of flood inundation. A one-dimensional step-backwater model was developed using the U.S. Army Corps of Engineers Hydrologic Engineering Center's River Analysis System (HEC–RAS) software for the Withlacoochee River and was used to compute flood profiles for a 12.6-mile reach of the Withlacoochee River. The hydraulic model was then used to simulate 23 water-surface profiles at 1.0-foot (ft) intervals at the Withlacoochee River near the Bemiss streamgage. The profiles ranged from the National Weather Service action stage of 10.7 ft, which is 131.0 ft above the North American Vertical Datum of 1988 (NAVD 88), to a stage of 32.7 ft, which is 153.0 ft above NAVD 88. The simulated water-surface profiles were then combined with a geographic information system digital elevation model—derived from light detection and ranging (lidar) data having a 4.0-ft horizontal resolution—to delineate the area flooded at each 1.0-ft interval of stream stage.
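
    The final mapping step reduces, in essence, to subtracting the DEM from each simulated water-surface grid; a minimal sketch with synthetic terrain (real workflows also enforce hydraulic connectivity to the channel):

```python
import numpy as np

def inundation_map(dem: np.ndarray, water_surface: np.ndarray):
    """Given a DEM and a water-surface elevation grid (same shape, same
    vertical datum, e.g. NAVD 88 feet), return the flood-depth grid and a
    mask of inundated cells. Sketch only; connectivity checks omitted."""
    depth = water_surface - dem
    flooded = depth > 0.0
    return np.where(flooded, depth, 0.0), flooded

# Toy usage: a planar water surface at 131.0 ft over random terrain.
dem = 128.0 + 6.0 * np.random.rand(100, 100)
depth, mask = inundation_map(dem, np.full((100, 100), 131.0))
print(f"{mask.mean():.0%} of cells inundated, max depth {depth.max():.1f} ft")
```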

  1. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas the repeat-purchase rate has a positive one. Using logistic and regression analyses, a global sensitivity analysis examines the interaction of all parameters, providing a two-stage method to estimate the impact of the relevant parameters when they are known only imprecisely and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
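
    The external and internal coefficients referred to here are those of a Bass-type diffusion model. A minimal discrete-time sketch, with free samples modeled under one simple assumption (seeded adopters at time zero), shows why sampling accelerates early diffusion, which is the trade-off the two-stage method optimizes:

```python
import numpy as np

def bass_adoption(p, q, m, samples, periods=40):
    """Discrete-time Bass diffusion sketch: p is the external (innovation)
    coefficient, q the internal (imitation) coefficient, m the potential
    market. Free samples are modeled, as a simple assumption, as that many
    adopters seeded at t = 0."""
    adopters = min(samples, m)
    path = [adopters]
    for _ in range(periods):
        remaining = m - adopters
        new = (p + q * adopters / m) * remaining  # mixed influence
        adopters += new
        path.append(adopters)
    return np.array(path)

# More sampling accelerates early diffusion; the optimal level must trade
# this acceleration against sampling cost (the paper's two-stage question).
for s in (0, 1000, 5000):
    print(s, bass_adoption(p=0.03, q=0.38, m=100_000, samples=s)[5].round())
```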

  2. Continuous EEG signal analysis for asynchronous BCI application.

    PubMed

    Hsu, Wei-Yen

    2011-08-01

    In this study, we propose a two-stage recognition system for the continuous analysis of electroencephalogram (EEG) signals. Independent component analysis (ICA) and correlation coefficients are used to automatically eliminate electrooculography (EOG) artifacts. Based on the continuous wavelet transform (CWT) and Student's two-sample t-statistics, active segment selection then detects the location of the active segment in the time-frequency domain. Next, multiresolution fractal feature vectors (MFFVs) are extracted with the proposed modified fractal dimension from the wavelet data. Finally, a support vector machine (SVM) is adopted for the robust classification of MFFVs. The EEG signals are continuously analyzed in 1-s segments advanced every 0.5 s to simulate asynchronous BCI operation in the two-stage recognition architecture. In the first stage, each segment is classified as lifting or non-lifting; segments recognized as lifting are then classified as left or right finger lifting in the second stage. Several statistical analyses are used to evaluate the performance of the proposed system. The results indicate that it is a promising system for applications in asynchronous BCI work.
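
    The sliding-window control flow of the two-stage architecture can be sketched compactly; the stage-1 and stage-2 classifiers below are dummy stand-ins for the paper's trained detectors (e.g., an SVM over multiresolution fractal features):

```python
import numpy as np

def two_stage_stream(eeg, fs, detect_lift, classify_side):
    """Asynchronous two-stage recognition over a continuous EEG stream:
    analyze 1-s segments advanced every 0.5 s; stage 1 decides lift /
    no-lift, stage 2 labels left vs. right only for detected lifts."""
    win, hop = int(fs), int(0.5 * fs)
    labels = []
    for start in range(0, len(eeg) - win + 1, hop):
        seg = eeg[start:start + win]
        if detect_lift(seg):                       # stage 1
            labels.append(classify_side(seg))      # stage 2: 'left'/'right'
        else:
            labels.append('rest')
    return labels

# Dummy stand-ins so the sketch runs end to end.
fs = 250
eeg = np.random.randn(10 * fs)
print(two_stage_stream(
    eeg, fs,
    detect_lift=lambda s: s.std() > 1.02,
    classify_side=lambda s: 'left' if s.mean() < 0 else 'right')[:6])
```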

  3. Effects of yeast extract and vitamin D on turkey mortality and cellulitis incidence in a transport stress model.

    USDA-ARS?s Scientific Manuscript database

    We evaluated yeast extract (YE) and vitamin D (VD) in turkeys treated with dexamethasone (Dex) at intervals designed to simulate transport stress during a 3 stage growout. YE but not VD decreased early mortality (P = 0.001) and mortality at wk 7 (P= 0.02) and wk 12 (P = 0.002) but not wk 16. Celluli...

  4. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    PubMed

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
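
    For reference, the second stage of a classical two-stage meta-analysis is just inverse-variance pooling of per-database estimates; a fixed-effect sketch with hypothetical numbers (not ARITMO's):

```python
import numpy as np

def two_stage_meta(log_ors, ses):
    """Stage 1 (done upstream): estimate a log odds ratio and its standard
    error within each database. Stage 2 (shown here): pool them with
    fixed-effect inverse-variance weights."""
    w = 1.0 / np.asarray(ses) ** 2
    pooled = np.sum(w * log_ors) / w.sum()
    se = np.sqrt(1.0 / w.sum())
    ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
    return np.exp(pooled), ci

# Toy per-database estimates (hypothetical numbers for illustration).
or_hat, (lo, hi) = two_stage_meta(np.log([1.4, 1.7, 1.5]), [0.20, 0.25, 0.30])
print(f"pooled OR {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```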

  5. Compact high-flux two-stage solar collectors based on tailored edge-ray concentrators

    NASA Astrophysics Data System (ADS)

    Friedman, Robert P.; Gordon, Jeffrey M.; Ries, Harald

    1995-08-01

    Using the recently invented tailored edge-ray concentrator (TERC) approach for the design of compact two-stage high-flux solar collectors--a focusing primary reflector and a nonimaging TERC secondary reflector--we present: 1) a new primary reflector shape based on the TERC approach, and a secondary TERC tailored to its particular flux map, such that more compact concentrators emerge at flux concentration levels in excess of 90% of the thermodynamic limit; and 2) calculations and ray-trace simulation results which demonstrate that V-cone approximations to a wide variety of TERCs attain the concentration of the TERC to within a few percent, and hence represent practical secondary concentrators that may be superior to corresponding compound parabolic concentrator or trumpet secondaries.

  6. Real waiting times for surgery. Proposal for an improved system for their management.

    PubMed

    Abásolo, Ignacio; Barber, Patricia; González López-Valcárcel, Beatriz; Jiménez, Octavio

    2014-01-01

    In Spain, official information on waiting times for surgery is based on the interval between the indication for surgery and its performance. We aimed to estimate total waiting times for surgical procedures, including outpatient visits and diagnostic tests prior to surgery. In addition, we propose an alternative system to manage total waiting times that reduces variability and maximum waiting times without increasing the use of health care resources. This system is illustrated by three surgical procedures: cholecystectomy, carpal tunnel release and inguinal/femoral hernia repair. Using data from two Autonomous Communities, we fitted, through simulation, a theoretical distribution of the total waiting time, assuming independence of the waiting times of each stage of the clinical procedure. We show an alternative system in which the waiting time for the second consultation is established according to the time previously waited for the first consultation. Average total waiting times for cholecystectomy, carpal tunnel release and inguinal/femoral hernia repair were 331, 355 and 137 days, respectively (official figures are 83, 68 and 73 days, respectively). Introducing negative correlations between the waiting times for subsequent consultations would reduce maximum waiting times by between 2% and 15% and substantially reduce heterogeneity among patients, without generating higher resource use. Total waiting times are between two and five times higher than those officially published. The relationship between the waiting times at each stage of the medical procedure may be used to decrease variability and maximum waiting times. Copyright © 2013 SESPAS. Published by Elsevier España. All rights reserved.

  7. Depth extraction method with high accuracy in integral imaging based on moving array lenslet technique

    NASA Astrophysics Data System (ADS)

    Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing

    2018-03-01

    In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within one pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum-modulus-difference (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
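
    A common form of the SMD focus measure sums absolute differences between neighboring pixels, so that sharper (in-focus) slices score higher; below is a sketch of a depth search over a reconstructed slice stack (details may differ from the paper's exact metric):

```python
import numpy as np

def smd(image: np.ndarray) -> float:
    """Sum-modulus-difference focus metric: total absolute intensity
    difference between neighboring pixels, larger for sharper slices."""
    dy = np.abs(np.diff(image, axis=0)).sum()
    dx = np.abs(np.diff(image, axis=1)).sum()
    return float(dx + dy)

# Depth search: the slice maximizing the metric marks the in-focus plane.
# Toy stack of three "reconstructed slices" with different sharpness.
stack = [np.random.rand(64, 64) * s for s in (0.2, 1.0, 0.5)]
best = max(range(len(stack)), key=lambda i: smd(stack[i]))
print("estimated in-focus slice index:", best)
```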

  8. Montessori-based activities among persons with late-stage dementia: Evaluation of mental and behavioral health outcomes.

    PubMed

    Wilks, Scott E; Boyd, P August; Bates, Samantha M; Cain, Daphne S; Geiger, Jennifer R

    2017-01-01

    Objectives Literature regarding Montessori-based activities with older adults with dementia is fairly common with early stages of dementia. Conversely, research on said activities with individuals experiencing late-stage dementia is limited because of logistical difficulties in sampling and data collection. Given the need to understand risks and benefits of treatments for individuals with late-stage dementia, specifically regarding their mental and behavioral health, this study sought to evaluate the effects of a Montessori-based activity program implemented in a long-term care facility. Method Utilizing an interrupted time series design, trained staff completed observation-based measures for 43 residents with late-stage dementia at three intervals over six months. Empirical measures assessed mental health (anxiety, psychological well-being, quality of life) and behavioral health (problem behaviors, social engagement, capacity for activities of daily living). Results Group differences were observed via repeated measures ANOVA and paired-samples t-tests. The aggregate longitudinal results, from baseline to the final data interval, for the psychological and behavioral health measures were as follows: problem behaviors diminished though not significantly; social engagement decreased significantly; capacities for activities of daily living decreased significantly; quality of life increased slightly but not significantly; anxiety decreased slightly but not significantly; and psychological well-being significantly decreased. Conclusion Improvements observed for quality of life and problem behaviors may yield promise for Montessori-based activities and related health care practices. The rapid physiological and cognitive deterioration from late-stage dementia should be considered when interpreting these results.

  9. Laminar and Turbulent Dynamos in Chiral Magnetohydrodynamics. II. Simulations

    NASA Astrophysics Data System (ADS)

    Schober, Jennifer; Rogachevskii, Igor; Brandenburg, Axel; Boyarsky, Alexey; Fröhlich, Jürg; Ruchayskiy, Oleg; Kleeorin, Nathan

    2018-05-01

    Using direct numerical simulations (DNS), we study laminar and turbulent dynamos in chiral magnetohydrodynamics with an extended set of equations that accounts for an additional contribution to the electric current due to the chiral magnetic effect (CME). This quantum phenomenon originates from an asymmetry between left- and right-handed relativistic fermions in the presence of a magnetic field and gives rise to a chiral dynamo. We show that the magnetic field evolution proceeds in three stages: (1) a small-scale chiral dynamo instability, (2) production of chiral magnetically driven turbulence and excitation of a large-scale dynamo instability due to a new chiral effect (the α_μ effect), and (3) saturation of magnetic helicity and magnetic field growth controlled by a conservation law for the total chirality. The α_μ effect becomes dominant at large fluid and magnetic Reynolds numbers and is not related to kinetic helicity. The growth rate of the large-scale magnetic field and its characteristic scale measured in the numerical simulations agree well with theoretical predictions based on mean-field theory. The previously discussed two-stage chiral magnetic scenario did not include stage (2), during which the characteristic scale of magnetic field variations can increase by many orders of magnitude. Based on the findings from numerical simulations, the relevance of the CME and the chiral effects revealed here to the relativistic plasma of the early universe and to proto-neutron stars is discussed.

  10. Early stages of the recovery stroke in myosin II studied by molecular dynamics simulations

    PubMed Central

    Baumketner, Andrij; Nesmelov, Yuri

    2011-01-01

    The recovery stroke is a key step in the functional cycle of the muscle motor protein myosin, during which the pre-recovery conformation of the protein changes into the active post-recovery conformation, ready to exert force. We study the microscopic details of this transition using molecular dynamics simulations of atomistic models in implicit and explicit solvent. In more than 2 μs of aggregate simulation time, we uncover evidence that the recovery stroke is a two-step process consisting of two stages separated by a time delay. In our simulations, we directly observe the first stage, at which the switch II loop closes in the presence of adenosine triphosphate at the nucleotide binding site. The resulting configuration of the nucleotide binding site is identical to that detected experimentally. The distribution of inter-residue distances measured in the force-generating region of myosin is in good agreement with the experimental data. The second stage of the recovery stroke structural transition, rotation of the converter domain, was not observed in our simulations; apparently it occurs on a longer time scale. We suggest that the two parts of the recovery stroke need to be studied using separate computational models. PMID:21922589

  11. Test Equality between Three Treatments under an Incomplete Block Crossover Design.

    PubMed

    Lui, Kung-Jong

    2015-01-01

    Under a random effects linear additive risk model, we compare two experimental treatments with a placebo in continuous data under an incomplete block crossover trial. We develop three test procedures for simultaneously testing equality between two experimental treatments and a placebo, as well as interval estimators for the mean difference between treatments. We apply Monte Carlo simulations to evaluate the performance of these test procedures and interval estimators in a variety of situations. We note that the bivariate test procedure accounting for the dependence structure based on the F-test is preferable to the other two procedures when only one of the two experimental treatments has a non-zero effect vs. the placebo. We note further that when the effects of the two experimental treatments vs. a placebo are in the same relative direction and are approximately of equal magnitude, the summary test procedure based on a simple average of two weighted-least-squares (WLS) estimators can outperform the other two procedures with respect to power. When one of the two experimental treatments has a relatively large effect vs. the placebo, the univariate test procedure using Bonferroni's inequality can still be of use. Finally, we use forced expiratory volume in 1 s (FEV1) readings taken from a double-blind crossover trial comparing two different doses of formoterol with a placebo to illustrate the use of the test procedures and interval estimators proposed here.

  12. Application of Adaptive Design Methodology in Development of a Long-Acting Glucagon-Like Peptide-1 Analog (Dulaglutide): Statistical Design and Simulations

    PubMed Central

    Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda

    2012-01-01

    Background Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm—including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775

  13. On the kinetics of anaerobic power

    PubMed Central

    2012-01-01

    Background This study investigated two different mathematical models for the kinetics of anaerobic power. Model 1 assumes that the work power is linear with the work rate, while Model 2 assumes a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. In order to test these models, a cross-country skier ran with poles on a treadmill at different exercise intensities. The aerobic power, based on the measured oxygen uptake, was used as input to the models, whereas the simulated blood lactate concentration was compared with experimental results. Thereafter, the metabolic rate from phosphocreatine breakdown was calculated theoretically. Finally, the models were used to compare phosphocreatine breakdown during continuous and interval exercises. Results Good agreement was found between experimental and simulated blood lactate concentrations at steady-state exercise intensities. The measured blood lactate concentrations were lower than simulated for intensities above the lactate threshold, but higher than simulated during recovery after high-intensity exercise when the simulated lactate concentration was averaged over the whole lactate space. This fit was improved when the simulated lactate concentration was separated into two compartments: muscles + internal organs, and blood. Model 2 reproduced the behavior of alactic energy better than Model 1 when compared against invasive measurements presented in the literature. During continuous exercise, Model 2 showed that the alactic energy storage decreased with time, whereas Model 1 showed a minimum value when steady-state aerobic conditions were achieved. During interval exercise the two models showed similar patterns of alactic energy. Conclusions The current study provides useful insight into the kinetics of anaerobic power. Overall, our data indicate that blood lactate levels can be accurately modeled during steady state, and suggest a linear relationship between the alactic anaerobic power and the rate of change of the aerobic power. PMID:22830586
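
    Model 2 as stated, P_al(t) = k · dP_aer/dt, is easy to explore numerically; in the sketch below the aerobic power rises exponentially after a step change in work rate, so the predicted alactic power peaks at exercise onset and decays to zero at steady state. The time constant and k are assumed values, not the paper's fitted parameters.

```python
import numpy as np

def alactic_power_model2(p_aer, dt, k):
    """Model 2 of the abstract as stated: alactic anaerobic power taken
    proportional to the rate of change of aerobic power,
    P_al(t) = k * dP_aer/dt, discretized with finite differences."""
    return k * np.gradient(p_aer, dt)

# Step change in work rate: aerobic power rises exponentially (tau assumed
# 30 s), so Model 2 predicts alactic power that peaks at onset and decays
# to zero once a steady state is reached.
dt, tau = 1.0, 30.0
t = np.arange(0, 300, dt)
p_aer = 300.0 * (1.0 - np.exp(-t / tau))         # W
p_al = alactic_power_model2(p_aer, dt, k=25.0)   # W, k assumed
print(f"alactic power: {p_al[0]:.0f} W at onset, {p_al[-1]:.1f} W at steady state")
```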

  14. A new segmentation strategy for processing magnetic anomaly detection data of shallow depth ferromagnetic pipeline

    NASA Astrophysics Data System (ADS)

    Feng, Shuo; Liu, Dejun; Cheng, Xing; Fang, Huafeng; Li, Caifang

    2017-04-01

    Magnetic anomalies produced by underground ferromagnetic pipelines through the polarization of the Earth's magnetic field are used to obtain information on the location, burial depth and other parameters of the pipelines. In order to achieve fast inversion and interpretation of measured data, it is necessary to develop a fast and stable forward method. Magnetic dipole reconstruction (MDR), as a kind of integral numerical method, is well suited for simulating a thin pipeline anomaly. In MDR the pipeline model must be cut into small magnetic dipoles through different segmentation methods. The segmentation method has an impact on the stability and speed of the forward calculation. Rapid and accurate simulation of deep-buried pipelines has been achieved with the existing segmentation method. However, in practical measurement the depth of an underground pipe is uncertain, and for shallow-buried pipelines the existing segmentation may generate significant errors. This paper aims at solving this problem in three stages. First, the cause of the inaccuracy is analyzed by simulation experiments. Second, a new variable-interval section segmentation is proposed based on the existing segmentation; it helps the MDR method obtain simulation results quickly while ensuring the accuracy of models at different depths. Finally, measured data are inverted based on the new segmentation method. The result shows that inversion based on the new segmentation can achieve fast and accurate recovery of the depth parameters of underground pipes without being limited by pipeline depth.

  15. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    DTIC Science & Technology

    2016-06-01

    characteristics, experimental design techniques, and analysis methodologies that distinguish each phase of the MBSE MEASA. To ensure consistency... methodology. Experimental design selection, simulation analysis, and trade space analysis support the final two stages. Figure 27 segments the MBSE MEASA... rounding has the potential to increase the correlation between columns of the experimental design matrix. The design methodology presented in Vieira

  16. Genetic determinants of antithyroid drug-induced agranulocytosis by human leukocyte antigen genotyping and genome-wide association study

    PubMed Central

    Chen, Pei-Lung; Shih, Shyang-Rong; Wang, Pei-Wen; Lin, Ying-Chao; Chu, Chen-Chung; Lin, Jung-Hsin; Chen, Szu-Chi; Chang, Ching-Chung; Huang, Tien-Shang; Tsai, Keh Sung; Tseng, Fen-Yu; Wang, Chih-Yuan; Lu, Jin-Ying; Chiu, Wei-Yih; Chang, Chien-Ching; Chen, Yu-Hsuan; Chen, Yuan-Tsong; Fann, Cathy Shen-Jang; Yang, Wei-Shiung; Chang, Tien-Chun

    2015-01-01

    Graves' disease is the leading cause of hyperthyroidism affecting 1.0–1.6% of the population. Antithyroid drugs are the treatment cornerstone, but may cause life-threatening agranulocytosis. Here we conduct a two-stage association study on two separate subject sets (in total 42 agranulocytosis cases and 1,208 Graves' disease controls), using direct human leukocyte antigen genotyping and SNP-based genome-wide association study. We demonstrate HLA-B*38:02 (Armitage trend Pcombined=6.75 × 10^-32) and HLA-DRB1*08:03 (Pcombined=1.83 × 10^-9) as independent susceptibility loci. The genome-wide association study identifies the same signals. Estimated odds ratios for these two loci comparing effective allele carriers to non-carriers are 21.48 (95% confidence interval=11.13–41.48) and 6.13 (95% confidence interval=3.28–11.46), respectively. Carrying both HLA-B*38:02 and HLA-DRB1*08:03 increases the odds ratio to 48.41 (Pcombined=3.32 × 10^-21, 95% confidence interval=21.66–108.22). Our results could be useful for antithyroid-induced agranulocytosis and potentially for agranulocytosis caused by other chemicals. PMID:26151496

  17. Genetic determinants of antithyroid drug-induced agranulocytosis by human leukocyte antigen genotyping and genome-wide association study.

    PubMed

    Chen, Pei-Lung; Shih, Shyang-Rong; Wang, Pei-Wen; Lin, Ying-Chao; Chu, Chen-Chung; Lin, Jung-Hsin; Chen, Szu-Chi; Chang, Ching-Chung; Huang, Tien-Shang; Tsai, Keh Sung; Tseng, Fen-Yu; Wang, Chih-Yuan; Lu, Jin-Ying; Chiu, Wei-Yih; Chang, Chien-Ching; Chen, Yu-Hsuan; Chen, Yuan-Tsong; Fann, Cathy Shen-Jang; Yang, Wei-Shiung; Chang, Tien-Chun

    2015-07-07

    Graves' disease is the leading cause of hyperthyroidism affecting 1.0-1.6% of the population. Antithyroid drugs are the treatment cornerstone, but may cause life-threatening agranulocytosis. Here we conduct a two-stage association study on two separate subject sets (in total 42 agranulocytosis cases and 1,208 Graves' disease controls), using direct human leukocyte antigen genotyping and SNP-based genome-wide association study. We demonstrate HLA-B*38:02 (Armitage trend Pcombined=6.75 × 10^-32) and HLA-DRB1*08:03 (Pcombined=1.83 × 10^-9) as independent susceptibility loci. The genome-wide association study identifies the same signals. Estimated odds ratios for these two loci comparing effective allele carriers to non-carriers are 21.48 (95% confidence interval=11.13-41.48) and 6.13 (95% confidence interval=3.28-11.46), respectively. Carrying both HLA-B*38:02 and HLA-DRB1*08:03 increases the odds ratio to 48.41 (Pcombined=3.32 × 10^-21, 95% confidence interval=21.66-108.22). Our results could be useful for antithyroid-induced agranulocytosis and potentially for agranulocytosis caused by other chemicals.
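
    The reported odds ratios and Wald confidence intervals follow from standard 2x2-table arithmetic; a sketch with hypothetical carrier counts (the paper's actual counts are not reproduced here):

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = carriers/non-carriers among cases, c/d among controls."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1/a + 1/b + 1/c + 1/d)           # SE of log(OR)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se)
    return or_, lo, hi

# Hypothetical counts for an HLA allele (illustration only; the paper's
# ORs of 21.48 and 6.13 come from its actual carrier counts).
print(odds_ratio_ci(a=30, b=12, c=100, d=1108))
```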

  18. Potential impact of harvesting on the population dynamics of two epiphytic bromeliads

    NASA Astrophysics Data System (ADS)

    Toledo-Aceves, Tarin; Hernández-Apolinar, Mariana; Valverde, Teresa

    2014-08-01

    Large numbers of epiphytes are extracted from cloud forests for ornamental use and illegal trade in Latin America. We examined the potential effects of different harvesting regimes on the population dynamics of the epiphytic bromeliads Tillandsia multicaulis and Tillandsia punctulata. The population dynamics of these species were studied over a 2-year period in a tropical montane cloud forest in Veracruz, Mexico. Prospective and retrospective analyses were used to identify which demographic processes and life-cycle stages make the largest relative contribution to variation in population growth rate (λ). The effect of simulated harvesting levels on population growth rates was analysed for both species. λ of both populations was highly influenced by survival (stasis), to a lesser extent by growth, and only slightly by fecundity. Vegetative growth played a central role in the population dynamics of these organisms. The λ value of the studied populations did not differ significantly from unity: T. multicaulis λ (95% confidence interval) = 0.982 (0.897-1.060) and T. punctulata λ = 0.967 (0.815-1.051), suggesting population stability. However, numerical simulation of different levels of extraction showed that λ would drop substantially even under very low (2%) harvesting levels. Matrix analysis revealed that T. multicaulis and T. punctulata populations are likely to decline and therefore commercial harvesting would be unsustainable. Based on these findings, management recommendations are outlined.
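
    The population growth rate λ in such analyses is the dominant eigenvalue of a stage-structured projection matrix, and harvesting can be simulated by scaling the relevant survival entries; a sketch with a hypothetical three-stage matrix (not the Tillandsia matrices themselves):

```python
import numpy as np

def lam(matrix):
    """Population growth rate: dominant eigenvalue of a stage-based
    projection matrix (lambda = 1 means a stable population)."""
    return max(np.linalg.eigvals(matrix).real)

# Hypothetical 3-stage matrix (seedling, juvenile, adult) with entries for
# stasis, growth and fecundity; values assumed for illustration.
A = np.array([[0.10, 0.00, 0.80],     # fecundity in the top row
              [0.05, 0.80, 0.00],
              [0.00, 0.10, 0.93]])
print(f"baseline lambda: {lam(A):.3f}")

# Simulated harvesting: remove a fraction h of adults each year by scaling
# adult survival; even small h can push lambda below 1.
for h in (0.02, 0.05, 0.10):
    Ah = A.copy(); Ah[2, 2] *= (1 - h)
    print(f"h = {h:.0%}: lambda = {lam(Ah):.3f}")
```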

  19. A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.

    PubMed

    Zang, Yong; Lee, J Jack

    2017-01-15

    We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Project-Based Learning with an Online Peer Assessment System in a Photonics Instruction for Enhancing LED Design Skills

    ERIC Educational Resources Information Center

    Chang, Shu-Hsuan; Wu, Tsung-Chih; Kuo, Yen-Kuang; You, Li-Chih

    2012-01-01

    This study proposed a novel instructional approach, a two-stage LED simulation of Project-based learning (PBL) course with online peer assessment (OPA), and explored how to apply OPA to the different structured problems in a PBL course to enhance students' professional skills in LED design as well as meta-cognitive thinking. The participants of…

  1. Effect of the size of nanoparticles on their dissolution within metal-glass nanocomposites under sustained irradiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, T. H. Y., E-mail: thi-hai-yen.vu@polytechnique.edu; Ramjauny, Y.; Rizza, G.

    2016-01-21

    We investigate the dissolution law of metallic nanoparticles (NPs) under sustained irradiation. The system is composed of isolated spherical gold NPs (4–100 nm) embedded in an amorphous silica host matrix. Samples are irradiated at room temperature in the nuclear stopping power regime with 4 MeV Au ions for fluences up to 8 × 10^16 cm^-2. Experimentally, the dependence of the dissolution kinetics on the irradiation fluence is linear for large NPs (45–100 nm) and exponential for small NPs (4–25 nm). A lattice-based kinetic Monte Carlo (KMC) code, which includes atomic diffusion and ballistic displacement events, is used to simulate the dynamical competition between irradiation effects and thermal healing. The KMC simulations allow for a qualitative description of the NP dissolution in two main stages, in good agreement with the experiment. Moreover, the perfect correlation obtained between the evolution of the simulated flux of ejected atoms and the dissolution rate in two stages implies that there exists an effect of the size of NPs on their dissolution and a critical size for the transition between the two stages. The Frost-Russell model, providing an analytical solution for the dissolution rate, accounts well for the first dissolution stage but fails in reproducing the data for the second stage. An improved model obtained by including a size-dependent recoil generation rate permits fully describing the dissolution for any NP size. This proves, in particular, that the size effect on the generation rate is the principal reason for the existence of the two regimes. Finally, our results also demonstrate that it is justified to use a unidirectional approximation to describe the dissolution of the NP under irradiation, because the solute concentration is particularly low in metal-glass nanocomposites.
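
    The two empirical dissolution laws can be written down directly; a small sketch with assumed rate constants (fluence in units of 10^16 cm^-2):

```python
import numpy as np

# The two dissolution laws reported in the abstract, as simple closed forms:
# large NPs shrink linearly with fluence, small NPs exponentially.
def radius_large(r0, phi, k_lin):        # r(phi) = r0 - k*phi   (45-100 nm)
    return np.maximum(r0 - k_lin * phi, 0.0)

def radius_small(r0, phi, k_exp):        # r(phi) = r0*exp(-k*phi) (4-25 nm)
    return r0 * np.exp(-k_exp * phi)

# Rate constants are assumed for illustration, not fitted to the data.
phi = np.linspace(0, 8, 5)
print(radius_large(60.0, phi, k_lin=5.0))   # nm
print(radius_small(10.0, phi, k_exp=0.5))   # nm
```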

  2. Hyper-X Stage Separation Trajectory Validation Studies

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Bose, David M.; McMinn, John D.; Martin, John G.; Strovers, Brian K.

    2003-01-01

    An independent twelve degree-of-freedom simulation of the X-43A separation trajectory was created with the Program to Optimize Simulated Trajectories (POST II). This simulation modeled the multi-body dynamics of the X-43A and its booster and included the effect of two pyrotechnically actuated pistons used to push the vehicles apart, as well as aerodynamic interaction forces and moments between the two vehicles. The simulation was developed to validate trajectory studies conducted with a 14 degree-of-freedom simulation created early in the program using the Automatic Dynamic Analysis of Mechanical Systems (ADAMS) simulation software. The POST simulation was less detailed than the official ADAMS-based simulation used by the Project, but was simpler, more concise and ran faster, while providing similar results. The increase in speed provided by the POST simulation gave the Project an alternate analysis tool. This tool was ideal for performing separation control logic trade studies that required running numerous Monte Carlo trajectories.

  3. Astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco)

    NASA Astrophysics Data System (ADS)

    Martinez, Mathieu; Bodin, Stéphane; Krencker, François-Nicolas

    2015-04-01

    The Pliensbachian and Toarcian stages (Early Jurassic) are marked by a series of carbon cycle disturbances, major climatic changes and severe faunal turnovers. An accurate knowledge of the timing of the Pliensbachian-Toarcian transition is key to quantifying fluxes and rhythms of faunal and geochemical processes during these major environmental perturbations. Although many studies have provided astrochronological frameworks for the Toarcian Stage and the Toarcian oceanic anoxic event, no precise time frame exists for the Pliensbachian-Toarcian transition, which is often condensed in previously studied sections. Here, we provide an astrochronology of the Pliensbachian-Toarcian transition in the Foum Tillicht section (central High Atlas, Morocco). The section is composed of decimetric hemipelagic marl-limestone alternations accompanied by cyclic fluctuations in the δ13Cmicrite. In this section, the marl-limestone alternations reflect cyclic sea-level/climatic changes, which trigger rhythmic migrations of the surrounding carbonate platforms and modulate the amount of carbonate exported to the basin. The studied interval encompasses 142.15 m of the section, from the base of the series to a hiatus in the Early Toarcian marked by an erosional surface. The Pliensbachian-Toarcian (P-To) Event, a negative excursion in carbonate δ13Cmicrite, is observed pro parte in this studied interval. δ13Cmicrite measurements were performed every ~2 m at the base of the section and every 0.20 m within the P-To Event interval. Spectral analyses were performed using the multi-taper method and the evolutive Fast Fourier Transform to obtain an accurate assessment of the main significant periods and their evolution throughout the studied interval. Two main cycles are observed in the series: the 405-kyr eccentricity cycle is observed throughout the series, while the obliquity cycle is observed within the P-To Event, in the most densely sampled interval. The studied interval covers 3.6 Myr. The duration of the part of the P-To Event covered in this analysis is assessed at 0.70 Myr. In addition, the interval from the base of the Toarcian to the first occurrence of the calcareous nannofossil C. superbus has a duration assessed at 0.47 to 0.55 Myr. This duration is significantly longer than most assessments obtained by previous cyclostratigraphic analyses, showing that earlier studies underestimated the duration of this interval, which is often condensed in the Western Tethys. This study shows the potential of the Foum Tillicht section to provide a refined time frame of the Pliensbachian-Toarcian boundary, which could be integrated in the next Geological Time Scale.

  4. Study on coming out of the shaft from ceramic sleeve in terms of the residual displacement

    NASA Astrophysics Data System (ADS)

    Zhang, G. W.; Noda, N.-A.; Sano, Y.; Sakai, H.

    2018-06-01

    Ceramic rollers can be used conveniently in heating furnaces because of their high temperature resistance. However, the shaft may come out of the ceramic sleeve under repeated loading. In this paper, a two-dimensional shrink-fitted structure is considered by replacing the shaft with an inner plate and the sleeve with an outer plate. Based on the model with a stopper, an FEM simulation is performed under alternating loading, with intervals between loads newly added. The analysis results show that the coming-out failure can be explained by the accumulation of residual displacement during these intervals.

  5. Comparison of the Efficacy and Efficiency of the Use of Virtual Reality Simulation With High-Fidelity Mannequins for Simulation-Based Training of Fiberoptic Bronchoscope Manipulation.

    PubMed

    Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi

    2018-04-01

    This study compared the efficacy and efficiency of virtual reality simulation (VRS) with a high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on the VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as well as changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M: 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than a mannequin in the simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved in both modalities after adequate training.

  6. Simulation of the Beating Heart Based on Physically Modeling a Deformable Balloon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-07-18

    The motion of the beating heart is complex and creates artifacts in SPECT and x-ray CT images. Phantoms such as the Jaszczak Dynamic Cardiac Phantom are used to simulate cardiac motion for evaluation of acquisition and data processing protocols used for cardiac imaging. Two concentric elastic membranes filled with water are connected to tubing and a pump apparatus for creating fluid flow in and out of the inner volume to simulate the motion of the heart. In the present report, the movement of two concentric balloons is solved numerically in order to create a computer simulation of the motion of the moving membranes in the Jaszczak Dynamic Cardiac Phantom. A system of differential equations, based on the physical properties, determines the motion. Two methods are tested for solving the system of differential equations. The results of both methods are similar, providing a final shape that does not converge to a trivial circular profile. Finally, a tomographic imaging simulation is performed by acquiring static projections of the moving shape and reconstructing the result to observe motion artifacts. Two cases are taken into account: in one case each projection angle is sampled for a short time interval, and in the other case for a longer time interval. The longer sampling acquisition shows a clear improvement in decreasing the tomographic streaking artifacts.

  7. Long-term outcome of cochlear implant in patients with chronic otitis media: one-stage surgery is equivalent to two-stage surgery.

    PubMed

    Jang, Jeong Hun; Park, Min-Hyun; Song, Jae-Jin; Lee, Jun Ho; Oh, Seung Ha; Kim, Chong-Sun; Chang, Sun O

    2015-01-01

    This study compared long-term speech performance after cochlear implantation (CI) between surgical strategies in patients with chronic otitis media (COM). Thirty patients with available open-set sentence scores measured more than 2 yr postoperatively were included: 17 who received one-stage surgery (One-stage group) and 13 who underwent two-stage surgery (Two-stage group). Preoperative inflammatory status, intraoperative procedures, and postoperative outcomes were compared. Among the 17 patients in the One-stage group, 12 underwent CI accompanied by eradication of inflammation; CI without eradication of inflammation was performed in 3 patients; and 2 underwent CI via the transcanal approach. The 13 patients in the Two-stage group received complete eradication of inflammation as first-stage surgery, and CI was performed as second-stage surgery after a mean interval of 8.2 months. Additional control of inflammation was performed in 2 patients at second-stage surgery, for a cavity problem and cholesteatoma, respectively. There were 2 cases of electrode exposure as a postoperative complication in the Two-stage group; new electrode arrays were inserted and covered by local flaps. The open-set sentence scores of the Two-stage group were not significantly higher than those of the One-stage group at 1, 2, 3, and 5 yr postoperatively. Postoperative long-term speech performance is equivalent when either of the two surgical strategies is used to treat appropriately selected candidates.

  8. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan frequently encounters heavy rainfall, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study builds a flood early-warning system (FEWS) for the Tanshui River using time-series correction techniques. Predicted rainfall is used as the input to the rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to a 1-D river routing model, which outputs simulated water stages at 487 cross sections for the next 48 hours. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time-series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve prediction accuracy. The resulting simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfalls can also be shown on this platform. If a simulated water stage exceeds the embankments of the Tanshui River, the alerting lights of the FEWS flash on the screen. This platform runs periodically and automatically to generate graphic data of simulated flood water stages for flood disaster prevention and decision making.
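
    One common way to implement such a correction is to fit an AR(p) model to the hydraulic model's recent stage errors and add the propagated error forecast to the raw predictions; below is a least-squares sketch of that scheme (the platform's exact correction may differ):

```python
import numpy as np

def ar_corrected_forecast(pred, obs, p=2, horizon=6):
    """Correct a hydraulic model's stage forecast with an AR(p) model of
    its recent errors: fit e_t = a1*e_{t-1} + ... + ap*e_{t-p} by least
    squares on observed errors, then propagate the error forecast and add
    it to the model prediction at each future step."""
    e = np.asarray(obs) - np.asarray(pred)[:len(obs)]
    X = np.column_stack([e[p - k - 1:len(e) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, e[p:], rcond=None)
    hist = list(e[-p:])
    out = []
    for t in range(horizon):
        e_next = float(np.dot(coef, hist[::-1][:p]))  # one-step error forecast
        hist.append(e_next)
        out.append(pred[len(obs) + t] + e_next)
    return np.array(out)

# Toy usage: the model underestimates stage by a slowly varying bias.
pred = np.linspace(10.0, 12.0, 30)
obs = pred[:20] + 0.3 + 0.05 * np.random.randn(20)
print(ar_corrected_forecast(pred, obs)[:3])   # corrected upcoming stages
```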

  9. Modeling and simulation of maintenance treatment in first-line non-small cell lung cancer with external validation.

    PubMed

    Han, Kelong; Claret, Laurent; Sandler, Alan; Das, Asha; Jin, Jin; Bruno, Rene

    2016-07-13

    Maintenance treatment (MTx) in responders following first-line treatment has been investigated and practiced for many cancers. Modeling and simulation may support interpretation of interim data and development decisions. We aimed to develop a modeling framework to simulate overall survival (OS) for MTx in NSCLC using tumor growth inhibition (TGI) data. TGI metrics were estimated using longitudinal tumor size data from two Phase III first-line NSCLC studies evaluating bevacizumab and erlotinib as MTx in 1632 patients. Baseline prognostic factors and TGI metric estimates were assessed in multivariate parametric models to predict OS. The OS model was externally validated by simulating a third independent NSCLC study (n = 253) based on interim TGI data (up to progression-free survival database lock). The third study evaluated pemetrexed + bevacizumab vs. bevacizumab alone as MTx. Time-to-tumor-growth (TTG) was the best TGI metric to predict OS. TTG, baseline tumor size, ECOG score, Asian ethnicity, age, and gender were significant covariates in the final OS model. The OS model was qualified by simulating OS distributions and hazard ratios (HR) in the two studies used for model-building. Simulations of the third independent study based on interim TGI data showed that pemetrexed + bevacizumab MTx was unlikely to significantly prolong OS vs. bevacizumab alone given the current sample size (predicted HR: 0.81; 95 % prediction interval: 0.59-1.09). Predicted median OS was 17.3 months and 14.7 months in both arms, respectively. These simulations are consistent with the results of the final OS analysis published 2 years later (observed HR: 0.87; 95 % confidence interval: 0.63-1.21). Final observed median OS was 17.1 months and 13.2 months in both arms, respectively, consistent with our predictions. A robust TGI-OS model was developed for MTx in NSCLC. TTG captures treatment effect. The model successfully predicted the OS outcomes of an independent study based on interim TGI data and thus may facilitate trial simulation and interpretation of interim data. The model was built based on erlotinib data and externally validated using pemetrexed data, suggesting that TGI-OS models may be treatment-independent. The results supported the use of longitudinal tumor size and TTG as endpoints in early clinical oncology studies.

  10. Effect of long interval between hyperthermochemoradiation therapy and surgery for rectal cancer on apoptosis, proliferation and tumor response.

    PubMed

    Kato, Toshihide; Fujii, Takaaki; Ide, Munenori; Takada, Takahiro; Sutoh, Toshinaga; Morita, Hiroki; Yajima, Reina; Yamaguchi, Satoru; Tsutsumi, Soichi; Asao, Takayuki; Oyama, Tetsunari; Kuwano, Hiroyuki

    2014-06-01

    Neoadjuvant chemoradiotherapy is commonly used to improve the local control and resectability of locally advanced rectal cancer, with surgery performed after an interval of a number of weeks. We have been conducting a clinical trial of preoperative chemoradiotherapy in combination with regional hyperthermia (hyperthermo-chemoradiation therapy; HCRT) for locally advanced rectal cancer. In the current study we assessed the effect of a longer (>10 weeks) interval after neoadjuvant HCRT on pathological response, oncological outcome and, especially, on apoptosis, proliferation and p53 expression in patients with rectal cancer. Forty-eight patients with proven rectal adenocarcinoma who underwent HCRT followed by surgery were identified for inclusion in this study. Patients were divided into two groups according to the interval between HCRT and surgery: ≤10 weeks (short-interval group) and >10 weeks (long-interval group). Patients in the long-interval group had a significantly higher rate of pathological complete response (pCR) (43.5% vs. 16.0%) than patients in the short-interval group. Patients in the long-interval group also had a significantly higher rate of T-stage down-staging (78.3% vs. 36.0%) and a relatively higher rate of N-stage down-staging (52.2% vs. 36.0%) than patients in the short-interval group. Furthermore, apoptosis in the long-interval group was relatively higher than in the short-interval group, without a significant difference in the Ki-67 proliferative index or p53 expression in the primary tumor. In conclusion, we demonstrated that a longer interval after HCRT (>10 weeks) seemed to result in a better chance of a pCR, a result supported by the trends in tumor response markers, including apoptosis, proliferation and p53 expression. Copyright© 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  11. Measured and simulated soil water evaporation from four Great Plains soils

    USDA-ARS?s Scientific Manuscript database

    The amount of soil water lost during stage one and stage two soil water evaporation is of interest to crop water use modelers. The ratio of measured soil surface temperature (Ts) to air temperature (Ta) was tested as a signal for the transition in soil water evaporation from stage one to stage two d...

  12. A Two-Stage Probabilistic Approach to Manage Personal Worklist in Workflow Management Systems

    NASA Astrophysics Data System (ADS)

    Han, Rui; Liu, Yingbo; Wen, Lijie; Wang, Jianmin

    The application of workflow scheduling to managing an individual actor's personal worklist is one area that can bring great improvement to business processes. However, existing deterministic approaches cannot adapt to the dynamics and uncertainties involved in managing personal worklists. To address this issue, this paper proposes a two-stage probabilistic approach that aims to assist actors in flexibly managing their personal worklists. Specifically, in the first stage the approach analyzes every activity instance's continuous probability of satisfying its deadline. Based on this stochastic analysis, in the second stage an innovative scheduling strategy minimizes the overall deadline violation cost for an actor's personal worklist. Simultaneously, the strategy recommends to the actor a feasible worklist of activity instances that meet the required bottom line of successful execution. The effectiveness of the approach is evaluated in a real-world workflow management system and with large-scale simulation experiments.
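
    A toy version of the two-stage idea, assuming normally distributed activity durations (the paper's actual probability model and cost structure are not given in the abstract): stage 1 scores each instance's probability of meeting its deadline, stage 2 orders the worklist by expected deadline-violation cost.

      import math

      def p_meet_deadline(mean, sd, deadline):
          """Probability a normally distributed activity duration finishes
          by its deadline (the paper's distribution model may differ)."""
          z = (deadline - mean) / sd
          return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

      def schedule(worklist):
          """Stage 1: score every instance's chance of meeting its deadline.
          Stage 2: order by expected violation cost, highest first."""
          scored = [(item["cost"] * (1.0 - p_meet_deadline(item["mean"],
                     item["sd"], item["deadline"])), item) for item in worklist]
          return [item for _, item in sorted(scored, key=lambda s: -s[0])]

      worklist = [{"name": "review", "mean": 4, "sd": 1, "deadline": 5, "cost": 10},
                  {"name": "approve", "mean": 2, "sd": 0.5, "deadline": 6, "cost": 3}]
      print([i["name"] for i in schedule(worklist)])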

  13. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
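
    For intuition, here is a plug-in computation of the four accuracy measures from paired map and reference proportions on sampled units. It deliberately ignores the design-based weighting the paper derives for general two-stage designs; under equal-probability SRSWOR at both stages the estimators reduce to simple means over the sampled units.

      import numpy as np

      def composition_accuracy(map_p, ref_p):
          """Plug-in estimates of MD, MAD, RMSE and CORR for the composition
          of one land-cover class, from paired map and reference proportions
          observed on n sampled units."""
          d = np.asarray(map_p) - np.asarray(ref_p)
          md = d.mean()                      # mean deviation (signed bias)
          mad = np.abs(d).mean()             # mean absolute deviation
          rmse = np.sqrt((d ** 2).mean())    # root mean square error
          corr = np.corrcoef(map_p, ref_p)[0, 1]
          return md, mad, rmse, corr

      map_p = [0.30, 0.12, 0.45, 0.08]   # hypothetical mapped proportions
      ref_p = [0.25, 0.15, 0.40, 0.10]   # hypothetical reference proportions
      print(composition_accuracy(map_p, ref_p))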

  14. Appearance of deterministic mixing behavior from ensembles of fluctuating hydrodynamics simulations of the Richtmyer-Meshkov instability

    NASA Astrophysics Data System (ADS)

    Narayanan, Kiran; Samtaney, Ravi

    2018-04-01

    We obtain numerical solutions of the two-fluid fluctuating compressible Navier-Stokes (FCNS) equations, which consistently account for thermal fluctuations from meso- to macroscales, in order to study the effect of such fluctuations on the mixing behavior in the Richtmyer-Meshkov instability (RMI). The numerical method used was successfully verified in two stages: for the deterministic fluxes by comparison against an air-SF6 RMI experiment, and for the stochastic terms by comparison against direct simulation Monte Carlo results for He-Ar RMI. We present results from fluctuating hydrodynamic RMI simulations for three He-Ar systems having length scales with decreasing order of magnitude that span from macroscopic to mesoscopic, with different levels of thermal fluctuations characterized by a nondimensional Boltzmann number (Bo). For a multidimensional FCNS system on a regular Cartesian grid, when using a discretization of the space-time stochastic flux of the form Z(x, t) → (1/√(h Δt)) N(ih, nΔt), for spatial interval h, time interval Δt, and Gaussian noise N, the spatial interval h should be greater than h0, with h0 corresponding to a cell volume that contains a sufficient number of molecules of the fluid such that the fluctuations are physically meaningful and produce the right equilibrium spectrum. For the mesoscale RMI systems simulated, it was desirable to use a cell size smaller than this limit in order to resolve the viscous shock. This was achieved by using a modified regularization of the noise term via Z(x, t) → (1/√(max(h³, h0³) Δt)) N(ih, nΔt), with h0 = ξh ∀h

  15. Ares I-X First Stage Separation Loads and Dynamics Reconstruction

    NASA Technical Reports Server (NTRS)

    Demory, Lee; Rooker, Bill; Jarmulowicz, Marc; Glaese, John

    2011-01-01

    The Ares I-X flight test provided NASA with the opportunity to test hardware and gather critical data to ensure the success of future Ares I flights. One of the primary test flight objectives was to evaluate the environment during First Stage separation to better understand the conditions that the J-2X second stage engine will experience at ignition [1]. A secondary objective was to evaluate the effectiveness of the stage separation motors. The Ares I-X flight test vehicle was successfully launched on October 29, 2009, achieving most of its primary and secondary test objectives. Ground-based video camera recordings of the separation event appeared to show recontact of the First Stage and the Upper Stage Simulator followed by an unconventional tumbling of the Upper Stage Simulator. Closer inspection of the videos and flight test data showed that recontact did not occur. Also, the motion during staging was as predicted by CFD analysis performed during Ares I-X development. This paper describes the efforts to reconstruct the vehicle dynamics and loads through the staging event by means of a time-integrated simulation developed in TREETOPS, a multi-body dynamics software tool developed at NASA [2]. The simulation was built around vehicle mass and geometry properties at the time of staging and thrust profiles for the first stage solid rocket motor as well as for the booster deceleration motors and booster tumble motors. Aerodynamic forces were determined by models created from a combination of wind tunnel testing and CFD. The initial conditions such as position, velocity, and attitude were obtained from the Best Estimated Trajectory (BET), which is compiled from multiple ground-based and vehicle-mounted instruments. Dynamic loads were calculated by subtracting the inertial forces from the applied forces. The simulation results were compared to the Best Estimated Trajectory, accelerometer flight data, and ground-based video.

  16. An Agent-Based Modeling Template for a Cohort of Veterans with Diabetic Retinopathy.

    PubMed

    Day, Theodore Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2013-01-01

    Agent-based models are valuable for examining systems where large numbers of discrete individuals interact with each other or with some environment. Diabetic Veterans seeking eye care at a Veterans Administration hospital represent one such cohort. The objective of this study was to develop an agent-based template to be used as a model for a patient with diabetic retinopathy (DR). This template may be replicated arbitrarily many times in order to generate a large cohort which is representative of a real-world population, upon which in silico experimentation may be conducted. Agent-based template development was performed in the Java-based computer simulation suite AnyLogic Professional 6.6. The model was informed by medical data abstracted from 535 patient records representing a retrospective cohort of current patients of the VA St. Louis Healthcare System Eye clinic. Logistic regression was performed to determine the predictors associated with advancing stages of DR. Predicted probabilities obtained from the logistic regression were used to generate the stage of DR in the simulated cohort. The simulated cohort of DR patients exhibited no significant deviation from the test population of real-world patients in proportion of stage of DR, duration of diabetes mellitus (DM), or the other abstracted predictors. Simulated patients after 10 years were significantly more likely to exhibit proliferative DR (P<0.001). Agent-based modeling is an emerging platform, capable of simulating large cohorts of individuals based on manageable data abstraction efforts. The modeling method described may be useful in simulating many different conditions whose course of disease is described in categorical stages.
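
    A minimal sketch of the template idea: each simulated patient agent draws a DR stage from predicted probabilities. The multinomial-logistic coefficients and covariates below (duration of DM plus a made-up HbA1c value) are invented stand-ins for the predictors abstracted from the 535 records.

      import numpy as np

      rng = np.random.default_rng(1)

      def make_cohort(n, coefs, stages=("none", "NPDR", "PDR")):
          """Instantiate n patient agents whose DR stage is drawn from
          multinomial-logistic probabilities; coefs are (b0, b1, b2) per
          non-reference stage, applied to (1, duration, a1c)."""
          duration = rng.uniform(0, 30, n)      # years of DM
          a1c = rng.normal(7.5, 1.2, n)         # hypothetical HbA1c
          cohort = []
          for dur, h in zip(duration, a1c):
              logits = np.array([0.0] + [b0 + b1 * dur + b2 * h
                                         for b0, b1, b2 in coefs])
              p = np.exp(logits - logits.max())
              p /= p.sum()                      # softmax over stages
              cohort.append({"duration": dur, "a1c": h,
                             "stage": stages[rng.choice(len(stages), p=p)]})
          return cohort

      coefs = [(-4.0, 0.10, 0.30), (-8.0, 0.15, 0.50)]  # NPDR, PDR vs none
      print(make_cohort(3, coefs))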

  17. Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel Adrian; Oprisan, Ana

    2005-03-01

    Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct sub-systems: the immune system of the host and the neoplastic system. We implemented the neoplastic sub-system using a three-stage cell cycle: active, dormant, and necrosis. The second sub-system consists of cytotoxic active (effector) cells — EC, with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimension of the carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutic, and radiotherapeutic treatments.

  18. Noise robustness of a combined phase retrieval and reconstruction method for phase-contrast tomography.

    PubMed

    Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian

    2016-04-01

    Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust toward noise; our simulations point to a possible reduction in counting times by an order of magnitude.

  19. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test, were analyzed. Statistical significance tests to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions were performed. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates was developed, which aided in determining the appropriateness of the chi-square distribution based confidence intervals. Such confidence intervals were appropriate for nontonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The appropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
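
    The chi-square interval at issue is the standard one for an averaged spectral estimate. A short sketch, assuming the equivalent number of degrees of freedom (dof) is known, e.g. twice the number of averaged periodogram segments:

      from scipy.stats import chi2

      def psd_ci(p_hat, dof, alpha=0.05):
          """Chi-square based confidence interval for a spectral estimate
          p_hat with dof equivalent degrees of freedom."""
          lo = dof * p_hat / chi2.ppf(1 - alpha / 2, dof)
          hi = dof * p_hat / chi2.ppf(alpha / 2, dof)
          return lo, hi

      print(psd_ci(p_hat=1.0, dof=32))  # interval around a unit estimate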

  20. Laboratory simulation studies of steady-state and potential catalytic effects in the ROPE(TM) process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guffey, F.D.; Holper, P.A.

    The Western Research Institute is currently developing a process for the recovery of distillable liquid products from alternate fossil fuel sources such as tar sand and oil shale. The processing concept is based on recycling a fraction of the produced oil back into the reactor with the raw resource. This concept is termed the recycle oil pyrolysis and extraction (ROPE(TM)) process. The conversion of the alternate resource to a liquid fuel is performed in two stages. The first recovery stage is performed at moderate temperatures (325-420°C [617-788°F]) in the presence of product oil recycle. The second stage is performed at higher temperatures (450-540°C [842-1004°F]) in the absence of product oil. The experiments reported here were performed on Asphalt Ridge tar sand in the all-glass laboratory simulation reactor (1) to simulate the recycling of SAE 50 weight oil in the recycle oil pyrolysis zone and (2) to evaluate the potential catalytic effects of the sand matrix.

  2. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    PubMed Central

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

    OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140 received two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 months (range: 1–24 months). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937

  3. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis.

    PubMed

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-06-01

    Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140 received two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean interval of 4 months between the procedures (two-stage group). The mean postoperative follow-up period was 12.5 months (range: 1-24 months). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life in patients with palmar and axillary hyperhidrosis.

  4. Results from Binary Black Hole Simulations in Astrophysics Applications

    NASA Technical Reports Server (NTRS)

    Baker, John G.

    2007-01-01

    Present and planned gravitational wave observatories are opening a new astronomical window on the sky. A key source of gravitational waves is the merger of two black holes. The Laser Interferometer Space Antenna (LISA), in particular, is expected to observe these events with signal-to-noise ratios in the thousands. To fully reap the scientific benefits of these observations requires a detailed understanding, based on numerical simulations, of the predictions of General Relativity for the waveform signals. New techniques for simulating binary black hole mergers, introduced two years ago, have led to dramatic advances in applied numerical simulation work. Over the last two years, numerical relativity researchers have made tremendous strides in understanding the late stages of binary black hole mergers. Simulations have been applied to test much of the basic physics of binary black hole interactions, showing robust results for merger waveform predictions, and illuminating such phenomena as spin precession. Calculations have shown that merging systems can be kicked at up to 2500 km/s by the thrust from asymmetric emission. Recently, long-lasting simulations of ten or more orbits allow tests of post-Newtonian (PN) approximation results for radiation from the last orbits of the binary's inspiral. Already, analytic waveform models based on PN techniques that incorporate information from numerical simulations may be adequate for observations with current ground-based observatories. As new advances in simulations continue to rapidly improve our theoretical understanding of these systems, it seems certain that high-precision predictions will be available in time for LISA and other advanced ground-based instruments. Future gravitational wave observatories are expected to make precision measurements.

  5. The statistical significance of error probability as determined from decoding simulations for long codes

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
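
    One standard way to obtain a meaningful interval from very few observed errors is the exact (Clopper-Pearson) binomial interval, sketched below. This is a textbook construction offered for orientation, not necessarily the specific extension proposed in the paper.

      from scipy.stats import beta

      def clopper_pearson(k, n, alpha=0.05):
          """Exact binomial interval for an error probability estimated from
          k errors in n trials; stays meaningful even for k = 0 or k = 2,
          where normal-approximation intervals break down."""
          lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
          hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
          return lo, hi

      print(clopper_pearson(2, 10**7))  # two errors in ten million trials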

  6. Combining area-based and individual-level data in the geostatistical mapping of late-stage cancer incidence.

    PubMed

    Goovaerts, Pierre

    2009-01-01

    This paper presents a geostatistical approach to incorporate individual-level data (e.g. patient residences) and area-based data (e.g. rates recorded at census tract level) into the mapping of late-stage cancer incidence, with an application to breast cancer in three Michigan counties. Spatial trends in cancer incidence are first estimated from census data using area-to-point binomial kriging. This prior model is then updated using indicator kriging and individual-level data. Simulation studies demonstrate the benefits of this two-step approach over methods (kernel density estimation and indicator kriging) that process only residence data.

  7. Coupled simulation of CFD-flight-mechanics with a two-species-gas-model for the hot rocket staging

    NASA Astrophysics Data System (ADS)

    Li, Yi; Reimann, Bodo; Eggers, Thino

    2016-11-01

    Hot rocket staging separates the lowest stage by directly igniting the continuing-stage motor. During hot staging, the rocket stages move in a harsh dynamic environment. In this work, the hot-staging dynamics of a multistage rocket is studied using a coupled simulation of Computational Fluid Dynamics and Flight Mechanics. Plume modeling is crucial for a coupled simulation with high fidelity. A two-species-gas model is proposed to simulate the flow system of the rocket during staging: the free stream is modeled as "cold air" and the plume exhausted from the continuing-stage motor is modeled with an equivalent calorically-perfect gas that approximates the properties of the plume at the nozzle exit. This gas model strikes a good compromise between computational accuracy and efficiency. In the coupled simulations, the Navier-Stokes equations are solved time-accurately in a moving system, with which the Flight Mechanics equations can be fully coupled. The Chimera mesh technique is utilized to handle the relative motion of the separated stages. A few representative staging cases with different initial flight conditions of the rocket are studied with the coupled simulation. The torque caused by plume-induced flow separation at the aft wall of the continuing stage is captured during staging, which can assist the design of the rocket's controller. As the initial angle of attack of the rocket increases, the staging quality becomes evidently poorer, but the separated stages are generally stable when the initial angle of attack is small.

  8. Study of hypervelocity projectile impact on thick metal plates

    DOE PAGES

    Roy, Shawoon K.; Trabia, Mohamed; O’Toole, Brendan; ...

    2016-01-01

    Hypervelocity impacts generate extreme pressure and shock waves in impacted targets that undergo severe localized deformation within a few microseconds. These impact experiments pose unique challenges in terms of obtaining accurate measurements. Similarly, simulating these experiments is not straightforward. This paper proposes an approach to experimentally measure the velocity of the back surface of an A36 steel plate impacted by a projectile. All experiments used a combination of a two-stage light-gas gun and the photonic Doppler velocimetry (PDV) technique. The experimental data were used to benchmark and verify computational studies. Two different finite-element methods were used to simulate the experiments: Lagrangian-based smooth particle hydrodynamics (SPH) and an Eulerian-based hydrocode. Both codes used the Johnson-Cook material model and the Mie-Grüneisen equation of state. Experiments and simulations were compared based on the physical damage area and the back surface velocity. Finally, the results of this study showed that the proposed simulation approaches could be used to reduce the need for expensive experiments.

  9. A note on the efficiencies of sampling strategies in two-stage Bayesian regional fine mapping of a quantitative trait.

    PubMed

    Chen, Zhijian; Craiu, Radu V; Bull, Shelley B

    2014-11-01

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.

  10. Numerical and experimental investigation of strip deformation in cage roll forming process for pipes with low ratio of thickness/diameter

    NASA Astrophysics Data System (ADS)

    Kasaei, M. M.; Naeini, H. Moslemi; Tehrani, M. Salmani; Tafti, R. Azizi

    2011-01-01

    Cage roll forming is one of the advanced methods of the cold roll forming process and is used widely for producing ERW pipes. In addition to reducing production cost and time, cage roll forming provides smooth deformation of the strip. Few studies of cage roll forming can be found because of its complexity, and the available knowledge is more experience-based than science-based. In this paper, the deformation of pipes with a low thickness-to-diameter ratio is investigated by 3D finite element simulation in the Marc-Mentat software. The edge buckling defect in cage roll forming of low thickness-to-diameter pipes is very important. Due to the direct influence of longitudinal strain on the edge buckling phenomenon, longitudinal strains at the edge and center line of the strip are investigated and high-risk stands are identified. The deformed strip is predicted using the simulation results, and the effects of each cage forming stage on the deformed strip profile are specified. In order to verify the simulation results, the strip width and the opening distance of the two edges in different forming stages are obtained from the simulations and compared with experimental data measured on the production line. Good agreement between the experimental and simulated results is observed.

  11. Simulation of Turbine Tone Noise Generation Using a Turbomachinery Aerodynamics Solver

    NASA Technical Reports Server (NTRS)

    VanZante, Dale; Envia, Edmane

    2010-01-01

    As turbofan engine bypass ratios continue to increase, the contribution of the turbine to the engine noise signature is receiving more attention. Understanding the relative importance of the various turbine noise generation mechanisms and the characteristics of the turbine acoustic transmission loss are essential ingredients in developing robust reduced-order models for predicting the turbine noise signature. A computationally based investigation has been undertaken to help guide the development of a turbine noise prediction capability that does not rely on empiricism. As proof-of-concept for this approach, two highly detailed numerical simulations of the unsteady flow field inside the first stage of a modern high-pressure turbine were carried out. The simulations were computed using TURBO, which is an unsteady Reynolds-Averaged Navier-Stokes code capable of multi-stage simulations. Spectral and modal analysis of the unsteady pressure data from the numerical simulation of the turbine stage show a circumferential modal distribution that is consistent with the Tyler-Sofrin rule. Within the high-pressure turbine, the interaction of velocity, pressure and temperature fluctuations with the downstream blade rows are all possible tone noise source mechanisms. We have taken the initial step in determining the source strength hierarchy by artificially reducing the level of temperature fluctuations in the turbine flowfield. This was accomplished by changing the vane cooling flow temperature in order to mitigate the vane thermal wake in the second of the two simulations. The results indicated that, despite a dramatic change in the vane cooling flow, the computed modal levels changed very little indicating that the contribution of temperature fluctuations to the overall pressure field is rather small compared with the viscous and potential field interaction mechanisms.

  12. Climate-based models for West Nile Culex mosquito vectors in the Northeastern US

    NASA Astrophysics Data System (ADS)

    Gong, Hongfei; Degaetano, Arthur T.; Harrington, Laura C.

    2011-05-01

    Climate-based models simulating Culex mosquito population abundance in the Northeastern US were developed. Two West Nile vector species, Culex pipiens and Culex restuans, were included in model simulations. The model was optimized by a parameter-space search within biological bounds. Mosquito population dynamics were driven by major environmental factors including temperature, rainfall, evaporation rate and photoperiod. The results show a strong correlation between the timing of early population increases (as early warning of West Nile virus risk) and decreases in late summer. Simulated abundance was highly correlated with actual mosquito capture in New Jersey light traps and validated with field data. This climate-based model simulates the population dynamics of both the adult and immature mosquito life stage of Culex arbovirus vectors in the Northeastern US. It is expected to have direct and practical application for mosquito control and West Nile prevention programs.

  13. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
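
    To see why the communication interval matters, the toy co-simulation below integrates two coupled first-order subsystems separately and exchanges their interface variables only every h_comm seconds; shrinking h_comm recovers the tightly coupled solution. The dynamics are invented for illustration and are not taken from the aircraft power system model.

      def cosimulate(h_comm, t_end=10.0, dt=1e-3):
          """Two coupled first-order subsystems integrated independently,
          exchanging their interface variables only every h_comm seconds."""
          x, y = 1.0, 0.0          # subsystem states
          y_seen, x_seen = y, x    # last communicated interface values
          t, next_comm = 0.0, 0.0
          while t < t_end:
              if t >= next_comm:               # exchange interface variables
                  y_seen, x_seen = y, x
                  next_comm += h_comm
              x += dt * (-x + 0.5 * y_seen)    # subsystem 1 sees stale y
              y += dt * (-2 * y + x_seen)      # subsystem 2 sees stale x
              t += dt
          return x, y

      for h in (1.0, 0.1, 0.001):
          print(h, cosimulate(h))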

  14. Time Triggered Ethernet System Testing Means and Method

    NASA Technical Reports Server (NTRS)

    Smithgall, William Todd (Inventor); Hall, Brendan (Inventor); Varadarajan, Srivatsan (Inventor)

    2014-01-01

    Methods and apparatus are provided for evaluating the performance of a Time Triggered Ethernet (TTE) system employing Time Triggered (TT) communication. A real TTE system under test (SUT) is provided, having real input elements that communicate using TT messages with output elements via one or more first TTE switches during a first time interval schedule established for the SUT. A simulation system is also provided, having input simulators that communicate using TT messages via one or more second TTE switches with the same output elements during a second time interval schedule established for the simulation system. The first and second time interval schedules are offset slightly so that messages from the input simulators, when present, arrive at the output elements prior to messages from the analogous real inputs, thereby having priority over messages from the real inputs and causing the system to operate based on the simulated inputs when present.

  15. Enhancing learning through optimal sequencing of web-based and manikin simulators to teach shock physiology in the medical curriculum.

    PubMed

    Cendan, Juan C; Johnson, Teresa R

    2011-12-01

    The Association of American Medical Colleges has encouraged educators to investigate proper linkage of simulation experiences with medical curricula. The authors aimed to determine if student knowledge and satisfaction differ between participation in web-based and manikin simulations for learning shock physiology and treatment and to determine if a specific training sequence had a differential effect on learning. All 40 second-year medical students participated in a randomized, counterbalanced study with two interventions: group 1 (n = 20) participated in a web-based simulation followed by a manikin simulation and group 2 (n = 20) participated in the reverse order. Knowledge and attitudes were documented. Mixed-model ANOVA indicated a significant main effect of time (F(1,38) = 18.6, P < 0.001, partial η² = 0.33). Group 1 scored significantly higher on quiz 2 (81.5%) than on quiz 1 (74.3%, t(19) = 3.9, P = 0.001), for an observed difference of 7.2% (95% confidence interval: 3.3, 11.0). Mean quiz scores of group 2 did not differ significantly (quiz 1: 77.0% and quiz 2: 79.7%). There was no significant main effect of group or a group by time interaction effect. Students rated the simulations as equally effective in teaching shock physiology (P = 0.88); however, the manikin simulation was regarded as more effective in teaching shock treatment (P < 0.001). Most students (73.7%) preferred the manikin simulation. The two simulations may be of similar efficacy for educating students on the physiology of shock; however, the data suggest improved learning when web-based simulation precedes manikin use. This finding warrants further study.

  16. Model Based Optimization of Integrated Low Voltage DC-DC Converter for Energy Harvesting Applications

    NASA Astrophysics Data System (ADS)

    Jayaweera, H. M. P. C.; Muhtaroğlu, Ali

    2016-11-01

    A novel model-based methodology is presented to determine optimal device parameters for a fully integrated ultra-low-voltage DC-DC converter for energy harvesting applications. The proposed model makes it feasible to determine the most efficient number of charge-pump stages to fulfill the voltage requirement of the energy harvesting application. The proposed DC-DC converter power consumption model enables analytical derivation of the charge-pump efficiency when used together with the known LC-tank oscillator behavior under resonant conditions and the voltage step-up characteristics of the cross-coupled charge-pump topology. The model has been verified using a circuit simulator. The system optimized through the established model achieves more than 40% maximum efficiency, yielding 0.45 V output with a single stage, 0.75 V with two stages, and 0.9 V with three stages for 2.5 kΩ, 3.5 kΩ and 5 kΩ loads, respectively, with a 0.2 V input.
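
    A back-of-the-envelope version of the stage-count decision, using the textbook ideal output of an n-stage pump, Vout = (n + 1)·Vin - n·Iload/(f·C), rather than the paper's derived efficiency model; all component values below are hypothetical.

      def choose_stages(v_in, v_target, i_load, f_sw, c_stage, n_max=10):
          """Pick the smallest number of charge-pump stages whose ideal
          output meets the target; a Dickson-style approximation."""
          for n in range(1, n_max + 1):
              v_out = (n + 1) * v_in - n * i_load / (f_sw * c_stage)
              if v_out >= v_target:
                  return n, v_out
          return None

      # hypothetical numbers: 0.2 V input, 50 MHz, 10 pF per stage, 50 uA load
      print(choose_stages(0.2, 0.75, 50e-6, 50e6, 10e-12))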

  17. A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems

    NASA Technical Reports Server (NTRS)

    Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise

    2014-01-01

    This work presents the design and measured results of a fully integrated, switched-power, two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that enable frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified, state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages, an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch operating at the pulse-repetition interval that settles within 200 ns. In addition to the electrical design, a thermally optimized package was designed that allows for direct thermal radiation to maintain low junction temperatures for the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability over temperature demonstrates ultra-stable operation using integrated bias compensation circuitry, with less than 0.2 dB amplitude variation and 2° phase variation over a 70 °C range.

  18. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) an initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16); (2) training facility design using Distributed Simulation; (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback, including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded with the qualitative component, which augmented the quantitative findings; trainees' feedback was used to make iterative refinements to the simulation design and components (collaborative design), resulting in higher mean evaluation scores leading up to the implementation phase. Through application of innovative Distributed Simulation techniques, collaborative design and consistent evaluation techniques from the conceptual, development and implementation stages, fully immersive simulation for cardiovascular specialities is achievable and has the potential to be implemented more broadly.

  19. Airborne Precision Spacing for Dependent Parallel Operations Interface Study

    NASA Technical Reports Server (NTRS)

    Volk, Paul M.; Takallu, M. A.; Hoffler, Keith D.; Weiser, Jarold; Turner, Dexter

    2012-01-01

    This paper describes a usability study of proposed cockpit interfaces to support Airborne Precision Spacing (APS) operations for aircraft performing dependent parallel approaches (DPA). NASA has proposed an airborne system called Pair Dependent Speed (PDS), which uses its Airborne Spacing for Terminal Arrival Routes (ASTAR) algorithm to manage spacing intervals. Interface elements were designed to facilitate the input of APS-DPA spacing parameters to ASTAR and to convey PDS system information to the crew deemed necessary and/or helpful to conduct the operation, including: target speed, guidance mode, target aircraft depiction, and spacing trend indication. In the study, subject pilots observed recorded simulations using the proposed interface elements in which the ownship managed assigned spacing intervals from two other arriving aircraft. Simulations were recorded using the Aircraft Simulation for Traffic Operations Research (ASTOR) platform, a medium-fidelity simulator based on a modern Boeing commercial glass cockpit. Various combinations of the interface elements were presented to subject pilots, and feedback was collected via structured questionnaires. The results of the subject pilot evaluations show that the proposed design elements were acceptable and that preferable combinations exist within this set of elements. The results also point to potential improvements to be considered for implementation in future experiments.

  20. Numerical study of viscous dissipation during single drop impact on wetted surfaces

    NASA Astrophysics Data System (ADS)

    An, Yi; Yang, Shihao; Liu, Qingquan

    2017-11-01

    The splashing crown formed by the impact of a drop on a liquid film has been studied extensively since Yarin and Weiss (JFM 1995). The motion of the crown base is believed to be kinematic, which yields the equation R = (2/(3H))^(1/4) (T - T0)^(1/2). This equation is believed to overestimate the crown size by about 15%, while Trujillo and Lee (PoF 2001) found the influence of the Reynolds number to be minor. Considering the dissipation in the initial stage of the impact, Gao and Li (PRE, 2015) obtained a well-validated equation. However, how to estimate the dissipation still merits detailed discussion. We carried out a series of VOF simulations with a special focus on the influence of viscosity. The simulation is based on the Basilisk code to take advantage of adaptive mesh refinement. We found that the role of dissipation can be divided into three stages. When T > 1, the commonly used shallow-water equation provides a good approximation, while the initial condition should be considered properly. Between these two stages, viscous dissipation is the governing factor and thus causes inaccurate estimation of the crown base motion in the third stage. This work was financially supported by the National Natural Science Foundation of China (No. 11672310, No. 11372326).

  1. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    PubMed

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study with stressed storage conditions is preferred to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors. In addition, shelf life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to shelf life predictions closer to those based on long-term stability data.
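
    A sketch of the accelerated-stability workflow with a simple nonparametric bootstrap interval, assuming zero-order degradation (loss = k·t) and made-up rate data; the paper's one- and two-stage formulations are more elaborate.

      import numpy as np

      def shelf_life(temps_c, rates, t_store_c=25.0, spec_drop=5.0, n_boot=2000):
          """Fit ln k = ln A - Ea/(R T) to accelerated-condition degradation
          rates, then predict the time to a spec_drop % potency loss at the
          storage temperature, with a percentile bootstrap interval."""
          x = 1.0 / (np.asarray(temps_c) + 273.15)
          y = np.log(rates)                        # rates in %/month
          def predict(xs, ys):
              slope, intercept = np.polyfit(xs, ys, 1)
              k_store = np.exp(intercept + slope / (t_store_c + 273.15))
              return spec_drop / k_store           # months to reach the limit
          est = predict(x, y)
          idx = np.arange(len(x))
          boots = [predict(x[s], y[s])
                   for s in (np.random.choice(idx, len(idx)) for _ in range(n_boot))]
          return est, np.percentile(boots, [2.5, 97.5])

      # hypothetical accelerated data: degradation rate (%/month) at 40-70 C
      temps = [40, 40, 50, 50, 60, 60, 70, 70]
      rates = [0.8, 0.9, 1.9, 2.0, 4.2, 4.4, 8.8, 9.2]
      print(shelf_life(temps, rates))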

  2. Methodical aspects of rearing decapod larvae, Pagurus bernhardus (Paguridae) and Carcinus maenas (Portunidae)

    NASA Astrophysics Data System (ADS)

    Dawirs, R. R.

    1982-12-01

    Improved methods for experimental rearing of Pagurus bernhardus and Carcinus maenas larvae are presented. Isolated maintenance was found essential for reliable statistical evaluation of results obtained from stages older than zoea-1. Only by isolated rearing is it possible to calculate mean values ±95% confidence intervals of stage duration. Mean values (without confidence intervals) can only be given for group-reared larvae if mortality is zero. Compared to group rearing, isolated rearing led to better survival, shorter periods of development and stimulated growth. Due to different swimming behavior, P. bernhardus zoeae needed larger water volumes than Carcinus maenas larvae. P. bernhardus zoeae were reared with best results when isolated in Petri dishes (ca. 50 ml). They fed on newly hatched brine shrimp nauplii (Artemia spp.). The P. bernhardus megalopa did not require any gastropod shell or substratum; it developed best in glass vials without any food. C. maenas larvae could be reared most successfully in glass vials (ca. 20 ml) under a simulated day-night regime (LD 16:8); constant darkness had a detrimental effect on development, leading to prolonged stage durations. C. maenas larvae were fed a mixture of newly hatched brine shrimp nauplii and rotifers (Brachionus plicatilis).

  3. Studies on thermal decomposition behaviors of polypropylene using molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Huang, Jinbao; He, Chao; Tong, Hong; Pan, Guiying

    2017-11-01

    Polypropylene (PP) is one of the main components of waste plastics. In order to understand the mechanism of PP thermal decomposition, the pyrolysis behaviour of PP has been simulated from 300 to 1000 K under periodic boundary conditions by the molecular dynamics method, based on the AMBER force field. The simulation results show that the pyrolysis process of PP can be divided into three stages: a low-temperature pyrolysis stage, an intermediate-temperature stage and a high-temperature pyrolysis stage. PP pyrolysis proceeds largely by random main-chain scission, and the possible formation mechanisms of the major pyrolysis products were analyzed.

  4. Virtual reality simulation training for health professions trainees in gastrointestinal endoscopy.

    PubMed

    Walsh, Catharine M; Sherlock, Mary E; Ling, Simon C; Carnahan, Heather

    2012-06-13

    Traditionally, training in gastrointestinal endoscopy has been based upon an apprenticeship model, with novice endoscopists learning basic skills under the supervision of experienced preceptors in the clinical setting. Over the last two decades, however, the growing awareness of the need for patient safety has brought the issue of simulation-based training to the forefront. While the use of simulation-based training may have important educational and societal advantages, the effectiveness of virtual reality gastrointestinal endoscopy simulators has yet to be clearly demonstrated. To determine whether virtual reality simulation training can supplement and/or replace early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. Health professions, educational and computer databases were searched until November 2011 including The Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, Scopus, Web of Science, Biosis Previews, CINAHL, Allied and Complementary Medicine Database, ERIC, Education Full Text, CBCA Education, Career and Technical Education @ Scholars Portal, Education Abstracts @ Scholars Portal, Expanded Academic ASAP @ Scholars Portal, ACM Digital Library, IEEE Xplore, Abstracts in New Technologies and Engineering and Computer & Information Systems Abstracts. The grey literature until November 2011 was also searched. Randomised and quasi-randomised clinical trials comparing virtual reality endoscopy (oesophagogastroduodenoscopy, colonoscopy and sigmoidoscopy) simulation training versus any other method of endoscopy training including conventional patient-based training, in-job training, training using another form of endoscopy simulation (e.g. low-fidelity simulator), or no training (however defined by authors) were included.  Trials comparing one method of virtual reality training versus another method of virtual reality training (e.g. comparison of two different virtual reality simulators) were also included. Only trials measuring outcomes on humans in the clinical setting (as opposed to animals or simulators) were included. Two authors (CMS, MES) independently assessed the eligibility and methodological quality of trials, and extracted data on the trial characteristics and outcomes. Due to significant clinical and methodological heterogeneity it was not possible to pool study data in order to perform a meta-analysis. Where data were available for each continuous outcome we calculated standardized mean difference with 95% confidence intervals based on intention-to-treat analysis. Where data were available for dichotomous outcomes we calculated relative risk with 95% confidence intervals based on intention-to-treat-analysis. Thirteen trials, with 278 participants, met the inclusion criteria. Four trials compared simulation-based training with conventional patient-based endoscopy training (apprenticeship model) whereas nine trials compared simulation-based training with no training. Only three trials were at low risk of bias. Simulation-based training, as compared with no training, generally appears to provide participants with some advantage over their untrained peers as measured by composite score of competency, independent procedure completion, performance time, independent insertion depth, overall rating of performance or competency error rate and mucosal visualization. 
    Alternatively, there was no conclusive evidence that simulation-based training was superior to conventional patient-based training, although data were limited. The results of this systematic review indicate that virtual reality endoscopy training can be used to effectively supplement early conventional endoscopy training (apprenticeship model) in diagnostic oesophagogastroduodenoscopy, colonoscopy and/or sigmoidoscopy for health professions trainees with limited or no prior endoscopic experience. However, there remains insufficient evidence to advise for or against the use of virtual reality simulation-based training as a replacement for early conventional endoscopy training (apprenticeship model) for health professions trainees with limited or no prior endoscopic experience. There is a great need for the development of a reliable and valid measure of endoscopic performance prior to the completion of further randomised clinical trials with high methodological quality.

  5. Coupling effect and control strategies of the maglev dual-stage inertially stabilization system based on frequency-domain analysis.

    PubMed

    Lin, Zhuchong; Liu, Kun; Zhang, Li; Zeng, Delin

    2016-09-01

    The maglev dual-stage inertial stabilization (MDIS) system is a newly proposed system which combines a conventional two-axis gimbal assembly and a 5-DOF (degree-of-freedom) magnetic bearing with vernier tilting capacity to perform dual-stage stabilization of the LOS of the suspended optical instrument. Compared with a traditional dual-stage system, the maglev dual-stage system exhibits different characteristics due to the negative position stiffness of the magnetic forces, which introduces additional coupling into the dual-stage control system. In this paper, the effect of this coupling on system performance is addressed based on frequency-domain analysis, including disturbance rejection, fine-stage saturation and coarse-stage structural resonance suppression. The differences between various control strategies are also discussed, including pile-up (PU), stabilize-follow (SF) and stabilize-compensate (SC). A number of principles for the design of a maglev dual-stage system are proposed. A general process is also suggested, which leads to a cost-effective design striking a balance between high performance and complexity. Finally, a simulation example is presented to illustrate the arguments in the paper. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

    The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, finite-volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
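
    As a deliberately simplified illustration of the properties claimed above, the Python sketch below implements a first-order upwind finite-volume advection step. It is not the ADER-DT/WENO scheme of the paper, but it shares three of the advertised properties: single-stage time stepping, strict positivity (for a CFL number between 0 and 1), and a single halo-style data exchange per step (played here by np.roll); all parameter values are illustrative.

        import numpy as np

        def upwind_advection(u, c, dx, dt, nsteps):
            """First-order upwind finite-volume advection on a periodic domain.

            A drastically simplified stand-in for the ADER-DT/WENO scheme: it is
            single-stage, strictly positivity-preserving for 0 <= nu <= 1 (each
            new cell mean is a convex combination of old cell means), and needs
            only one halo exchange per step in a parallel setting.
            """
            nu = c * dt / dx                  # CFL number; requires 0 <= nu <= 1
            assert 0.0 <= nu <= 1.0
            for _ in range(nsteps):
                # np.roll plays the role of the single data exchange per step
                u = u - nu * (u - np.roll(u, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.exp(-200.0 * (x - 0.3) ** 2)      # non-negative initial profile
        u1 = upwind_advection(u0, c=1.0, dx=x[1] - x[0], dt=0.004, nsteps=100)
        print(u1.min() >= 0.0)                    # positivity is preserved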

  7. Finite element simulation of ultrasonic waves in corroded reinforced concrete for early-stage corrosion detection

    NASA Astrophysics Data System (ADS)

    Tang, Qixiang; Yu, Tzuyang

    2017-04-01

    In reinforced concrete (RC) structures, corrosion of steel rebar introduces internal stress at the interface between rebar and concrete, ultimately leading to debonding and separation between rebar and concrete. Effective early-stage detection of steel rebar corrosion can significantly reduce maintenance costs and enable early-stage repair. In this paper, ultrasonic detection of early-stage steel rebar corrosion inside concrete is numerically investigated using the finite element method (FEM). Commercial FEM software (ABAQUS) was used in all simulation cases. The steel rebar was simplified and modeled as a cylindrical structure. Ultrasonic elastic waves at 1 MHz were generated at the interface between rebar and concrete. Two-dimensional plane-strain elements were adopted in all FE models. The formation of surface rust on the rebar was modeled by changing material properties and expanding element size in order to simulate the rust interface between rebar and concrete and the presence of interfacial stress. Two types of surface rust (corroded regions) were considered. Time-domain and frequency-domain responses of displacement were studied. From our simulation results, two corrosion indicators, baseline (b) and center frequency (fc), were proposed for detecting and quantifying corrosion.

  8. Evaluation of the effect of different stretching patterns on force decay and tensile properties of elastomeric ligatures

    PubMed Central

    Aminian, Amin; Nakhaei, Samaneh; Agahi, Raha Habib; Rezaeizade, Masoud; Aliabadi, Hamed Mirzazadeh; Heidarpour, Majid

    2015-01-01

    Background: There has been extensive research on elastomeric ligatures, but clinical conditions in different stages of treatment are not exactly the same as laboratory conditions. The aim of this in vitro study was to simulate clinical conditions and evaluate the effect of three stretching patterns on the amount of force, tensile strength (TS) and extension to TS of the elastomers during 8 weeks. Materials and Methods: Forces, TS and extension to TS of two different brands of elastomers were measured at initial, 24 h and 2, 4, and 8-week intervals using a testing machine. During the study period, the elastomers were stored in three different types of jigs (uniform stretching, 1 and 3 mm point stretching) designed by the computer-aided design and computer-aided manufacturing technique in order to simulate the different stages of orthodontic treatment. Results: The elastomeric ligatures under study exhibited a similar force decay pattern. The maximum force decay occurred during the first 24 h (49.9% ± 15%) and the amount of force decay was 75.7% ± 8% after 8 weeks. In general, the TS decreased during the study period, and the amount of extension to TS increased. Conclusion: Although the elastic behavior of all ligatures under study was similar, the amount of residual force, TS and extension to TS increased in elastomers under the point stretching pattern. PMID:26759597

  10. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

    To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best suited to the distributional characteristics of a given data set. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared using simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was only impacted by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
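
    As a hedged illustration of two of the three approaches compared above (the robust method is omitted), the following Python sketch computes a 95% reference interval parametrically and non-parametrically on simulated data; the analyte values and sample sizes are invented for the example.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)

        def parametric_ri(x, coverage=0.95):
            """Gaussian (parametric) reference interval: mean +/- z * SD."""
            z = stats.norm.ppf(0.5 + coverage / 2.0)      # 1.96 for 95%
            return x.mean() - z * x.std(ddof=1), x.mean() + z * x.std(ddof=1)

        def nonparametric_ri(x, coverage=0.95):
            """Non-parametric reference interval: empirical percentiles."""
            lo = 100.0 * (1.0 - coverage) / 2.0
            return tuple(np.percentile(x, [lo, 100.0 - lo]))

        # Gaussian analyte: the parametric interval should be least biased
        gauss = rng.normal(5.0, 1.0, size=240)
        print(parametric_ri(gauss), nonparametric_ri(gauss))

        # Skewed analyte: log-transform first and back-transform, per the
        # abstract's advice to transform toward Gaussianity when possible
        skewed = rng.lognormal(1.0, 0.5, size=240)
        lo, hi = parametric_ri(np.log(skewed))
        print((np.exp(lo), np.exp(hi)), nonparametric_ri(skewed))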

  11. Tapping linked to function and structure in premanifest and symptomatic Huntington disease

    PubMed Central

    Bechtel, N.; Scahill, R.I.; Rosas, H.D.; Acharya, T.; van den Bogaard, S.J.A.; Jauffret, C.; Say, M.J.; Sturrock, A.; Johnson, H.; Onorato, C.E.; Salat, D.H.; Durr, A.; Leavitt, B.R.; Roos, R.A.C.; Landwehrmeyer, G.B.; Langbehn, D.R.; Stout, J.C.; Tabrizi, S.J.; Reilmann, R.

    2010-01-01

    Objective: Motor signs are functionally disabling features of Huntington disease. Characteristic motor signs define disease manifestation. Their severity and onset are assessed by the Total Motor Score of the Unified Huntington's Disease Rating Scale, a categorical scale limited by interrater variability and insensitivity in premanifest subjects. More objective, reliable, and precise measures are needed which permit clinical trials in premanifest populations. We hypothesized that motor deficits can be objectively quantified by force-transducer-based tapping and correlate with disease burden and brain atrophy. Methods: A total of 123 controls, 120 premanifest, and 123 early symptomatic gene carriers performed a speeded and a metronome tapping task in the multicenter study TRACK-HD. Total Motor Score, CAG repeat length, and MRIs were obtained. The premanifest group was subdivided into A and B, based on the proximity to estimated disease onset, the manifest group into stages 1 and 2, according to their Total Functional Capacity scores. Analyses were performed centrally and blinded. Results: Tapping variability distinguished between all groups and subgroups in both tasks and correlated with 1) disease burden, 2) clinical motor phenotype, 3) gray and white matter atrophy, and 4) cortical thinning. Speeded tapping was more sensitive to the detection of early changes. Conclusion: Tapping deficits are evident throughout manifest and premanifest stages. Deficits are more pronounced in later stages and correlate with clinical scores as well as regional brain atrophy, which implies a link between structure and function. The ability to track motor phenotype progression with force-transducer-based tapping measures will be tested prospectively in the TRACK-HD study. GLOSSARY CoV = coefficient of variation; DBS = disease burden score; Freq = frequency; HD = Huntington disease; ICV = intracranial volume; IOI = interonset interval; ΔIOI = deviation from interonset interval; IPI = interpeak interval; ΔIPI = deviation from interpeak interval; ITI = intertap interval; log = logarithmic; MT = metronome tapping; ΔMTI = deviation from midtap interval; preHD = premanifest Huntington disease; RT = reaction time; ST = speeded tapping; TD = tap duration; TF = tapping force; TFC = Total Functional Capacity; UHDRS = Unified Huntington's Disease Rating Scale; UHDRS-TMS = Unified Huntington's Disease Rating Scale-Total Motor Score; VBM = voxel-based morphometry. PMID:21068430

  12. Unique orientations and rotational dynamics of a 1-butyl-3-methyl-imidazolium hexafluorophosphate ionic liquid at the gas-liquid interface: the effects of the hydrogen bond and hydrophobic interactions.

    PubMed

    Yang, Deshuai; Fu, Fangjia; Li, Li; Yang, Zhen; Wan, Zheng; Luo, Yi; Hu, Na; Chen, Xiangshu; Zeng, Guixiang

    2018-05-07

    Here we report a series of molecular dynamics simulations of the orientations and rotational dynamics of the 1-butyl-3-methyl-imidazolium hexafluorophosphate ([BMIM][PF6]) ionic liquid (IL) at the gas-liquid interface. Compared to the bulk phase, the [BMIM]+ cations at the interface prefer to orient themselves with their imidazolium rings perpendicular to the gas-IL interface plane and their butyl chains pointing toward the vacuum phase. Such a preferential orientation can be attributed to the combined effect of the hydrophobic interactions and the optimum loss of hydrogen bonds (HBs). More interestingly, our simulation results demonstrate that the butyl chains of the cations exhibit a two-stage rotational behavior at the interface: in the first stage the butyl chains remain in the vacuum phase, while the second stage corresponds to the butyl chains migrating from the vacuum phase into the liquid phase. A further detailed analysis reveals that their rotational motions in the first stage are mainly determined by the weakened HB strength at the interface, while those in the second stage are dominated by their hydrophobic interactions. Such a unique rotational behavior of the butyl chains differs significantly from that of the anions and of the imidazolium rings of the cations at the interface, owing to the absence of hydrophobic interactions in the latter two cases. In addition, a new and simple time correlation function (TCF) was constructed to quantitatively identify the relevant hydrophobic interaction of alkyl chains. Therefore, our simulation results provide a molecular-level understanding of the effects of HB and hydrophobic interactions on the unique properties of imidazolium-based ILs at the gas-liquid interface.

  13. Return volatility interval analysis of stock indexes during a financial crash

    NASA Astrophysics Data System (ADS)

    Li, Wei-Shen; Liaw, Sy-Sang

    2015-09-01

    We investigate the intervals between return volatilities above a certain threshold q for data sets from 10 countries during the 2008/2009 global financial crisis, and divide these data into several stages according to stock price tendencies: a plunging stage (stage 1), a fluctuating or rebounding stage (stage 2) and a soaring stage (stage 3). For different thresholds q, the cumulative distribution function always exhibits a power-law tail. We find that the absolute value of the power-law exponent is lowest in stage 1 for various types of markets, and increases monotonically from stage 1 to stage 3 in emerging markets. The fractal dimension properties of the return volatility interval series provide some surprising results. We find that developed markets have strong persistence that transforms to weaker correlation in the plunging and soaring stages. In contrast, emerging markets fail to exhibit such a transformation, but rather show constant-correlation behavior with the recurrence of extreme return volatility in the corresponding stages during a crash. We believe this long-memory property found in the recurrence-interval series, especially for developed markets, plays an important role in volatility clustering.
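
    A minimal Python sketch of the interval analysis described above: extract the recurrence intervals between volatility exceedances of a threshold q and estimate the power-law tail exponent. The Hill estimator used here is one standard choice (the abstract does not specify the fitting method), and the Student-t "returns" are synthetic stand-ins for real index data.

        import numpy as np

        rng = np.random.default_rng(0)

        def recurrence_intervals(returns, q):
            """Intervals (in time steps) between successive |return| > q."""
            t = np.flatnonzero(np.abs(returns) > q)
            return np.diff(t)

        def hill_tail_exponent(intervals, k=200):
            """Hill estimator of alpha in P(interval > x) ~ x**(-alpha),
            based on the k largest observations."""
            x = np.sort(np.asarray(intervals, dtype=float))
            top, threshold = x[-k:], x[-k - 1]
            return 1.0 / np.mean(np.log(top / threshold))

        # fat-tailed synthetic returns; real use would feed one market stage
        # (plunging / fluctuating / soaring) at a time
        r = rng.standard_t(df=3, size=200_000)
        q = np.quantile(np.abs(r), 0.98)        # threshold at 98th percentile
        iv = recurrence_intervals(r, q)
        print(len(iv), hill_tail_exponent(iv))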

  14. Geometry and Pore Pressure Shape the Pattern of the Tectonic Tremors Activity on the Deep San Andreas Fault with Periodic, Period-Multiplying Recurrence Intervals

    NASA Astrophysics Data System (ADS)

    Mele Veedu, D.; Barbot, S.

    2014-12-01

    A never-before-recorded pattern of periodic, chaotic, and doubled earthquake recurrence intervals was detected in the sequence of deep tectonic tremors of the Parkfield segment of the San Andreas Fault (Shelly, 2010). These observations may be the most puzzling seismological observations of the last decade: the pattern oscillated regularly with a period doubling of 3 and 6 days from mid-2003 until it was disrupted by the 2004 Mw 6.0 Parkfield earthquake, but by the end of 2007 the previous pattern had resumed. Here, we assume that the complex dynamics of the tremors is caused by slip on a single asperity on the San Andreas Fault with homogeneous friction properties. We developed a three-dimensional model based on the rate-and-state friction law with a single patch and simulated fault slip during all stages of the earthquake cycle using the boundary integral method of Lapusta & Liu (2009). We find that homogeneous penny-shaped asperities cannot induce the observed period doubling, and that the geometry itself of the velocity-weakening asperity is critical in enabling the characteristic behavior of the Parkfield tremors. We also find that the system is sensitive to perturbations in pore pressure, such that those induced by the 2004 Parkfield earthquake are sufficient to dramatically alter the dynamics of the tremors for two years, as observed by Shelly (2010). An important finding is that tremor magnitude is amplified more by macroscopic slip duration on the source asperity than by slip amplitude, indicative of a time-dependent process for the breakage of micro-asperities that leads to seismic emissions. Our simulated event durations are in the range of 25 to 150 seconds, closely comparable to the event duration of a typical Parkfield tectonic tremor. Our simulations reproduce the unique observations of the Parkfield tremor activity. This study vividly illustrates the critical role of geometry in shaping the dynamics of fault slip evolution on a seismogenic fault.

  15. The String Stability of a Trajectory-Based Interval Management Algorithm in the Midterm Airspace

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.

    2015-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides terminal controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain a precise spacing interval behind a target aircraft. As the percentage of IM-equipped aircraft increases, controllers may provide IM clearances to sequences, or strings, of IM-equipped aircraft. It is important for these strings to maintain stable performance. This paper describes an analytical assessment of the string stability of the latest version of NASA's IM algorithm and a fast-time simulation designed to characterize the string performance of the IM algorithm. The analytical assessment showed that the spacing algorithm has stable poles, indicating that a spacing error perturbation will be reduced as a function of string position. The fast-time simulation investigated IM operations at two airports using constraints associated with the midterm airspace, including limited information about the target aircraft's intended speed profile and limited information about the wind forecast on the target aircraft's route. The results of the fast-time simulation demonstrated that the performance of the spacing algorithm is acceptable for strings of moderate length; however, there is some degradation in IM performance as a function of string position.
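
    To make the string-stability idea concrete, here is a deliberately simple Python sketch. It is not NASA's IM algorithm: each follower's spacing error relaxes toward its predecessor's under a hypothetical first-order law whose discrete pole, 1 - dt/tau, lies inside the unit circle, so a perturbation on the lead aircraft decays with position in the string; gains and time constants are invented.

        import numpy as np

        def simulate_string(n_aircraft, nsteps, dt=0.5, tau=20.0):
            """Propagate spacing errors down a string of aircraft using the
            hypothetical relaxation law e_i' = (e_{i-1} - e_i) / tau."""
            e = np.zeros((nsteps, n_aircraft))
            e[0, 0] = 30.0                    # 30 s spacing error on the lead
            for k in range(nsteps - 1):
                e[k + 1, 0] = e[k, 0] * (1 - dt / tau)   # lead error decays
                for i in range(1, n_aircraft):
                    e[k + 1, i] = e[k, i] + dt * (e[k, i - 1] - e[k, i]) / tau
            return e

        e = simulate_string(n_aircraft=8, nsteps=4000)
        # peak |error| shrinks with string position: string stability
        print(np.abs(e).max(axis=0))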

  16. An empirical comparison of SPM preprocessing parameters to the analysis of fMRI data.

    PubMed

    Della-Maggiore, Valeria; Chau, Wilkin; Peres-Neto, Pedro R; McIntosh, Anthony R

    2002-09-01

    We present the results from two sets of Monte Carlo simulations aimed at evaluating the robustness of some preprocessing parameters of SPM99 for the analysis of functional magnetic resonance imaging (fMRI). Statistical robustness was estimated by implementing parametric and nonparametric simulation approaches based on the images obtained from an event-related fMRI experiment. Simulated datasets were tested for combinations of the following parameters: basis function, global scaling, low-pass filter, high-pass filter and autoregressive modeling of serial autocorrelation. Based on single-subject SPM analysis, we derived the following conclusions that may serve as a guide for initial analysis of fMRI data using SPM99: (1) The canonical hemodynamic response function is a more reliable basis function to model the fMRI time series than HRF with time derivative. (2) Global scaling should be avoided since it may significantly decrease the power depending on the experimental design. (3) The use of a high-pass filter may be beneficial for event-related designs with fixed interstimulus intervals. (4) When dealing with fMRI time series with short interstimulus intervals (<8 s), the use of first-order autoregressive model is recommended over a low-pass filter (HRF) because it reduces the risk of inferential bias while providing a relatively good power. For datasets with interstimulus intervals longer than 8 seconds, temporal smoothing is not recommended since it decreases power. While the generalizability of our results may be limited, the methods we employed can be easily implemented by other scientists to determine the best parameter combination to analyze their data.

  17. Multiple Sensing Application on Wireless Sensor Network Simulation using NS3

    NASA Astrophysics Data System (ADS)

    Kurniawan, I. F.; Bisma, R.

    2018-01-01

    Hardware enhancements make it possible to install various sensor devices on a single monitoring node, which enables users to acquire multiple data streams simultaneously. Constructing a multiple sensing application in NS3 is a challenging task, since a number of aspects such as wireless communication, packet transmission patterns, and the energy model must be taken into account. Despite the numerous types of monitoring data available, this study considers only two: periodic and event-based data. A periodic source generates monitoring data at a configured interval, while an event-based source transmits data when a certain predetermined condition is met (see the sketch below). This study therefore attempts to cover the aspects mentioned above in NS3. Several simulations are performed with different numbers of nodes on an arbitrary communication scheme.
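
    The following plain-Python discrete-event sketch (conceptual only; NS3 itself is a C++ framework) illustrates the two traffic types the study models: a periodic source that fires on a fixed interval and an event-based source that transmits only when a sampled reading crosses a threshold. All parameter values are illustrative.

        import heapq
        import random

        random.seed(1)

        def run(sim_time=60.0, period=5.0, threshold=0.95):
            """Event queue with two processes: a periodic sender and a
            once-per-second sampler that sends only above `threshold`."""
            events, log = [(0.0, "periodic"), (0.0, "sample")], []
            heapq.heapify(events)
            while events:
                t, kind = heapq.heappop(events)
                if t > sim_time:
                    break
                if kind == "periodic":
                    log.append((round(t, 1), "periodic packet"))
                    heapq.heappush(events, (t + period, "periodic"))
                else:                  # event-based sensor samples its input
                    if random.random() > threshold:
                        log.append((round(t, 1), "event packet"))
                    heapq.heappush(events, (t + 1.0, "sample"))
            return log

        for entry in run():
            print(entry)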

  18. Ultraviolet to Infrared SED (Spectral Energy Distribution) Analysis of Nearby Late-Stage Mergers

    NASA Astrophysics Data System (ADS)

    Weiner, Aaron S.; Smith, Howard A.; Ashby, Matthew; Martínez-Galarza, Juan Rafael; Ramos Padilla, Andres; Hung, Chao-Ling; Dietrich, Jeremy; Lanz, Lauranne; Hayward, Christopher; Rosenthal, Lee; Willner, Steven; Zezas, Andreas

    2018-01-01

    We present an analysis of the fundamental properties of nearby merging galaxies based on an in-depth analysis of their spectral energy distributions. The Late-Stage Interacting Galaxy Sample (LSIGS) cross-correlates the Revised IRAS-FSC Redshift Catalogue (Wang et al. 2014) with Galaxy Zoo (Lintott et al. 2008, 2011). LSIGS builds on and extends SIGS (Spitzer Interacting Galaxy Sample; Lanz et al. 2013, Brassington et al. 2015) in two ways. First, it enlarges the sample considerably to 453 systems, increasing the statistical power of the analysis significantly. Second, it includes galaxies in the most advanced merger stage, during coalescence, filling a gap in the SIGS sample. We present full ultraviolet (UV) to far-infrared (FIR) aperture photometry for 50 galaxies in this sample, 40 of which are late-stage mergers, selected based on the availability of both UV and SPIRE observations. These have subsequently been fit and analyzed with CIGALE (Code Investigating Galaxy Emission; Burgarella 2005) in order to retrieve key physical properties of the galaxies, including the star-formation rate (SFR), AGN fraction, dust luminosity, bolometric luminosity, and stellar and gas mass. We apply the same analysis to hydrodynamical simulations created with GADGET-3, using SUNRISE for the radiative transfer. Comparing the observations with the simulations, CIGALE recovers the simulated values accurately for fAGN > 0.3. Additionally, galaxies in the midst of coalescence have significantly increased sSFR compared to both early and late-stage mergers, while the gas mass and alpha increase significantly from early-stage mergers to those in coalescence. Furthermore, we find a linear anti-correlation between alpha and both the log(60/100 μm) flux and, more interestingly, the compactness. Lastly, we put forward the idea of using the best-fit age of the oldest stars and the e-folding time of the stellar population, τmain, in conjunction to predict the likelihood of a galaxy being in a late-stage merger or in the midst of coalescence.

  19. Pulsations Induced by Vibrations in Aircraft Engine Two-Stage Pump

    NASA Astrophysics Data System (ADS)

    Gafurov, S. A.; Salmina, V. A.; Handroos, H.

    2018-01-01

    This paper describes a phenomenon of induced pressure pulsations inside a two-stage aircraft engine pump. The pump considered consists of a screw-centrifugal stage and a gear stage. The paper describes the causes of loading on the two-stage pump elements. A number of hypotheses for the generation of pressure pulsations inside the pump were considered. The main focus is on phenomena that are not related to the pump's mode of operation. The analysis showed that pump vibrations, as well as self-oscillations of pump elements, are the main causes leading to the generation of trailing vortices. The analysis was conducted by means of FEM and CFD simulations, as well as experimental investigations, to obtain the natural frequencies and the flow structure inside the screw-centrifugal stage. To perform accurate simulations, adequate boundary conditions were considered. Cavitation and turbulence phenomena were also taken into account. The results show that the generated trailing vortices lead to high-frequency loading of the impeller of the screw-centrifugal stage and can be a cause of bearing damage.

  20. Methods for Estimating Kidney Disease Stage Transition Probabilities Using Electronic Medical Records

    PubMed Central

    Luo, Lola; Small, Dylan; Stewart, Walter F.; Roy, Jason A.

    2013-01-01

    Chronic diseases are often described by stages of severity. Clinical decisions about what to do are influenced by the stage, whether a patient is progressing, and the rate of progression. For chronic kidney disease (CKD), relatively little is known about the transition rates between stages. To address this, we used electronic health records (EHR) data on a large primary care population, which should have the advantage of having both sufficient follow-up time and sample size to reliably estimate transition rates for CKD. However, EHR data have some features that threaten the validity of any analysis. In particular, the timing and frequency of laboratory values and clinical measurements are not determined a priori by research investigators, but rather, depend on many factors, including the current health of the patient. We developed an approach for estimating CKD stage transition rates using hidden Markov models (HMMs), when the level of information and observation time vary among individuals. To estimate the HMMs in a computationally manageable way, we used a “discretization” method to transform daily data into intervals of 30 days, 90 days, or 180 days. We assessed the accuracy and computation time of this method via simulation studies. We also used simulations to study the effect of informative observation times on the estimated transition rates. Our simulation results showed good performance of the method, even when missing data are non-ignorable. We applied the methods to EHR data from over 60,000 primary care patients who have chronic kidney disease (stage 2 and above). We estimated transition rates between six underlying disease states. The results were similar for men and women. PMID:25848580
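
    A crude Python sketch of the discretization step described above: irregularly timed stage observations are snapped onto a grid of 30-day intervals, and directly observed stage-to-stage transitions are counted. The paper fits hidden Markov models on top of such a grid, which this sketch does not attempt; the column names and toy records are invented.

        import numpy as np
        import pandas as pd

        def transition_matrix(obs, interval_days=30, n_states=5):
            """Snap each patient's CKD stage observations to fixed intervals
            and count transitions between successive occupied intervals."""
            counts = np.zeros((n_states, n_states))
            for _, g in obs.groupby("patient"):
                g = g.sort_values("day")
                bins = (g["day"] // interval_days).to_numpy()
                stages = g["stage"].to_numpy()
                last = {b: s for b, s in zip(bins, stages)}   # last obs per bin
                ordered = [last[b] for b in sorted(last)]
                for a, b in zip(ordered[:-1], ordered[1:]):
                    counts[a - 1, b - 1] += 1
            rows = counts.sum(axis=1, keepdims=True)
            return np.divide(counts, rows, out=np.zeros_like(counts),
                             where=rows > 0)

        # toy EHR-like records: (patient, day of measurement, CKD stage 1-5)
        obs = pd.DataFrame({"patient": [1, 1, 1, 2, 2, 2],
                            "day":     [0, 45, 200, 10, 95, 130],
                            "stage":   [2, 2, 3, 3, 3, 4]})
        print(transition_matrix(obs))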

  1. DEMONSTRATION OF THE NEXT-GENERATION CAUSTIC-SIDE SOLVENT EXTRACTION SOLVENT WITH 2-CM CENTRIFUGAL CONTRACTORS USING TANK 49H WASTE AND WASTE SIMULANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, R.; Peters, T.; Crowder, M.

    2011-09-27

    Researchers successfully demonstrated the chemistry and process equipment of the Caustic-Side Solvent Extraction (CSSX) flowsheet using MaxCalix for the decontamination of high level waste (HLW). The demonstration was completed using a 12-stage, 2-cm centrifugal contactor apparatus at the Savannah River National Laboratory (SRNL). This represents the first CSSX process demonstration of the MaxCalix solvent system with Savannah River Site (SRS) HLW. Two tests lasting 24 and 27 hours processed non-radioactive simulated Tank 49H waste and actual Tank 49H HLW, respectively. Conclusions from this work include the following. The CSSX process is capable of reducing 137Cs in high-level radioactive waste by a factor of more than 40,000 using five extraction, two scrub, and five strip stages. Tests demonstrated extraction and strip section stage efficiencies of greater than 93% for the Tank 49H waste test and greater than 88% for the simulant waste test. During a test with HLW, researchers processed 39 liters of Tank 49H solution and the waste raffinate had an average decontamination factor (DF) of 6.78E+04, with a maximum of 1.08E+05. A simulant waste solution (~34.5 liters) with an initial Cs concentration of 83.1 mg/L was processed and had an average DF greater than 5.9E+03, with a maximum DF of greater than 6.6E+03. The difference may be attributable to differences in contactor stage efficiencies. Test results showed the solvent can be stripped of cesium and recycled for ~25 solvent turnovers without the occurrence of any measurable solvent degradation or negative effects from minor components. Based on the performance of the 12-stage 2-cm apparatus with the Tank 49H HLW, the projected DF for MCU with seven extraction, two scrub, and seven strip stages operating at a nominal efficiency of 90% is ~388,000. At 95% stage efficiency, the DF in MCU would be ~3.2 million. Carryover of organic solvent in aqueous streams (and aqueous in organic streams) was less than 0.1% when processing Tank 49H HLW. The entrained solvent concentration measured in the decontaminated salt solution (DSS) was as much as ~140 mg/L, although that value may be overstated by as much as 50% due to modifier solubility in the DSS. The entrained solvent concentration was measured in the strip effluent (SE) and the results are pending. A steady-state concentration factor (CF) of 15.9 was achieved with Tank 49H HLW. Cesium distribution ratios [D(Cs)] were measured with non-radioactive Tank 49H waste simulant and actual Tank 49H waste. Below is a comparison of D(Cs) values from the ESS and 2-cm tests. Batch extraction-scrub-strip (ESS) tests yielded D(Cs) values for extraction of ~81-88 for tests with Tank 49H waste and waste simulant. The results from the 2-cm contactor tests were in agreement, with values of 58-92 for the Tank 49H HLW test and 54-83 for the simulant waste test. These values are consistent with the reference D(Cs) for extraction of ~60. In tests with Tank 49H waste and waste simulant, batch ESS tests measured D(Cs) values for the two scrub stages as ~3.5-5.0 for the first scrub stage and ~1.0-3.0 for the second scrub stage. In the Tank 49H test, the D(Cs) values for the 2-cm test were far from the ESS values. A D(Cs) value of 161 was measured for the first scrub stage and 10.8 for the second scrub stage. The data suggest that the scrub stage is not operating as effectively as intended.
For the simulant test, a D(Cs) value of 1.9 was measured for the first scrub stage; the sample from the second scrub stage was compromised. Measurements of the pH of all stage samples for the Tank 49H test showed that the pH for the extraction and scrub stages was 14 and the pH for the strip stages was ~7. It is expected that the pH of the second scrub stage would be ~12-13. Batch ESS tests measured D(Cs) values for the strip stages to be ~0.002-0.010. A high value in Strip No. 3 of a test with simulant solution has been attributed to issues associated with the limits of detection of the analytical method. In the 2-cm contactor tests, the first four strip stages of the Tank 49H waste test and all five strip stages in the simulant waste test had higher values than the ESS tests. Only the fifth strip stage D(Cs) value of the Tank 49H waste test matched that of the ESS tests. It is speculated that the less-than-optimal performance of the strip section is caused by inefficiencies in the scrub section. Because stripping is sensitive to pH, the elevated pH value in the second scrub stage may be the cause of the strip performance. In spite of the D(Cs) values obtained in the scrub and strip sections, testing showed that the solvent system is robust. Average DFs for the process far exceeded targets even though the scrub and strip stages did not function optimally. Correction of the issue in the scrub and strip stages is expected to yield even higher waste DFs.

  2. A pollen-based biome reconstruction over the last 3.562 million years in the Far East Russian Arctic - new insights into climate-vegetation relationships at the regional scale

    NASA Astrophysics Data System (ADS)

    Tarasov, P. E.; Andreev, A. A.; Anderson, P. M.; Lozhkin, A. V.; Leipe, C.; Haltia, E.; Nowaczyk, N. R.; Wennrich, V.; Brigham-Grette, J.; Melles, M.

    2013-12-01

    The recent and fossil pollen data obtained under the frame of the multi-disciplinary international El'gygytgyn Drilling Project represent a unique archive, which allows the testing of a range of pollen-based reconstruction approaches and the deciphering of changes in the regional vegetation and climate. In the current study we provide details of the biome reconstruction method applied to the late Pliocene and Quaternary pollen records from Lake El'gygytgyn. All terrestrial pollen taxa identified in the spectra from Lake El'gygytgyn were assigned to major vegetation types (biomes), which today occur near the lake and in the broader region of eastern and northern Asia and, thus, could be potentially present in this region during the past. When applied to the pollen spectra from the middle Pleistocene to present, the method suggests (1) a predominance of tundra during the Holocene, (2) a short interval during the marine isotope stage (MIS) 5.5 interglacial distinguished by cold deciduous forest, and (3) long phases of taiga dominance during MIS 31 and, particularly, MIS 11.3. These two latter interglacials seem to be some of the longest and warmest intervals in the study region within the past million years. During the late Pliocene-early Pleistocene interval (i.e., ~3.562-2.200 Ma), there is good correspondence between the millennial-scale vegetation changes documented in the Lake El'gygytgyn record and the alternation of cold and warm marine isotope stages, which reflect changes in the global ice volume and sea level. The biome reconstruction demonstrates changes in the regional vegetation from generally warmer/wetter environments of the earlier (i.e., Pliocene) interval towards colder/drier environments of the Pleistocene. The reconstruction indicates that the taxon-rich cool mixed and cool conifer forest biomes are mostly characteristic of the time prior to MIS G16, whereas the tundra biome becomes a prominent feature starting from MIS G6. These results consistently indicate that the study region supported significant tree populations during most of the interval prior to ~2.730 Ma. The cold- and drought-tolerant steppe biome first appears in the reconstruction ~3.298 Ma during the tundra-dominated MIS M2, whereas the tundra biome initially occurs between ~3.379 and ~3.378 Ma within MIS MG4. Prior to ~2.800 Ma, several other cold stages during this generally warm Pliocene interval were characterized by the tundra biome.

  3. Magnetostratigraphy susceptibility for the Guadalupian Series GSSPs (Middle Permian) in Guadalupe Mountains National Park and adjacent areas in West Texas

    USGS Publications Warehouse

    Wardlaw, Bruce R.; Ellwood, Brooks B.; Lambert, Lance L.; Tomkin, Jonathan H.; Bell, Gordon L.; Nestell, Galina P.

    2012-01-01

    Here we establish a magnetostratigraphy susceptibility zonation for the three Middle Permian Global boundary Stratotype Sections and Points (GSSPs) that have recently been defined, located in Guadalupe Mountains National Park, West Texas, USA. These GSSPs, all within the Middle Permian Guadalupian Series, define (1) the base of the Roadian Stage (base of the Guadalupian Series), (2) the base of the Wordian Stage and (3) the base of the Capitanian Stage. Data from two additional stratigraphic successions in the region, equivalent in age to the Kungurian–Roadian and Wordian–Capitanian boundary intervals, are also reported. Based on low-field, mass specific magnetic susceptibility (χ) measurements of 706 closely spaced samples from these stratigraphic sections and time-series analysis of one of these sections, we (1) define the magnetostratigraphy susceptibility zonation for the three Guadalupian Series Global boundary Stratotype Sections and Points; (2) demonstrate that χ datasets provide a proxy for climate cyclicity; (3) give quantitative estimates of the time it took for some of these sediments to accumulate; (4) give the rates at which sediments were accumulated; (5) allow more precise correlation to equivalent sections in the region; (6) identify anomalous stratigraphic horizons; and (7) give estimates for timing and duration of geological events within sections.

  4. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    PubMed

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods on a real-world data set.
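
    As a hedged stand-in for the survival-data methodology (not reproduced here), the Python sketch below estimates a covariate-specific treatment effect curve by kernel smoothing of a continuous outcome and attaches pointwise 95% percentile-bootstrap intervals; the paper's uniform confidence bands rest on a more elaborate resampling scheme. The trial data, bandwidth and grid are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)

        def effect_curve(x, y, trt, grid, h=0.5):
            """Difference between kernel-weighted mean outcomes of the two
            arms at each biomarker value in `grid`."""
            def local_mean(mask, g):
                w = np.exp(-0.5 * ((x[mask] - g) / h) ** 2)
                return np.sum(w * y[mask]) / np.sum(w)
            return np.array([local_mean(trt == 1, g) - local_mean(trt == 0, g)
                             for g in grid])

        # synthetic trial: treatment helps only for large biomarker values
        n = 600
        x = rng.uniform(0, 4, n)
        trt = rng.integers(0, 2, n)
        y = 0.5 * x + trt * np.maximum(x - 2, 0) + rng.normal(0, 1, n)

        grid = np.linspace(0.2, 3.8, 19)
        est = effect_curve(x, y, trt, grid)

        boot = np.empty((500, grid.size))    # pointwise percentile bootstrap
        for b in range(500):
            idx = rng.integers(0, n, n)
            boot[b] = effect_curve(x[idx], y[idx], trt[idx], grid)
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(np.column_stack([grid, est, lo, hi])[::6])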

  5. A contribution to regional stratigraphic correlations of the Afro-Brazilian depression - The Dom João Stage (Brotas Group and equivalent units - Late Jurassic) in Northeastern Brazilian sedimentary basins

    NASA Astrophysics Data System (ADS)

    Kuchle, Juliano; Scherer, Claiton Marlon dos Santos; Born, Christian Correa; Alvarenga, Renata dos Santos; Adegas, Felipe

    2011-04-01

    The Dom João Stage comprises an interval of variable thickness, between 100 and 1200 m, composed of fluvial, eolian and lacustrine deposits of Late Jurassic age, dated mainly on the basis of the lacustrine ostracod fauna (although the top deposits may extend into the Early Cretaceous). These deposits comprise the so-called Afro-Brazilian Depression, initially characterized as containing the Brotas Group of the Recôncavo Basin (which includes the Aliança and the Sergi Formations) and subsequently extended into the Tucano, Jatobá, Camamu, Almada, Sergipe, Alagoas and Araripe Basins in northeastern Brazil, encompassing the study area of this paper. The large occurrence area of the Dom João Stage gives rise to discussions about the depositional connectivity between the basins and the real extent of sedimentation. In the first studies of this stratigraphic interval, the Dom João Stage was strictly associated with the rift phase as an initial stage (in the 1960s-70s), but subsequent analyses considered the Dom João as an intracratonic basin or pre-rift phase, without any relation to the active mechanics of a tectonic syn-rift phase (from the 1980s to the 2000s). The present work developed an evolutionary stratigraphic and tectonic model based on the characterization of depositional sequences, internal flooding surfaces, depositional system arrangements and paleoflow directions. Several outcrops in the onshore basins were used to build composite sections of each basin, comprising facies, architectural elements, depositional systems, stratigraphic and lithostratigraphic frameworks, and paleocurrents. In addition, over a hundred onshore and offshore exploration wells were used (only 21 of which are shown) to map the depositional sequences and generate correlation sections. These show the characteristics and relations of the Dom João Stage in each studied basin, and they were also extended to the Gabon Basin. The results indicate that there were two main phases during the Dom João Stage, in which distinctive sedimentary environments developed, reflecting different depositional system arrangements and diverse paleoflow directions, and in which continuous or compartmented basins developed.

  6. Predicting the effectiveness of depth-based technologies to prevent salmon lice infection using a dispersal model.

    PubMed

    Samsing, Francisca; Johnsen, Ingrid; Stien, Lars Helge; Oppedal, Frode; Albretsen, Jon; Asplin, Lars; Dempster, Tim

    2016-07-01

    Salmon lice are one of the major parasitic problems affecting wild and farmed salmonid species. The planktonic larval stages of these marine parasites can survive for extended periods without a host and are transported long distances by water masses. Salmon lice larvae have limited swimming capacity, but can influence their horizontal transport by vertical positioning. Here, we adapted a coupled biological-physical model to calculate the distribution of farm-produced salmon lice (Lepeophtheirus salmonis) during winter on the southwest coast of Norway. We tested four model simulations to see which best represented empirical data from two sources: (1) observed lice infection levels reported by farms; and (2) experimental data from a vertical exposure experiment where fish were forced to swim at different depths with a lice-barrier technology. The model simulations varied the development time to the infective stage (35 or 50 degree-days) and the presence or absence of temperature-controlled vertical behaviour of the early planktonic (naupliar) stages of the lice. The best model fit occurred with a 35 degree-day development time to the infective stage and temperature-controlled vertical behaviour. We applied this model to predict the effectiveness of depth-based preventive lice-barrier technologies. Both simulated and experimental data revealed that hindering fish from swimming close to the surface efficiently reduced lice infection. Moreover, while our model simulation predicted that this preventive technology is widely applicable, its effectiveness will depend on environmental conditions. Low-salinity surface waters reduce the effectiveness of this technology because salmon lice avoid these conditions and can encounter the fish as they sink deeper in the water column. Correctly parameterized and validated salmon lice dispersal models can predict the impact of preventive approaches to control this parasite and become an essential tool in lice management strategies. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Study on system dynamics of evolutionary mix-game models

    NASA Astrophysics Data System (ADS)

    Gou, Chengling; Guo, Xiaoqian; Chen, Fang

    2008-11-01

    The mix-game model is adapted from the agent-based minority game (MG) model and is used to simulate the real financial market. Unlike in the MG, there are two groups of agents in the mix-game: Group 1 plays a majority game and Group 2 plays a minority game. These two groups of agents have different bounded abilities to process historical information and to track their own performance. In this paper, we modify the mix-game model by giving agents the ability to evolve: if an agent's winning rate is smaller than a threshold, it copies the best strategies held by another agent, and agents repeat such evolution at certain time intervals. Through simulations, this paper finds: (1) the average winning rates of agents in Group 1 and the mean volatilities increase as the thresholds of Group 1 increase; (2) the average winning rates of both groups decrease but the mean volatilities of the system increase as the thresholds of Group 2 increase; (3) the thresholds of Group 2 have a greater impact on system dynamics than the thresholds of Group 1; (4) the characteristics of the system dynamics under different time intervals of strategy change are qualitatively similar to each other, but differ quantitatively; (5) as the time interval of strategy change increases from 1 to 20, the system behaves more and more stably and the performance of agents in both groups also improves.
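
    A minimal Python sketch of an evolutionary mix-game in the spirit of the model above, with simplifications (one strategy per agent rather than a strategy pool, and invented parameter values): Group 1 is rewarded for joining the majority, Group 2 the minority, and agents whose winning rate falls below a threshold periodically copy their group's best strategy.

        import numpy as np

        rng = np.random.default_rng(3)

        def mix_game(n1=72, n2=72, m1=3, m2=6, steps=3000,
                     threshold=0.5, evolve_every=100):
            """Two-group majority/minority game with threshold-driven
            strategy copying every `evolve_every` steps."""
            strat1 = rng.integers(0, 2, (n1, 2 ** m1))   # history -> action
            strat2 = rng.integers(0, 2, (n2, 2 ** m2))
            wins1, wins2 = np.zeros(n1), np.zeros(n2)
            history = rng.integers(0, 2, max(m1, m2)).tolist()
            attendance = []
            for t in range(1, steps + 1):
                k1 = int("".join(map(str, history[-m1:])), 2)
                k2 = int("".join(map(str, history[-m2:])), 2)
                a1, a2 = strat1[:, k1], strat2[:, k2]
                buyers = a1.sum() + a2.sum()
                majority = 1 if buyers * 2 > n1 + n2 else 0
                wins1 += (a1 == majority)        # Group 1: majority game
                wins2 += (a2 != majority)        # Group 2: minority game
                attendance.append(2 * buyers - (n1 + n2))
                history.append(majority)
                if t % evolve_every == 0:        # evolution step
                    for strat, wins in ((strat1, wins1), (strat2, wins2)):
                        best = strat[np.argmax(wins)].copy()
                        strat[wins / t < threshold] = best
            return (np.std(attendance),
                    wins1.mean() / steps, wins2.mean() / steps)

        vol, wr1, wr2 = mix_game()
        print(f"volatility={vol:.1f}  win rate G1={wr1:.2f}  G2={wr2:.2f}")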

  8. Characteristic pulse trains of preliminary breakdown in four isolated small thunderstorms

    NASA Astrophysics Data System (ADS)

    Ma, Dong

    2017-03-01

    Using a low-frequency six-station local network, preliminary breakdown (PB) pulse trains that are not followed by a negative return stroke (RS) and those that are, defined here as PB-type and PB cloud-to-ground (PBCG)-type flashes respectively, are analyzed for the first time on the basis of four isolated small thunderstorms. Of a total of 2155 flashes, only 22 were PB-type, indicating that the number of PB-type flashes is very small. At the early stage, PB-type flashes were observed in all four thunderstorms. At the active stage, PB-type flashes can still occur, while there are few or no negative cloud-to-ground (CG) flashes. However, at the final stage no PB-type flashes occur. At the stage of distinct cell merging or splitting, PB-type flashes are also observed. Based on the 123 PBCG-type flashes, we discuss the percentage of PBCG-type flashes and also analyze the relationships among the electric field (E-field) amplitude of the largest pulse in the PB pulse train normalized to 100 km (PBA), the E-field amplitude of the first return stroke normalized to 100 km (RSA), the time interval between PBA and RSA (PB-RS interval), and the ratio between PBA and RSA (PB-RS ratio). We find that the percentage of PBCG-type flashes is not always dependent on PBA or the PB-RS ratio; the type of thunderstorm may also have an impact on this percentage. None of the PB-RS intervals is less than 20 ms; we speculate that such long PB-RS intervals are a feature of isolated small thunderstorms, but more observations are needed to investigate this question further.

  9. An agent-based model for emotion contagion and competition in online social media

    NASA Astrophysics Data System (ADS)

    Fan, Rui; Xu, Ke; Zhao, Jichang

    2018-04-01

    Recent studies suggest that human emotions diffuse not only in real-world communities but also in online social media. However, a comprehensive model that considers up-to-date findings and multiple online social media mechanisms is still missing. To bridge this vital gap, an agent-based model, which concurrently considers emotion influence and tie-strength preferences, is presented to simulate emotion contagion and competition. Our model reproduces well the patterns observed in the empirical data, such as anger's preference for weak ties, the high vitality of anger-dominated users, the short retweet intervals of angry tweets, and anger's competitiveness in negative events. The comparison with a previously presented baseline model further demonstrates its effectiveness in modeling online emotion contagion. Our model also reveals, surprisingly, that as the proportion of anger approaches that of joy, with a gap of less than 12%, anger will eventually dominate the online social media and collective outrage arrives in cyberspace. The critical gap disclosed here can indeed serve as a warning signal at early stages for outrage control. Our model sheds light on the study of multiple issues regarding emotion contagion and competition by means of computer simulations.
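
    A toy Python sketch, far simpler than the paper's model, of emotion competition with tie-strength preferences: anger transmits more readily over weak ties and joy over strong ties, consistent with the empirical patterns cited above. The graph construction, adoption probabilities and initial ratio are all invented.

        import random

        random.seed(11)

        def contagion(n=400, k=8, steps=20000, p_joy=0.55):
            """Agents hold 'anger' or 'joy'; a random agent adopts a random
            neighbour's emotion with a probability set by the tie strength."""
            neigh = {i: random.sample([j for j in range(n) if j != i], k)
                     for i in range(n)}
            strength = {(i, j): random.random()
                        for i in neigh for j in neigh[i]}
            state = ["joy" if random.random() < p_joy else "anger"
                     for _ in range(n)]
            for _ in range(steps):
                i = random.randrange(n)
                j = random.choice(neigh[i])
                s = strength[(i, j)]
                # anger spreads over weak ties (low s), joy over strong ties
                p = (1.0 - s) if state[j] == "anger" else s
                if random.random() < p:
                    state[i] = state[j]
            return state.count("anger") / n

        print(contagion())        # final share of anger-dominated agents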

  10. A generalized weight-based particle-in-cell simulation scheme

    NASA Astrophysics Data System (ADS)

    Lee, W. W.; Jenkins, T. G.; Ethier, S.

    2011-03-01

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.

  11. A note on the kappa statistic for clustered dichotomous data.

    PubMed

    Zhou, Ming; Yang, Zhao

    2014-06-30

    The kappa statistic is widely used to assess the agreement between two raters. Motivated by a simulation-based cluster bootstrap method for calculating the variance of the kappa statistic for clustered physician-patient dichotomous data, we investigate its special correlation structure and develop a new, simple and efficient data generation algorithm. For clustered physician-patient dichotomous data, based on the delta method and this special covariance structure, we propose a semi-parametric variance estimator for the kappa statistic. An extensive Monte Carlo simulation study is performed to evaluate the performance of the new proposal and five existing methods with respect to the empirical coverage probability, root-mean-square error, and average width of the 95% confidence interval for the kappa statistic. The variance estimator ignoring the dependence within a cluster is generally inappropriate, and the variance estimators from the new proposal, bootstrap-based methods, and the sampling-based delta method perform reasonably well for at least a moderately large number of clusters (e.g., K ⩾ 50). The new proposal and the sampling-based delta method provide convenient tools for efficient computation and non-simulation-based alternatives to the existing bootstrap-based methods. Moreover, the new proposal has acceptable performance even when the number of clusters is as small as K = 25. To illustrate the practical application of all the methods, one psychiatric research data set and two simulated clustered physician-patient dichotomous data sets are analyzed. Copyright © 2014 John Wiley & Sons, Ltd.
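
    A hedged Python sketch of the cluster bootstrap discussed above: whole physician clusters are resampled with replacement, so the within-cluster correlation is respected, and a percentile confidence interval for Cohen's kappa is formed. The toy data generator is invented, and the proposed semi-parametric estimator itself is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(5)

        def kappa(r1, r2):
            """Cohen's kappa for two dichotomous raters."""
            po = np.mean(r1 == r2)
            p1, p2 = r1.mean(), r2.mean()
            pe = p1 * p2 + (1 - p1) * (1 - p2)
            return (po - pe) / (1 - pe)

        def cluster_bootstrap_ci(clusters, B=2000, alpha=0.05):
            """Percentile CI for kappa, resampling whole clusters."""
            stats = []
            for _ in range(B):
                pick = rng.integers(0, len(clusters), len(clusters))
                r1 = np.concatenate([clusters[i][0] for i in pick])
                r2 = np.concatenate([clusters[i][1] for i in pick])
                stats.append(kappa(r1, r2))
            return np.percentile(stats, [100 * alpha / 2,
                                         100 * (1 - alpha / 2)])

        # toy clustered data: 30 physicians, each with several patients whose
        # two ratings share a physician-level base rate (the correlation)
        clusters = []
        for _ in range(30):
            m = rng.integers(5, 15)
            truth = rng.random(m) < rng.uniform(0.2, 0.8)
            r1 = (truth ^ (rng.random(m) < 0.1)).astype(int)  # 10% rater error
            r2 = (truth ^ (rng.random(m) < 0.1)).astype(int)
            clusters.append((r1, r2))

        print(cluster_bootstrap_ci(clusters))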

  12. A CellML simulation compiler and code generator using ODE solving schemes

    PubMed Central

    2012-01-01

    Models written in description languages such as CellML are becoming a popular solution for handling complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach in which the system generates, in the first stage, the set of equations associating the physiological model variable values at a certain time t with the values at t + Δt. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. For the FHN model simulation, the results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first-order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
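
    The two-stage idea, first building the update equations that relate variable values at time t to values at t + Δt under a chosen solving scheme and then emitting simulation code, can be sketched in a few lines of Python. This toy generator handles only explicit Euler and Heun schemes, ignores CellML parsing and GPU output entirely, and its naive string substitution assumes variable names are not substrings of one another.

        def generate_stepper(rhs_exprs, scheme="euler"):
            """Stage 1: build update equations as text for the chosen scheme.
            Stage 2: emit runnable Python source implementing one time step.
            `rhs_exprs` maps variable names to right-hand-side expressions."""
            names = list(rhs_exprs)
            lines = [f"def step({', '.join(names)}, dt):"]
            lines += [f"    d{v} = {e}" for v, e in rhs_exprs.items()]
            if scheme == "euler":
                ret = ", ".join(f"{v} + dt * d{v}" for v in names)
            elif scheme == "heun":                  # explicit trapezoidal rule
                for v in names:                     # predictor values
                    lines.append(f"    {v}_p = {v} + dt * d{v}")
                for v, e in rhs_exprs.items():      # slopes at the predictor
                    for u in names:
                        e = e.replace(u, u + "_p")
                    lines.append(f"    d{v}_p = {e}")
                ret = ", ".join(f"{v} + 0.5 * dt * (d{v} + d{v}_p)"
                                for v in names)
            else:
                raise ValueError(scheme)
            return "\n".join(lines + [f"    return {ret}"]) + "\n"

        # FitzHugh-Nagumo-like system, echoing the paper's FHN evaluation
        src = generate_stepper({"v": "v - v**3 / 3 - w + 0.5",
                                "w": "0.08 * (v + 0.7 - 0.8 * w)"},
                               scheme="heun")
        exec(src)                         # compile the generated stepper
        v, w = -1.0, 1.0
        for _ in range(1000):
            v, w = step(v, w, 0.01)
        print(v, w)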

  13. Improving a stage forecasting Muskingum model by relating local stage and remote discharge

    NASA Astrophysics Data System (ADS)

    Barbetta, S.; Moramarco, T.; Melone, F.; Brocca, L.

    2009-04-01

    Following the principle of parsimony in parameters, simplified models for flood forecasting based only on flood routing have been developed for flood-prone sites located downstream of a gauged station, at a distance allowing an appropriate forecasting lead-time. In this context, the Muskingum model can be a useful tool. However, critical points in hydrological routing are the representation of the lateral inflow contribution and the knowledge of stage-discharge relationships. As regards the former, O'Donnell (O'Donnell, T., 1985. A direct three-parameter Muskingum procedure incorporating lateral inflow, Hydrol. Sci. J., 30[4/12], 479-496) proposed a three-parameter Muskingum procedure assuming the lateral inflows proportional to the contribution entering upstream. Using this approach, Franchini and Lamberti (Franchini, M. & Lamberti, P., 1994. A flood routing Muskingum type simulation and forecasting model based on level data alone, Water Resour. Res., 30[7], 2183-2196) presented a simple Muskingum-type model to provide forecast water levels at the downstream end by selecting a routing time interval, and hence a forecasting lead-time, that allows the forecast stage to be expressed as a function of observed quantities only. Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., 2006. A real-time stage Muskingum forecasting model for a site without rating curve, Hydrol. Sci. J., 51[1], 66-82) enhanced the modeling scheme by incorporating a procedure for adapting the parameter linked to lateral inflows. This last model, called STAFOM (STAge FOrecasting Model), was also extended to a schematization of two connected river branches in order to significantly improve the forecasting lead-time. The STAFOM model provided satisfactory results for most of the analysed flood events observed in different river reaches in the Upper-Middle Tiber River basin in Central Italy. However, the analysis highlighted that the stage forecast should be enhanced when sudden modifications occur in the upstream and downstream hydrographs recorded in real time. Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., 2005. Relating local stage and remote discharge with significant lateral inflow, J. Hydrol. Engng ASCE, 10[1], 58-69) showed that, for any flood condition at the ends of a river reach, a direct proportionality between the upstream and downstream mean velocity can be found. This insight was the basis for developing the Rating Curve Model (RCM), which also accommodates significant lateral inflow contributions and relates the local hydraulic conditions to those at a remote gauged section without using a flood routing procedure and without the need for a rating curve at the local site. Therefore, to improve the STAFOM performance, mainly for highly varying flood conditions, the model has been modified here by coupling it with a procedure based on the RCM approach. Several flood events that occurred along different instrumented river reaches of the Upper Tiber River basin have been used as case studies. Results showed that the new model, named STAFOM-RCM, besides improving the stage forecast accuracy in terms of error on peak stage, Nash-Sutcliffe efficiency coefficient and coefficient of persistence, allowed a longer lead time to be used, thus avoiding the cascade schematization of two river branches, in which fluctuations in the stage forecast occur more frequently.
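
    For readers unfamiliar with the underlying routing scheme, here is a short Python sketch of Muskingum routing with the three-parameter treatment of lateral inflow (taken proportional to upstream inflow, after O'Donnell, 1985). It is a generic textbook implementation, not the STAFOM or STAFOM-RCM code, and the flood wave and parameter values are synthetic.

        import numpy as np

        def muskingum_route(inflow, K=3.0, X=0.2, alpha=0.1, dt=1.0):
            """Muskingum routing O(t+1) = c0*I(t+1) + c1*I(t) + c2*O(t),
            with lateral inflow modeled as alpha times the upstream inflow.
            K (storage constant) and dt share the same time unit; X is the
            weighting factor."""
            denom = 2 * K * (1 - X) + dt
            c0 = (dt - 2 * K * X) / denom
            c1 = (dt + 2 * K * X) / denom
            c2 = (2 * K * (1 - X) - dt) / denom
            i = (1.0 + alpha) * np.asarray(inflow, dtype=float)
            out = np.empty_like(i)
            out[0] = i[0]                    # assume an initial steady state
            for t in range(1, len(i)):
                out[t] = c0 * i[t] + c1 * i[t - 1] + c2 * out[t - 1]
            return out

        t = np.arange(0.0, 48.0, 1.0)        # hours
        inflow = 50 + 250 * np.exp(-0.5 * ((t - 12) / 4.0) ** 2)   # m3/s
        outflow = muskingum_route(inflow)
        print(f"peak in: {inflow.max():.0f} m3/s  "
              f"peak out: {outflow.max():.0f} m3/s  "
              f"lag: {outflow.argmax() - inflow.argmax()} h")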

  14. Causal Mediation Analysis for the Cox Proportional Hazards Model with a Smooth Baseline Hazard Estimator.

    PubMed

    Wang, Wei; Albert, Jeffrey M

    2017-08-01

    An important problem within the social, behavioral, and health sciences is how to partition an exposure effect (e.g. treatment or risk factor) among specific pathway effects and to quantify the importance of each pathway. Mediation analysis based on the potential outcomes framework is an important tool to address this problem and we consider the estimation of mediation effects for the proportional hazards model in this paper. We give precise definitions of the total effect, natural indirect effect, and natural direct effect in terms of the survival probability, hazard function, and restricted mean survival time within the standard two-stage mediation framework. To estimate the mediation effects on different scales, we propose a mediation formula approach in which simple parametric models (fractional polynomials or restricted cubic splines) are utilized to approximate the baseline log cumulative hazard function. Simulation study results demonstrate low bias of the mediation effect estimators and close-to-nominal coverage probability of the confidence intervals for a wide range of complex hazard shapes. We apply this method to the Jackson Heart Study data and conduct sensitivity analysis to assess the impact on the mediation effects inference when the no unmeasured mediator-outcome confounding assumption is violated.

  15. Estimated flood-inundation mapping for the Lower Blue River in Kansas City, Missouri, 2003-2005

    USGS Publications Warehouse

    Kelly, Brian P.; Rydlund, Jr., Paul H.

    2006-01-01

    The U.S. Geological Survey, in cooperation with the city of Kansas City, Missouri, began a study in 2003 of the lower Blue River in Kansas City, Missouri, from Gregory Boulevard to the mouth at the Missouri River to determine the estimated extent of flood inundation in the Blue River valley from flooding on the lower Blue River and from Missouri River backwater. Much of the lower Blue River flood plain is covered by industrial development. Rapid development in the upper end of the watershed has increased the volume of runoff, and thus the discharge of flood events for the Blue River. Modifications to the channel of the Blue River began in late 1983 in response to the need for flood control. By 2004, the channel had been widened and straightened from the mouth to immediately downstream from Blue Parkway to convey a 30-year flood. A two-dimensional depth-averaged flow model was used to simulate flooding within a 2-mile study reach of the Blue River between 63rd Street and Blue Parkway. Hydraulic simulation of the study reach provided information for the design and performance of proposed hydraulic structures and channel improvements and for the production of estimated flood-inundation maps and maps representing an areal distribution of water velocity, both magnitude and direction. Flood profiles of the Blue River were developed between Gregory Boulevard and 63rd Street from stage elevations calculated from high water marks from the flood of May 19, 2004; between 63rd Street and Blue Parkway from two-dimensional hydraulic modeling conducted for this study; and between Blue Parkway and the mouth from an existing one-dimensional hydraulic model by the U.S. Army Corps of Engineers. Twelve inundation maps were produced at 2-foot intervals for Blue Parkway stage elevations from 750 to 772 feet. Each map is associated with National Weather Service flood-peak forecast locations at 63rd Street, Blue Parkway, Stadium Drive, U.S. Highway 40, 12th Street, and the Missouri River at the Hannibal railroad bridge in Kansas City. The National Weather Service issues peak-stage forecasts for these locations during times of flooding. Missouri River backwater inundation profiles were developed using interpolated Missouri River stage elevations at the mouth of the Blue River. Twelve backwater-inundation maps were produced at 2-foot intervals for the mouth of the Blue River from 730.9 to 752.9 feet. To provide public access to the information presented in this report, a World Wide Web site (http://mo.water.usgs.gov/indep/kelly/blueriver/index.htm) was created that displays the results of two-dimensional modeling between 63rd Street and Blue Parkway, estimated flood-inundation maps, estimated backwater-inundation maps, and the latest gage heights and National Weather Service stage forecast for each forecast location within the study area. In addition, the full text of this report, all tables, and all plates are available for download at http://pubs.water.usgs.gov/sir2006-5089.

  16. Micro finite element analysis of dental implants under different loading conditions.

    PubMed

    Marcián, Petr; Wolff, Jan; Horáčková, Ladislava; Kaiser, Jozef; Zikmund, Tomáš; Borák, Libor

    2018-05-01

    Osseointegration is paramount for the longevity of dental implants and is significantly influenced by biomechanical stimuli. The aim of the present study was to assess the micro-strain and displacement induced by loaded dental implants at different stages of osseointegration using finite element analysis (FEA). Computational models of two mandible segments with different trabecular densities were constructed using microCT data. Three different implant loading directions and two osseointegration stages were considered in the stress-strain analysis of the bone-implant assembly. The bony segments were analyzed using two approaches. The first approach was based on Mechanostat strain intervals and the second approach was based on tensile/compression yield strains. The results of this study revealed that bone surrounding dental implants is critically strained in cases when only partial osseointegration is present and when an implant is loaded by buccolingual forces. In such cases, implants also encounter high stresses. Displacements of partially osseointegrated implants are significantly larger than those of fully osseointegrated implants. It can be concluded that partial osseointegration is a potential risk in terms of implant longevity. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Serial recall of colors: Two models of memory for serial order applied to continuous visual stimuli.

    PubMed

    Peteranderl, Sonja; Oberauer, Klaus

    2018-01-01

    This study investigated the effects of serial position and temporal distinctiveness on serial recall of simple visual stimuli. Participants observed lists of five colors presented at varying, unpredictably ordered interitem intervals, and their task was to reproduce the colors in their order of presentation by selecting colors on a continuous-response scale. To control for the possibility of verbal labeling, articulatory suppression was required in one of two experimental sessions. The predictions were derived through simulation from two computational models of serial recall: SIMPLE represents the class of temporal-distinctiveness models, whereas SOB-CS represents event-based models. According to temporal-distinctiveness models, items that are temporally isolated within a list are recalled more accurately than items that are temporally crowded. In contrast, event-based models assume that the time intervals between items do not affect recall performance per se, although free time following an item can improve memory for that item because of extended time for encoding. The experimental and the simulated data were fit to an interference measurement model to measure the tendency to confuse items with other items nearby on the list (the locality constraint) in people as well as in the models. The continuous-reproduction performance showed a pronounced primacy effect with no recency, as well as some evidence for transpositions obeying the locality constraint. Though not entirely conclusive, this evidence favors event-based models over a role for temporal distinctiveness. There was also a strong detrimental effect of articulatory suppression, suggesting that verbal codes can be used to support serial-order memory of simple visual stimuli.

  18. Aerodynamic Design Study of Advanced Multistage Axial Compressor

    NASA Technical Reports Server (NTRS)

    Larosiliere, Louis M.; Wood, Jerry R.; Hathaway, Michael D.; Medd, Adam J.; Dang, Thong Q.

    2002-01-01

    As a direct response to the need for further performance gains from current multistage axial compressors, an investigation of advanced aerodynamic design concepts that will lead to compact, high-efficiency, and wide-operability configurations is being pursued. Part I of this report describes the projected level of technical advancement relative to the state of the art and quantifies it in terms of basic aerodynamic technology elements of current design systems. A rational enhancement of these elements is shown to lead to a substantial expansion of the design and operability space. Aerodynamic design considerations for a four-stage core compressor intended to serve as a vehicle to develop, integrate, and demonstrate aerotechnology advancements are discussed. This design is biased toward high efficiency at high loading. Three-dimensional blading and spanwise tailoring of vector diagrams guided by computational fluid dynamics (CFD) are used to manage the aerodynamics of the highly loaded endwall regions. Certain deleterious flow features, such as leakage-vortex-dominated endwall flow and strong shock-boundary-layer interactions, were identified and targeted for improvement. However, the preliminary results were encouraging and the front two stages were extracted for further aerodynamic trimming using a three-dimensional inverse design method described in part II of this report. The benefits of the inverse design method are illustrated by developing an appropriate pressure-loading strategy for transonic blading and applying it to reblade the rotors in the front two stages of the four-stage configuration. Multistage CFD simulations based on the average passage formulation indicated an overall efficiency potential far exceeding current practice for the front two stages. Results of the CFD simulation at the aerodynamic design point are interrogated to identify areas requiring additional development. In spite of the significantly higher aerodynamic loadings, advanced CFD-based tools were able to effectively guide the design of a very efficient axial compressor under state-of-the-art aeromechanical constraints.

  19. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.
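
    As a rough sketch of this two-stage idea, the snippet below fakes stage 1 (independent per-animal posterior draws for a single movement parameter, which in practice would come from parallel individual fits) and then runs a secondary MCMC that proposes each individual's value from its stored stage-1 pool while Gibbs-updating a normal population model theta_j ~ N(mu, tau^2). With flat first-stage priors the likelihood terms cancel from the acceptance ratio. This illustrates the resampling construction only; it is not the authors' code, and all numbers are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stage 1 (parallelizable): posterior draws per individual, faked here.
        n_ind, n_draws = 12, 2000
        true_theta = rng.normal(1.5, 0.4, n_ind)
        stage1 = [rng.normal(t, 0.15, n_draws) for t in true_theta]

        # Stage 2: unsupervised secondary MCMC over the population model.
        n_iter = 5000
        theta = np.array([s.mean() for s in stage1])
        mu, tau = theta.mean(), theta.std()
        mu_trace = np.empty(n_iter)
        for it in range(n_iter):
            for j in range(n_ind):
                prop = rng.choice(stage1[j])       # propose from stage-1 pool
                log_r = (-(prop - mu) ** 2 + (theta[j] - mu) ** 2) / (2 * tau ** 2)
                if np.log(rng.uniform()) < log_r:  # prior-ratio acceptance
                    theta[j] = prop
            # Gibbs-style updates of the population parameters (vague priors).
            mu = rng.normal(theta.mean(), tau / np.sqrt(n_ind))
            tau = np.sqrt(((theta - mu) ** 2).sum() / rng.chisquare(n_ind))
            mu_trace[it] = mu

        print("posterior mean of mu:", mu_trace[n_iter // 2:].mean())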

  20. Simulating transport of nitrogen and phosphorus in a Cambisol after natural and simulated intense rainfall.

    PubMed

    Kaufmann, Vander; Pinheiro, Adilson; Castro, Nilza Maria dos Reis

    2014-05-01

    Intense rainfall adversely affects agricultural areas, causing transport of pollutants. Physically-based hydrological models to simulate flows of water and chemical substances can be used to help decision-makers adopt measures which reduce such problems. The purpose of this paper is to evaluate the performance of the SWAP and ANIMO models for simulating transport of water, nitrate and phosphorus nutrients, during intense rainfall events generated by a simulator, and during natural rainfall, on a volumetric drainage lysimeter. The models were calibrated and verified using daily time series and simulated rainfall measured at 10-minute intervals. For daily time-intervals, the Nash-Sutcliffe coefficient was 0.865 for the calibration period and 0.805 for verification. Under simulated rainfall, these coefficients were greater than 0.56. The pattern of both nitrate and phosphate concentrations in daily drainage flow under simulated rainfall was acceptably reproduced by the ANIMO model. In the simulated rainfall, loads of nitrate transported in surface runoff varied between 0.08 and 8.46 kg ha(-1), and in drainage from the lysimeter, between 2.44 and 112.57 kg ha(-1). In the case of phosphate, the loads transported in surface runoff varied between 0.002 and 0.504 kg ha(-1), and in drainage, between 0.005 and 1.107 kg ha(-1). The use of the two models SWAP and ANIMO shows the magnitudes of nitrogen and phosphorus fluxes transported by natural and simulated intense rainfall in an agricultural area with different soil management procedures, as required by decision makers. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Personalized glucose-insulin model based on signal analysis.

    PubMed

    Goede, Simon L; de Galan, Bastiaan E; Leow, Melvin Khee Shing

    2017-04-21

    Glucose plasma measurements for diabetes patients are generally presented as a glucose concentration-time profile with 15-60 min time intervals. This limited resolution obscures detailed dynamic events of glucose appearance and metabolism. Measurement intervals of 15 min or more could contribute to imperfections in present diabetes treatment. High-resolution data from mixed meal tolerance tests (MMTT) for 24 type 1 and type 2 diabetes patients were used in our present modeling. We introduce a model based on the physiological properties of transport, storage and utilization. This logistic approach follows the principles of electrical network analysis and signal processing theory. The method mimics the physiological equivalent of glucose homeostasis, comprising meal ingestion and absorption via the gastrointestinal tract (GIT) through to the endocrine nexus between the liver and the pancreatic alpha and beta cells. This model demystifies the metabolic 'black box' by enabling in silico simulations and fitting of individual responses to clinical data. Five-minute-interval MMTT data measured from diabetic subjects yield two independent model parameters that characterize the complete glucose system response at a personalized level. From the individual data measurements, we obtain a model which can be analyzed with a standard electrical network simulator for diagnostics and treatment optimization. The insulin dosing time scale can then be accurately adjusted to match the individual requirements of characterized diabetic patients without the physical burden of treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pei-Chun; Chen, Yen-Ching; Research Center for Gene, Environment, and Human Health, College of Public Health, National Taiwan University, Taiwan

    Purpose: To identify germline polymorphisms to predict concurrent chemoradiation therapy (CCRT) response in esophageal cancer patients. Materials and Methods: A total of 139 esophageal cancer patients treated with CCRT (cisplatin-based chemotherapy combined with 40 Gy of irradiation) and subsequent esophagectomy were recruited at the National Taiwan University Hospital between 1997 and 2008. After excluding confounding factors (i.e., females and patients aged ≥70 years), 116 patients were enrolled to identify single nucleotide polymorphisms (SNPs) associated with specific CCRT responses. Genotyping arrays and mass spectrometry were used sequentially to determine germline polymorphisms from blood samples. These polymorphisms remain stable throughout disease progression, unlike somatic mutations from tumor tissues. Two-stage design and additive genetic models were adopted in this study. Results: From the 26 SNPs identified in the first stage, 2 SNPs were found to be significantly associated with CCRT response in the second stage. Single nucleotide polymorphism rs16863886, located between SGPP2 and FARSB on chromosome 2q36.1, was significantly associated with a 3.93-fold increase in pathologic complete response to CCRT (95% confidence interval 1.62-10.30) under additive models. Single nucleotide polymorphism rs4954256, located in ZRANB3 on chromosome 2q21.3, was associated with a 3.93-fold increase in pathologic complete response to CCRT (95% confidence interval 1.57-10.87). The predictive accuracy for CCRT response was 71.59% with these two SNPs combined. Conclusions: This is the first study to identify germline polymorphisms with a high accuracy for predicting CCRT response in the treatment of esophageal cancer.

  3. River water quality management considering agricultural return flows: application of a nonlinear two-stage stochastic fuzzy programming.

    PubMed

    Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam

    2015-04-01

    In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.

  4. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
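
    A minimal sketch of this idea under assumed details (normally distributed inputs and an arbitrary stand-in measurement equation y = x1/x2, not the authors' thermistor case): the outer stage draws plausible distribution parameters from their finite-sample sampling distributions, and the inner stage propagates them through the measurement equation; pooling the draws widens the interval relative to treating the sample estimates as exact.

        import numpy as np

        rng = np.random.default_rng(1)

        def draw_params(sample):
            """One plausible (mu, sigma) for a normal input, given its sample."""
            n, m, s = len(sample), np.mean(sample), np.std(sample, ddof=1)
            sigma = s * np.sqrt((n - 1) / rng.chisquare(n - 1))
            mu = rng.normal(m, sigma / np.sqrt(n))
            return mu, sigma

        x1_obs = rng.normal(10.0, 0.5, size=8)   # small calibration samples
        x2_obs = rng.normal(2.0, 0.1, size=8)

        outer, inner = 2000, 500
        y = np.empty(outer * inner)
        for i in range(outer):                   # stage 1: parameter uncertainty
            mu1, s1 = draw_params(x1_obs)
            mu2, s2 = draw_params(x2_obs)
            x1 = rng.normal(mu1, s1, inner)      # stage 2: propagation
            x2 = rng.normal(mu2, s2, inner)
            y[i * inner:(i + 1) * inner] = x1 / x2

        lo, hi = np.percentile(y, [2.5, 97.5])
        print(f"95% interval for y: [{lo:.3f}, {hi:.3f}]")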

  5. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of ‘quakes’ and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution for the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^(-1/2) when s ≫ 1 and the ratio t/s is fixed, in agreement with the numerical simulations. The same process with a reset impedes the aging phenomenon beyond a typical time scale defined by the reset parameter.
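
    The mechanism is straightforward to simulate: keep the sorted interior cut points, and at each step draw a uniform cut and merge everything to its right into a single fragment. A toy sketch (not the paper's code) that reproduces the slow, logarithmic growth of the fragment number and the pile-up of small fragments near the origin:

        import random

        def simulate(steps, rng=random.Random(42)):
            cuts = []                              # sorted cut points in (0, 1)
            for _ in range(steps):
                u = rng.random()
                cuts = [c for c in cuts if c < u]  # right side -> one fragment
                cuts.append(u)                     # the new cut is a "record"
            return cuts

        cuts = simulate(10_000)
        print("number of fragments:", len(cuts) + 1)   # grows ~ log(steps)
        print("leftmost cut points:", [round(c, 6) for c in cuts[:5]])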

  6. Pulmonary 3 T MRI with ultrashort TEs: influence of ultrashort echo time interval on pulmonary functional and clinical stage assessments of smokers.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Seki, Shinichiro; Obara, Makoto; van Cauteren, Marc; Takahashi, Masaya; Sugimura, Kazuro

    2014-04-01

    To assess the influence of ultrashort TE (UTE) intervals on pulmonary magnetic resonance imaging (MRI) with UTEs (UTE-MRI) for pulmonary functional loss assessment and clinical stage classification of smokers. A total of 60 consecutive smokers (43 men and 17 women; mean age 70 years) with and without COPD underwent thin-section multidetector row computed tomography (MDCT), UTE-MRI, and pulmonary functional measurements. For each smoker, UTE-MRI was performed with three different UTE intervals (UTE-MRI A: 0.5 msec, UTE-MRI B: 1.0 msec, UTE-MRI C: 1.5 msec). Using the GOLD guidelines, the subjects were classified as "smokers without COPD," "mild COPD," "moderate COPD," and "severe or very severe COPD." The mean T2* value from each UTE-MRI and the CT-based functional lung volume (FLV) were then correlated with pulmonary function test results. Finally, Fisher's PLSD test was used to evaluate differences in each index among the four clinical stages. Each index correlated significantly with pulmonary function test results (P < 0.05). CT-based FLV and mean T2* values obtained from UTE-MRI A and B showed significant differences among all groups except between the "smokers without COPD" and "mild COPD" groups (P < 0.05). UTE-MRI has potential for the management of smokers, and the UTE interval is suggested to be an important parameter in this setting. Copyright © 2013 Wiley Periodicals, Inc.

  7. Parallel satellite orbital situational problems solver for space missions design and control

    NASA Astrophysics Data System (ADS)

    Atanassov, Atanas Marinov

    2016-11-01

    Solving scientific problems for space applications demands observations, measurements, or active experiments during time intervals in which specific geometric and physical conditions are fulfilled. Determining the time intervals in which satellite instruments work optimally is therefore an important part of every stage of the preparation and realization of space missions. Elaborating a universal, flexible and robust approach to situation analysis that is easily portable to new satellite missions is significant for reducing mission preparation times and costs. Each situational problem may be based on one or more situational conditions. Simultaneously solving situational problems of different kinds, based on different numbers and types of conditions, each satisfied on different segments of the satellite orbit, requires irregular calculations. Three formal approaches are presented. The first relates to the description of situational problems, and allows flexibility in assembling situational problems and representing them in computer memory. The second is the development of a situational problem solver organized as a processor that executes specific code for each situational condition. The third relates to parallelization of the solver using threads and dynamic scheduling based on a 'pool of threads' abstraction, which ensures a good load balance. The developed solver is intended for incorporation into multi-physics, multi-satellite space mission design and simulation tools.
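
    As a toy sketch of the parallelization idea only (invented condition functions and time grid, not the author's solver), the snippet below evaluates several "situational conditions" over a day of epochs using a thread pool, then extracts the epochs where all conditions hold:

        from concurrent.futures import ThreadPoolExecutor
        import math

        def in_sunlight(t):
            # Placeholder conditions; real ones would use orbital geometry
            # and physical field models.
            return math.sin(t / 900.0) > 0.0

        def over_ground_station(t):
            return math.cos(t / 3000.0) > 0.95

        CONDITIONS = {"sunlight": in_sunlight, "station_pass": over_ground_station}

        def check_step(t):
            """Evaluate every registered condition at epoch t (seconds)."""
            return t, {name: cond(t) for name, cond in CONDITIONS.items()}

        times = range(0, 86_400, 60)          # one day at 1-minute resolution
        with ThreadPoolExecutor(max_workers=8) as pool:
            results = dict(pool.map(check_step, times))

        ok = [t for t, flags in sorted(results.items()) if all(flags.values())]
        print(f"{len(ok)} epochs satisfy all conditions simultaneously")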

  8. Auditorium acoustics evaluation based on simulated impulse response

    NASA Astrophysics Data System (ADS)

    Wu, Shuoxian; Wang, Hongwei; Zhao, Yuezhe

    2004-05-01

    The impulse responses and other acoustical parameters of the Huangpu Teenager Palace in Guangzhou were measured. In parallel, acoustical simulation and auralization based on the ODEON software were carried out. A comparison between the parameters obtained from computer simulation and from measurement is given. This case study shows that auralization based on computer simulation can be used to predict the acoustical quality of a hall at the design stage.

  9. Myospherulosis following sinus surgery: pathological curiosity or important clinical entity?

    PubMed

    Sindwani, Raj; Cohen, Jacob T; Pilch, Ben Z; Metson, Ralph B

    2003-07-01

    Myospherulosis is a foreign body reaction to lipid material used on nasal packing at the conclusion of sinus surgery. This reaction has been associated with postoperative adhesion formation. The purpose of the study was to determine whether the occurrence of myospherulosis has an adverse effect on clinical outcome following sinus surgery. Case-control study at an academic medical center. Thirty-two cases of myospherulosis were identified in 28 patients (4 with bilateral disease) who underwent sinus surgery between 1989 and 1999. Cases were staged according to histological and radiological grading systems. Clinical outcome was compared with a control group of 28 patients who had similar surgery during the same time period. Patients with myospherulosis were found to have a significantly higher likelihood of developing postoperative adhesions compared with control subjects (50% vs. 18%, respectively [P =.023]). Histological stage, based on the extent of lipid vacuoles and spherules (erythrocyte remnants) present in the surgical specimen, was found to correlate with disease severity based on preoperative sinus computed tomography staging (P =.009). Patients with myospherulosis tended to have a shorter interval between their last two surgeries than did control subjects (2.2 +/- 2.1 vs. 4.5 +/- 7.1 y, respectively [P =.086]). Patient age, sex, comorbid conditions, CT stage, and number of previous operations were not predictive for the occurrence of myospherulosis. Patients who develop myospherulosis from lipid-based packing material used during sinus surgery are more likely to form postoperative adhesions. These adhesions appear to be clinically relevant and may hasten the need for revision surgery.

  10. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented-reality applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion-tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame skipping.

  11. GTRF: a game theory approach for regulating node behavior in real-time wireless sensor networks.

    PubMed

    Lin, Chi; Wu, Guowei; Pirozmand, Poria

    2015-06-04

    The selfish behaviors of nodes (or selfish nodes) cause packet loss, network congestion or even void regions in real-time wireless sensor networks, which greatly decrease the network performance. Previous methods have focused on detecting selfish nodes or avoiding selfish behavior, but little attention has been paid to regulating selfish behavior. In this paper, a Game Theory-based Real-time & Fault-tolerant (GTRF) routing protocol is proposed. GTRF is composed of two stages. In the first stage, a game theory model named VA is developed to regulate nodes' behaviors while balancing energy cost. In the second stage, a jumping transmission method is adopted, which ensures that real-time packets can be successfully delivered to the sink before a specific deadline. We prove that GTRF theoretically meets real-time requirements with low energy cost. Finally, extensive simulations are conducted to demonstrate the performance of our scheme. Simulation results show that GTRF not only balances the energy cost of the network, but also prolongs network lifetime.

  12. Optimization Studies of the FERMI at ELETTRA FEL Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Ninno, Giovanni; Fawley, William M.; Penn, Gregory E.

    The FERMI at ELETTRA project at Sincrotrone Trieste involves two FELs, each based upon the principle of seeded harmonic generation and using the existing ELETTRA injection linac at 1.2 GeV beam energy. Scheduled to be completed in 2008, FEL-1 will operate in the 40-100 nm wavelength range and will involve one stage of harmonic up-conversion. The second undulator line, FEL-2, will begin operation two years later in the 10-40 nm wavelength range and use two harmonic stages operating as a cascade. The FEL design assumes continuous wavelength tunability over the full wavelength range, and polarization tunability of the output radiation including vertical or horizontal linear as well as helical polarization. The design considers focusing properties and segmentation of realizable undulators and available input seed lasers. We review the studies that have led to our current design. We present results of simulations using the GENESIS and GINGER simulation codes, including studies of various shot-to-shot fluctuations and undulator errors. Findings for the expected output radiation in terms of power and transverse and longitudinal coherence are reported.

  13. Development and test of different methods to improve the description of NOx emissions in staged combustion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, A.; Kilpinen, P.; Hupa, M.

    1996-01-01

    Two methods to improve the modeling of NOx emissions in numerical flow simulation of combustion are investigated. The models used are a reduced mechanism for nitrogen chemistry in methane combustion and a new model based on regression analysis of perfectly stirred reactor simulations using detailed comprehensive reaction kinetics. The applicability of the methods to numerical flow simulation of practical furnaces, especially in the near-burner region, is tested against experimental data from a pulverized-coal-fired single-burner furnace. The results are also compared to those obtained using a commonly used description for the overall reaction rate of NO.

  14. End-To-End Simulation of Launch Vehicle Trajectories Including Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Albertson, Cindy W.; Tartabini, Paul V.; Pamadi, Bandu N.

    2012-01-01

    The development of methodologies, techniques, and tools for analysis and simulation of stage separation dynamics is critically needed for successful design and operation of multistage reusable launch vehicles. As a part of this activity, the Constraint Force Equation (CFE) methodology was developed and implemented in the Program to Optimize Simulated Trajectories II (POST2). The objective of this paper is to demonstrate the capability of POST2/CFE to simulate a complete end-to-end mission. The vehicle configuration selected was the Two-Stage-To-Orbit (TSTO) Langley Glide Back Booster (LGBB) bimese configuration, an in-house concept consisting of a reusable booster and an orbiter having identical outer mold lines. The proximity and isolated aerodynamic databases used for the simulation were assembled using wind-tunnel test data for this vehicle. POST2/CFE simulation results are presented for the entire mission, from lift-off, through stage separation, orbiter ascent to orbit, and booster glide back to the launch site. Additionally, POST2/CFE stage separation simulation results are compared with results from industry standard commercial software used for solving dynamics problems involving multiple bodies connected by joints.

  15. Two-Stage Bayesian Model Averaging in Endogenous Variable Models

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471

  16. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, leading, for example, to a 95% confidence interval whose actual coverage was as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
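
    For reference, the baseline method under discussion, the Fisher z' interval, takes only a few lines; the robust alternatives (Spearman rank-order, RIN transformation, bootstrap variants) would replace or wrap this. A minimal sketch:

        import math
        from statistics import NormalDist

        def fisher_z_ci(r, n, conf=0.95):
            """Confidence interval for a correlation via Fisher's z' transform."""
            z = math.atanh(r)                    # z' = 0.5 * ln((1 + r)/(1 - r))
            se = 1.0 / math.sqrt(n - 3)          # approximate standard error
            zcrit = NormalDist().inv_cdf(0.5 + conf / 2.0)
            return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

        print(fisher_z_ci(0.45, n=50))           # roughly (0.196, 0.647)

    It is exactly this interval whose coverage degrades under the levels of skewness and kurtosis described above.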

  17. Screening and cervical cancer cure: population based cohort study

    PubMed Central

    Andersson, Therese M-L; Lambert, Paul C; Kemetli, Levent; Silfverdal, Lena; Strander, Björn; Ryd, Walter; Dillner, Joakim; Törnberg, Sven; Sparén, Pär

    2012-01-01

    Objective To determine whether detection of invasive cervical cancer by screening results in better prognosis or merely increases the lead time until death. Design Nationwide population based cohort study. Setting Sweden. Participants All 1230 women with cervical cancer diagnosed during 1999-2001 in Sweden prospectively followed up for an average of 8.5 years. Main outcome measures Cure proportions and five year relative survival ratios, stratified by screening history, mode of detection, age, histopathological type, and FIGO (International Federation of Gynecology and Obstetrics) stage. Results In the screening ages, the cure proportion for women with screen detected invasive cancer was 92% (95% confidence interval 75% to 98%) and for symptomatic women was 66% (62% to 70%), a statistically significant difference in cure of 26% (16% to 36%). Among symptomatic women, the cure proportion was significantly higher for those who had been screened according to recommendations (interval cancers) than among those overdue for screening: difference in cure 14% (95% confidence interval 6% to 23%). Cure proportions were similar for all histopathological types except small cell carcinomas and were closely related to FIGO stage. A significantly higher cure proportion for screen detected cancers remained after adjustment for stage at diagnosis (difference 15%, 7% to 22%). Conclusions Screening is associated with improved cure of cervical cancer. Confounding cannot be ruled out, but the effect was not attributable to lead time bias and was larger than what is reflected by down-staging. Evaluations of screening programmes should consider the assessment of cure proportions. PMID:22381677

  18. Screening and cervical cancer cure: population based cohort study.

    PubMed

    Andrae, Bengt; Andersson, Therese M-L; Lambert, Paul C; Kemetli, Levent; Silfverdal, Lena; Strander, Björn; Ryd, Walter; Dillner, Joakim; Törnberg, Sven; Sparén, Pär

    2012-03-01

    To determine whether detection of invasive cervical cancer by screening results in better prognosis or merely increases the lead time until death. Nationwide population based cohort study. Sweden. All 1230 women with cervical cancer diagnosed during 1999-2001 in Sweden prospectively followed up for an average of 8.5 years. Cure proportions and five year relative survival ratios, stratified by screening history, mode of detection, age, histopathological type, and FIGO (International Federation of Gynecology and Obstetrics) stage. In the screening ages, the cure proportion for women with screen detected invasive cancer was 92% (95% confidence interval 75% to 98%) and for symptomatic women was 66% (62% to 70%), a statistically significant difference in cure of 26% (16% to 36%). Among symptomatic women, the cure proportion was significantly higher for those who had been screened according to recommendations (interval cancers) than among those overdue for screening: difference in cure 14% (95% confidence interval 6% to 23%). Cure proportions were similar for all histopathological types except small cell carcinomas and were closely related to FIGO stage. A significantly higher cure proportion for screen detected cancers remained after adjustment for stage at diagnosis (difference 15%, 7% to 22%). Screening is associated with improved cure of cervical cancer. Confounding cannot be ruled out, but the effect was not attributable to lead time bias and was larger than what is reflected by down-staging. Evaluations of screening programmes should consider the assessment of cure proportions.

  19. Enhancing Flood Prediction Reliability Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Merwade, V.

    2017-12-01

    Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that considers uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.
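
    As a rough sketch of how BMA weights can be trained against observations, the snippet below runs an EM update for a mixture of normal predictive densities with a shared error variance, in the spirit of Raftery-style BMA; the study's exact formulation is not given in the abstract, and the toy ensemble is invented:

        import numpy as np

        def bma_em(forecasts, obs, n_iter=200):
            """forecasts: (n_models, n_times); obs: (n_times,) -> (weights, sigma)."""
            m, n = forecasts.shape
            w = np.full(m, 1.0 / m)
            sigma = np.std(obs - forecasts.mean(axis=0))
            for _ in range(n_iter):
                # E-step: responsibility of each model for each observation.
                dens = np.exp(-0.5 * ((obs - forecasts) / sigma) ** 2) / sigma
                z = w[:, None] * dens
                z /= z.sum(axis=0, keepdims=True)
                # M-step: update weights and the shared error variance.
                w = z.mean(axis=1)
                sigma = np.sqrt((z * (obs - forecasts) ** 2).sum() / n)
            return w, sigma

        rng = np.random.default_rng(2)
        truth = np.cumsum(rng.normal(0, 0.1, 300)) + 5.0       # a stage series
        ens = np.stack([truth + rng.normal(b, s, 300)          # three "models"
                        for b, s in [(0.0, 0.2), (0.3, 0.4), (-0.5, 0.8)]])
        w, sigma = bma_em(ens, truth)
        print("BMA weights:", np.round(w, 3), " sigma:", round(float(sigma), 3))

    The deterministic BMA prediction is then the weighted mean of the member forecasts, and the spread of the fitted mixture supplies the probabilistic part.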

  20. Optimizing Parameters of Axial Pressure-Compounded Ultra-Low Power Impulse Turbines at Preliminary Design

    NASA Astrophysics Data System (ADS)

    Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.

    2018-01-01

    Ultra-low-power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport, within the range of shaft power N_td = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low-power impulse turbine with pressure stages. The method is based on existing mathematical models of ultra-low-power turbine stage efficiency and mass, and has been used in a method for selecting rational parameters of a two-stage axial ultra-low-power turbine. The article describes the basic features of an algorithm for optimizing the two-stage turbine parameters and evaluating the efficiency criteria. The underlying mathematical models are intended for use at the preliminary design of the turbine drive. The optimization method was tested in the preliminary design of an air starter turbine. Validation was carried out by comparing the results of the optimization calculations with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate sufficient accuracy of the surrogate models for axial two-stage turbine parameter selection.

  1. Point of No Return From Water Loss in Coptotermes formosanus (Isoptera: Rhinotermitidae).

    PubMed

    Gautam, Bal K; Henderson, Gregg

    2015-08-01

    Describing desiccation stages based on the physical appearance of termites has not been evaluated previously. Formosan subterranean termites were studied to determine the rate of water loss, singly and in groups, in the laboratory. The stages of water loss are described based on changes in physical appearance and percent total body water loss evaluated at 2- to 8-h time intervals up to 32 h. Workers in groups lost water more slowly than workers tested individually. Weight loss was linear over time for worker groups and individuals, as it was for individual soldier trials. Water loss in individual workers was significantly faster than in soldiers. Three physical stages of desiccation are described for living workers: (I) curling of the antennae, (II) on their backs but able, with assistance, to right themselves and walk, and (III) unable to get off their backs; and two stages for living soldiers (II and III). Recovery was determined from termites in a second trial by transferring stage I, II, and III individuals from open, dry Petri dishes to dishes with moist filter paper at 4, 6, 10, 12, 14, 16, 24, 26 and 28 h. After 12 h on moist filter paper, stage I workers had an 83% recovery rate, stage II workers 33%, and stage III workers 7%. Soldiers had a 56% recovery rate at stage II, and their recovery at stage III was similar to that of workers at stage III. Most termites that reached stage III were destined to die. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  2. Virtual gaming to develop students' pediatric nursing skills: A usability test.

    PubMed

    Verkuyl, Margaret; Atack, Lynda; Mastrilli, Paula; Romaniuk, Daria

    2016-11-01

    As competition for specialty clinical placements increases, there is an urgent need to create safe, stimulating, alternative learning environments for students. To address that clinical gap, our team developed a virtual game-based simulation to help nursing students develop their pediatric nursing skills. A usability study was conducted using the Technology Acceptance Model as a research framework. The study was conducted at a community college and included nursing students, nursing faculty/clinicians and two gaming experts. The two experts evaluated the game using a heuristic checklist after playing the game. Participants engaged in a think-aloud activity while playing the game and completed a survey and interview based on the Technology Acceptance Model to explore ease of use and utility of the game. We found a high degree of user satisfaction with the game. Students reported that they had learned about pediatric care, they had become immersed in the game and they were keen to keep playing. Several design changes were recommended. Usability testing is critical in the early stages of simulation development and the study provided useful direction for the development team in the next stage of game development. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Standardized likelihood ratio test for comparing several log-normal means and confidence interval for the common mean.

    PubMed

    Krishnamoorthy, K; Oral, Evrim

    2017-12-01

    Standardized likelihood ratio test (SLRT) for testing the equality of means of several log-normal distributions is proposed. The properties of the SLRT and an available modified likelihood ratio test (MLRT) and a generalized variable (GV) test are evaluated by Monte Carlo simulation and compared. Evaluation studies indicate that the SLRT is accurate even for small samples, whereas the MLRT could be quite liberal for some parameter values, and the GV test is in general conservative and less powerful than the SLRT. Furthermore, a closed-form approximate confidence interval for the common mean of several log-normal distributions is developed using the method of variance estimate recovery, and compared with the generalized confidence interval with respect to coverage probabilities and precision. Simulation studies indicate that the proposed confidence interval is accurate and better than the generalized confidence interval in terms of coverage probabilities. The methods are illustrated using two examples.
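
    As a one-sample illustration of the method of variance estimate recovery (MOVER) mentioned above: on the log scale the log-normal mean is exp(mu + sigma^2/2), a sum of two parameters with well-known separate intervals (t-based for mu, chi-square-based for sigma^2), which MOVER recombines with a square-root rule. The sketch below handles a single sample only; the paper's interval for the common mean of several samples is a weighted extension not attempted here.

        import numpy as np
        from scipy import stats

        def lognormal_mean_ci(logdata, conf=0.95):
            n = len(logdata)
            ybar, s2 = np.mean(logdata), np.var(logdata, ddof=1)
            a = 1.0 - conf
            # Separate intervals for mu and for sigma^2/2 ...
            tq = stats.t.ppf(1 - a / 2, n - 1)
            l1, u1 = ybar - tq * np.sqrt(s2 / n), ybar + tq * np.sqrt(s2 / n)
            l2 = (n - 1) * s2 / (2 * stats.chi2.ppf(1 - a / 2, n - 1))
            u2 = (n - 1) * s2 / (2 * stats.chi2.ppf(a / 2, n - 1))
            # ... recombined by the MOVER square-root rule.
            est = ybar + s2 / 2
            L = est - np.sqrt((ybar - l1) ** 2 + (s2 / 2 - l2) ** 2)
            U = est + np.sqrt((u1 - ybar) ** 2 + (u2 - s2 / 2) ** 2)
            return np.exp(L), np.exp(U)

        x = np.random.default_rng(4).lognormal(mean=1.0, sigma=0.6, size=30)
        print(lognormal_mean_ci(np.log(x)))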

  4. Using simulation to isolate physician variation in intensive care unit admission decision making for critically ill elders with end-stage cancer: a pilot feasibility study.

    PubMed

    Barnato, Amber E; Hsu, Heather E; Bryce, Cindy L; Lave, Judith R; Emlet, Lillian L; Angus, Derek C; Arnold, Robert M

    2008-12-01

    To determine the feasibility of high-fidelity simulation for studying variation in intensive care unit admission decision making for critically ill elders with end-stage cancer. Mixed qualitative and quantitative analysis of physician subjects participating in a simulation scenario using a hospital set, actors, a medical chart, and vital-sign tracings. The simulation depicted a 78-yr-old man with metastatic gastric cancer, life-threatening hypoxia most likely attributable to cancer progression, and stable preferences to avoid intensive care unit admission and intubation. Two independent raters assessed the simulations, and subjects completed a post-simulation web-based survey and debriefing interview. Peter M. Winter Institute for Simulation Education and Research at the University of Pittsburgh. Twenty-seven hospital-based attending physicians, including 6 emergency physicians, 13 hospitalists, and 8 intensivists. Outcomes included qualitative report of clinical verisimilitude during the debriefing interview, survey-reported diagnosis and prognosis, and observed treatment decisions. Independent variables included physician demographics, risk attitude, and reactions to uncertainty. All (100%) reported that the case and simulation were highly realistic, and their diagnostic and prognostic assessments were consistent with our intent. Eight physicians (29.6%) admitted the patient to the intensive care unit. Among the eight physicians who admitted the patient to the intensive care unit, three (37%) initiated palliation, two (25%) documented the patient's code status (do not intubate/do not resuscitate), and one intubated the patient. Among the 19 physicians who did not admit the patient to the intensive care unit, 13 (68%) initiated palliation and 5 (42%) documented code status. Intensivists and emergency physicians (p = 0.048) were more likely to admit the patient to the intensive care unit. Years since medical school graduation were inversely associated with the initiation of palliative care (p = 0.043). Simulation can reproduce the decision context of intensive care unit triage for a critically ill patient with terminal illness. When faced with an identical patient, hospital-based physicians from the same institution vary significantly in their treatment decisions.

  5. Survival outcomes after radiation therapy for stage III non-small-cell lung cancer after adoption of computed tomography-based simulation.

    PubMed

    Chen, Aileen B; Neville, Bridget A; Sher, David J; Chen, Kun; Schrag, Deborah

    2011-06-10

    Technical studies suggest that computed tomography (CT) -based simulation improves the therapeutic ratio for thoracic radiation therapy (TRT), although few studies have evaluated its use or impact on outcomes. We used the Surveillance, Epidemiology and End Results (SEER)-Medicare linked data to identify CT-based simulation for TRT among Medicare beneficiaries diagnosed with stage III non-small-cell lung cancer (NSCLC) between 2000 and 2005. Demographic and clinical factors associated with use of CT simulation were identified, and the impact of CT simulation on survival was analyzed by using Cox models and propensity score analysis. The proportion of patients treated with TRT who had CT simulation increased from 2.4% in 1994 to 34.0% in 2000 to 77.6% in 2005. Of the 5,540 patients treated with TRT from 2000 to 2005, 60.1% had CT simulation. Geographic variation was seen in rates of CT simulation, with lower rates in rural areas and in the South and West compared with those in the Northeast and Midwest. Patients treated with chemotherapy were more likely to have CT simulation (65.2% v 51.2%; adjusted odds ratio, 1.67; 95% CI, 1.48 to 1.88; P < .01), although there was no significant association between use of surgery and CT simulation. Controlling for demographic and clinical characteristics, CT simulation was associated with lower risk of death (adjusted hazard ratio, 0.77; 95% CI, 0.73 to 0.82; P < .01) compared with conventional simulation. CT-based simulation has been widely, although not uniformly, adopted for the treatment of stage III NSCLC and is associated with higher survival among patients receiving TRT.

  6. Nanoscale Dewetting Transition in Protein Complex Folding

    PubMed Central

    Hua, Lan; Huang, Xuhui; Liu, Pu; Zhou, Ruhong; Berne, Bruce J.

    2011-01-01

    In a previous study, a surprising drying transition was observed to take place inside the nanoscale hydrophobic channel in the tetramer of the protein melittin. The goal of this paper is to determine if there are other protein complexes capable of displaying a dewetting transition during their final stage of folding. We searched the entire protein data bank (PDB) for all possible candidates, including protein tetramers, dimers, and two-domain proteins, and then performed the molecular dynamics (MD) simulations on the top candidates identified by a simple hydrophobic scoring function based on aligned hydrophobic surface areas. Our large scale MD simulations found several more proteins, including three tetramers, six dimers, and two two-domain proteins, which display a nanoscale dewetting transition in their final stage of folding. Even though the scoring function alone is not sufficient (i.e., a high score is necessary but not sufficient) in identifying the dewetting candidates, it does provide useful insights into the features of complex interfaces needed for dewetting. All top candidates have two features in common: (1) large aligned (matched) hydrophobic areas between two corresponding surfaces, and (2) large connected hydrophobic areas on the same surface. We have also studied the effect on dewetting of different water models and different treatments of the long-range electrostatic interactions (cutoff vs PME), and found the dewetting phenomena is fairly robust. This work presents a few proteins other than melittin tetramer for further experimental studies of the role of dewetting in the end stages of protein folding. PMID:17608515

  7. A manufacturing quality assessment model based-on two stages interval type-2 fuzzy logic

    NASA Astrophysics Data System (ADS)

    Purnomo, Muhammad Ridwan Andi; Helmi Shintya Dewi, Intan

    2016-01-01

    This paper presents the development of an assessment model for manufacturing quality using interval type-2 fuzzy logic (IT2-FL). The proposed model is developed based on one of the building blocks of sustainable supply chain management (SSCM), namely the benefits of SCM, and focuses on quality. The proposed model can be used to predict the quality level of the production chain in a company. The quality of production affects the quality of the product. In practice, production quality is unique to every type of production system; hence, expert opinion plays a major role in developing the assessment model. The model becomes more complicated when the data contain ambiguity and uncertainty. In this study, IT2-FL is used to model this ambiguity and uncertainty. A case study taken from a company in Yogyakarta shows that the proposed manufacturing quality assessment model works well in determining the quality level of production.
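
    A toy illustration of the interval type-2 ingredient: membership of a crisp input in an IT2 fuzzy set is an interval bounded by a lower and an upper membership function (the footprint of uncertainty), which is one way disagreement among experts can be encoded. All shapes and numbers below are invented, not taken from the paper:

        def tri(x, a, b, c):
            """Ordinary triangular membership function."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def it2_membership(x):
            upper = tri(x, 2.0, 5.0, 8.0)        # upper MF (wider support)
            lower = 0.6 * tri(x, 3.0, 5.0, 7.0)  # lower MF (nested, scaled)
            return lower, upper

        for x in (3.0, 5.0, 6.5):
            lo, hi = it2_membership(x)
            print(f"x = {x}: membership in 'good quality' = [{lo:.2f}, {hi:.2f}]")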

  8. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
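
    The degree-1 flavor of this idea reduces to a small linear program: choose lower and upper affine bounds of minimal average spread subject to containing every observation. The sketch below (synthetic data, scipy's linprog) illustrates the precise-measurement scenario; the paper's SOS/semidefinite formulations generalize this to richer bases and to the measurement-error case, which the sketch ignores.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(3)
        x = rng.uniform(0, 1, 60)
        y = 2.0 * x + 0.5 + rng.uniform(-0.3, 0.3, 60)     # noisy linear data

        # Decision variables: [a_lo, b_lo, a_hi, b_hi] for the bounds a + b*x.
        c = np.array([-1.0, -x.mean(), 1.0, x.mean()])     # mean spread
        A_ub, b_ub = [], []
        for xi, yi in zip(x, y):
            A_ub.append([1.0, xi, 0.0, 0.0]); b_ub.append(yi)     # lower <= y_i
            A_ub.append([0.0, 0.0, -1.0, -xi]); b_ub.append(-yi)  # upper >= y_i
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 4)

        a_lo, b_lo, a_hi, b_hi = res.x
        print(f"lower: {a_lo:.3f} + {b_lo:.3f}*x   upper: {a_hi:.3f} + {b_hi:.3f}*x")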

  9. Online evolution reconstruction from a single measurement record with random time intervals for quantum communication

    NASA Astrophysics Data System (ADS)

    Zhou, Hua; Su, Yang; Wang, Rong; Zhu, Yong; Shen, Huiping; Pu, Tao; Wu, Chuanxin; Zhao, Jiyong; Zhang, Baofu; Xu, Zhiyong

    2017-10-01

    Online reconstruction of a time-variant quantum state from the encoding/decoding results of quantum communication is addressed by developing a method of evolution reconstruction from a single measurement record with random time intervals. A time-variant two-dimensional state is reconstructed by recovering the expectation value functions of three nonorthogonal projectors from a random single measurement record, which is composed of the discarded qubits of the six-state protocol. The simulated results show that our method is robust to typical metro quantum channels. Our work extends the Fourier-based method of evolution reconstruction from the version for a regular single measurement record with equal time intervals to a unified one that can be applied to arbitrary single measurement records. The proposed protocol of evolution reconstruction runs concurrently with that of quantum communication, which can facilitate online quantum tomography.

  10. The contrasting effects of short-term climate change on the early recruitment of tree species.

    PubMed

    Ibáñez, Inés; Katz, Daniel S W; Lee, Benjamin R

    2017-07-01

    Predictions of plant responses to climate change are frequently based on organisms' presence in warmer locations, which are then assumed to reflect future performance in cooler areas. However, as plant life stages may be affected differently by environmental changes, there is little empirical evidence that this approach provides reliable estimates of short-term responses to global warming. Under this premise, we analyzed 8 years of early recruitment data, seed production and seedling establishment and survival, collected for two tree species at two latitudes. We quantified recruitment responses to a wide range of environmental conditions (temperature, soil moisture, and light), and simulated recruitment under two forecasted climatic scenarios. Annual demographic transitions were affected by the particular conditions taking place during their onset, but the effects of similar environmental shifts differed among the recruitment stages; seed production was higher in warmer years, while seedling establishment and survival peaked during cold years. Within a species, these effects also varied between latitudes; increasing temperatures at the southern location will have stronger detrimental effects on recruitment than similar changes at the northern location. Our simulations illustrate that warmer temperatures may increase seed production, but they will have a negative effect on establishment and survival. When the three early recruitment processes were simultaneously considered, simulations showed little change in recruitment dynamics at the northern site and a slight decrease at the southern site. It is only when we considered these three stages that we were able to assess likely changes in early recruitment under the predicted conditions.

  11. Terrestrial biosphere changes over the last 120 kyr and their impact on ocean δ13C

    NASA Astrophysics Data System (ADS)

    Hoogakker, B. A. A.; Smith, R. S.; Singarayer, J. S.; Marchant, R.; Prentice, I. C.; Allen, J. R. M.; Anderson, R. S.; Bhagwat, S. A.; Behling, H.; Borisova, O.; Bush, M.; Correa-Metrio, A.; de Vernal, A.; Finch, J. M.; Fréchette, B.; Lozano-Garcia, S.; Gosling, W. D.; Granoszewski, W.; Grimm, E. C.; Grüger, E.; Hanselman, J.; Harrison, S. P.; Hill, T. R.; Huntley, B.; Jiménez-Moreno, G.; Kershaw, P.; Ledru, M.-P.; Magri, D.; McKenzie, M.; Müller, U.; Nakagawa, T.; Novenko, E.; Penny, D.; Sadori, L.; Scott, L.; Stevenson, J.; Valdes, P. J.; Vandergoes, M.; Velichko, A.; Whitlock, C.; Tzedakis, C.

    2015-03-01

    A new global synthesis and biomization of long (>40 kyr) pollen-data records is presented, and used with simulations from the HadCM3 and FAMOUS climate models to analyse the dynamics of the global terrestrial biosphere and carbon storage over the last glacial-interglacial cycle. Global modelled (BIOME4) biome distributions over time generally agree well with those inferred from pollen data. The two climate models show good agreement in global net primary productivity (NPP). NPP is strongly influenced by atmospheric carbon dioxide (CO2) concentrations through CO2 fertilization. The combined effects of modelled changes in vegetation and (via a simple model) soil carbon result in a global terrestrial carbon storage at the Last Glacial Maximum that is 210-470 Pg C less than in pre-industrial time. Without the contribution from exposed glacial continental shelves the reduction would be larger, 330-960 Pg C. Other intervals of low terrestrial carbon storage include stadial intervals at 108 and 85 ka BP, and between 60 and 65 ka BP during Marine Isotope Stage 4. Terrestrial carbon storage, determined by the balance of global NPP and decomposition, influences the stable carbon isotope composition (δ13C) of seawater because terrestrial organic carbon is depleted in 13C. Using a simple carbon-isotope mass balance equation we find agreement in trends between modelled ocean δ13C based on modelled land carbon storage, and palaeo-archives of ocean δ13C, confirming that terrestrial carbon storage variations may be important drivers of ocean δ13C changes.
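
    For reference, a one-box isotope mass balance of the kind invoked here can be written as follows; the exact equation used in the paper may differ, and the numbers below are round figures rather than the paper's:

      \[
        \Delta\delta^{13}\mathrm{C}_{\mathrm{ocean}} \;\approx\;
        \frac{M_{\mathrm{terr}}\left(\delta^{13}\mathrm{C}_{\mathrm{terr}}
              - \delta^{13}\mathrm{C}_{\mathrm{ocean}}\right)}{M_{\mathrm{ocean}}}
      \]

    With terrestrial organic carbon near -25 per mil and an ocean-atmosphere carbon reservoir of roughly 40,000 Pg C, transferring 500 Pg C off the land lowers mean ocean δ13C by about 0.3 per mil, the order of magnitude seen in glacial-interglacial records.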

  12. The Role of Simulation Approaches in Statistics

    ERIC Educational Resources Information Center

    Wood, Michael

    2005-01-01

    This article explores the uses of a simulation model (the two bucket story)--implemented by a stand-alone computer program, or an Excel workbook (both on the web)--that can be used for deriving bootstrap confidence intervals, and simulating various probability distributions. The strengths of the model are its generality, the fact that it provides…

  13. CMOS based capacitance to digital converter circuit for MEMS sensor

    NASA Astrophysics Data System (ADS)

    Rotake, D. R.; Darji, A. D.

    2018-02-01

    Most MEMS cantilever based systems require costly instruments for characterization and processing and involve large experimental setups, which makes the devices non-portable. There is therefore a need for a low-cost, highly sensitive, high-speed, and portable digital system. The proposed Capacitance to Digital Converter (CDC) interfacing circuit converts capacitance into the digital domain, where it can be easily processed. Recent applications demand microcantilever deflection measurements in the parts-per-trillion range, which changes the capacitance within the 1-10 femtofarad (fF) range. The entire CDC circuit is designed in 250 nm CMOS technology. The CDC circuit consists of a D-latch and two oscillators, namely a sensor-controlled oscillator (SCO) and a digitally controlled oscillator (DCO). The D-latch is designed using a transmission-gate-based MUX for power optimization. CDC designs with 7-stage, 9-stage, and 11-stage oscillators were tested for 1-18 fF and simulated, including parasitics, using the Mentor Graphics Eldo tool. Since the proposed design uses no resistive components, the total power dissipation is reduced to 2.3621 mW for the CDC built with 9-stage SCO and DCO.
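
    A first-order picture of how such a converter digitizes capacitance: the sensor capacitance adds to the load of each ring-oscillator stage, stretching its period, and counting the sensor oscillator against the fixed reference yields a digital code. The component values and the 0.69RC delay model below are assumptions for illustration, not the paper's 250 nm design.

      N_STAGES = 9
      R_DRIVE = 50e3          # effective stage drive resistance (ohms, assumed)
      C_FIXED = 20e-15        # fixed parasitic load per stage (farads, assumed)

      def ring_freq(c_sensor):
          """Ring oscillator frequency with the sensor capacitance added."""
          stage_delay = 0.69 * R_DRIVE * (C_FIXED + c_sensor)  # RC step response
          return 1.0 / (2 * N_STAGES * stage_delay)

      f_ref = ring_freq(0.0)                 # reference: no sensor load
      for c_ff in (1, 5, 10, 18):            # femtofarad test points
          f = ring_freq(c_ff * 1e-15)
          code = round(f_ref / f * 1000)     # digital output (arbitrary LSB)
          print(f"C = {c_ff:2d} fF -> f = {f/1e6:6.1f} MHz, code = {code}")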

  14. Efficient low-bit-rate adaptive mesh-based motion compensation technique

    NASA Astrophysics Data System (ADS)

    Mahmoud, Hanan A.; Bayoumi, Magdy A.

    2001-08-01

    This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new, efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merging of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved with a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resulting partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of peak signal-to-noise ratio (PSNR) and compression ratio (CR).
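
    The affine-mapping step is a standard construction: the three vertex motion vectors of a mesh triangle determine a six-parameter affine transform, which is then used to warp every pixel inside the triangle. The sketch below solves for those parameters from illustrative vertex correspondences (not data from the paper).

      import numpy as np

      def affine_from_triangle(src, dst):
          """Solve the affine map taking 3 src vertices onto 3 dst vertices."""
          A = np.array([[x, y, 1.0] for x, y in src])
          px = np.linalg.solve(A, [x for x, _ in dst])   # x' coefficients
          py = np.linalg.solve(A, [y for _, y in dst])   # y' coefficients
          return px, py   # x' = px . (x, y, 1), y' = py . (x, y, 1)

      src = [(0, 0), (16, 0), (0, 16)]
      dst = [(1, 2), (17, 1), (0, 19)]      # vertices displaced by motion
      px, py = affine_from_triangle(src, dst)
      x, y = 5.0, 4.0                       # a pixel inside the triangle
      print("warped:", np.dot(px, (x, y, 1)), np.dot(py, (x, y, 1)))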

  15. Combination of radiation therapy and firocoxib for the treatment of canine nasal carcinoma.

    PubMed

    Cancedda, Simona; Sabattini, Silvia; Bettini, Giuliano; Leone, Vito F; Laganga, Paola; Rossi, Federica; Terragni, Rossella; Gnudi, Giacomo; Vignoli, Massimo

    2015-01-01

    Carcinomas represent two-thirds of canine nasosinal neoplasms. Although radiation therapy (RT) is the standard of care, the incidence of local recurrence following treatment is high. Cyclooxygenase-isoform-2 (COX-2) is expressed in 71-95% of canine nasal carcinomas and has been implicated in tumor growth and angiogenesis. Accordingly, COX-2 inhibition seems rational to improve outcome. Dogs with histologically confirmed, previously untreated nasal carcinomas were randomized to receive the combination of a selective COX-2 inhibitor (firocoxib) and palliative RT (Group 1) or RT and placebo (Group 2). Patients were regularly monitored with blood tests, urinalysis, and computed tomography. Pet owners were asked to complete a quality-of-life questionnaire monthly. Twenty-four dogs were prospectively enrolled. According to the Adams modified system, there were five stage 1, five stage 2, three stage 3, and 11 stage 4 tumors. Two dogs had metastases to regional lymph nodes. Median progression-free interval and overall survival were 228 and 335 days in Group 1 (n = 12) and 234 and 244 days in Group 2 (n = 12). These differences were not statistically significant. The involvement of regional lymph nodes was significantly associated with progression-free interval and overall survival (P = 0.004). Quality of life was significantly improved in Group 1 (P = 0.008). In particular, a significant difference was observed for activity and appetite. Although it did not significantly improve progression-free interval or overall survival, firocoxib in combination with RT was safe and improved quality of life in dogs with nasal carcinomas. © 2015 American College of Veterinary Radiology.

  16. Flood-inundation maps for the Meramec River at Valley Park and at Fenton, Missouri, 2017

    USGS Publications Warehouse

    Dietsch, Benjamin J.; Sappington, Jacob N.

    2017-09-29

    Two sets of digital flood-inundation map libraries that spanned a combined 16.7-mile reach of the Meramec River that extends upstream from Valley Park, Missouri, to downstream from Fenton, Mo., were created by the U.S. Geological Survey (USGS) in cooperation with the U.S. Army Corps of Engineers, St. Louis Metropolitan Sewer District, Missouri Department of Transportation, Missouri American Water, and Federal Emergency Management Agency Region 7. The flood-inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science website at https://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the cooperative USGS streamgages on the Meramec River at Valley Park, Mo., (USGS station number 07019130) and the Meramec River at Fenton, Mo. (USGS station number 07019210). Near-real-time stage data at these streamgages may be obtained from the USGS National Water Information System at https://waterdata.usgs.gov/nwis or the National Weather Service (NWS) Advanced Hydrologic Prediction Service at http://water.weather.gov/ahps/, which also forecasts flood hydrographs at these sites (listed as NWS sites vllm7 and fnnm7, respectively).

    Flood profiles were computed for the stream reaches by means of a calibrated one-dimensional step-backwater hydraulic model. The model was calibrated using a stage-discharge relation at the Meramec River near Eureka streamgage (USGS station number 07019000) and documented high-water marks from the flood of December 2015 through January 2016.

    The calibrated hydraulic model was used to compute two sets of water-surface profiles: one set for the streamgage at Valley Park, Mo. (USGS station number 07019130), and one set for the USGS streamgage on the Meramec River at Fenton, Mo. (USGS station number 07019210). The water-surface profiles were produced for stages at 1-foot (ft) intervals referenced to the datum from each streamgage and ranging from the NWS action stage, or near bankfull discharge, to the stage corresponding to the estimated 0.2-percent annual exceedance probability (500-year recurrence interval) flood, as determined at the Eureka streamgage (USGS station number 07019000). The simulated water-surface profiles were then combined with a geographic information system digital elevation model (derived from light detection and ranging data having a 0.28-ft vertical accuracy and 3.28-ft horizontal resolution) to delineate the area flooded at each flood stage (water level).

    The availability of these maps, along with internet information regarding current stage from the USGS streamgages and forecasted high-flow stages from the NWS, will provide emergency management personnel and residents with information that is critical for flood response activities such as evacuations and road closures and for postflood recovery efforts.

  17. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    PubMed

    Kim, Hea-Jung

    2014-01-01

    This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference of normal models when an interval constraint in the mean parameter space needs to be incorporated in the modeling but when such a restriction is uncertain. An objective measure of the uncertainty regarding the interval constraint that is accounted for by the HSGM is proposed for Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean, eliciting the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations pertaining to the properties of the uncertainty measure are provided.

  18. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates

    PubMed Central

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H. Irene

    2016-01-01

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV infected person seeks a test for HIV during a particular time interval, given no previous positive test has been obtained prior to the start of the interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnoses data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate taking into account the temporal dependence of these parameters to improve the estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and the analysis of the national HIV surveillance data in the USA. PMID:26567891
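
    A forward simulation of the two-level structure just described makes the roles of the parameters concrete. The incidence, testing, and progression values below are hypothetical, and the yearly competing-risk bookkeeping is deliberately simplified relative to the paper's model.

      import numpy as np

      rng = np.random.default_rng(2)
      years = 10
      incidence = 1000.0      # Poisson intensity of new infections (assumed)
      test_rate = 0.25        # annual probability of seeking an HIV test
      aids_rate = 0.08        # annual probability of progressing to AIDS

      records = []
      for y0 in range(years):
          n = rng.poisson(incidence)            # level 1: latent infections
          for _ in range(n):
              for y in range(y0, years):        # level 2: yearly outcomes
                  if rng.random() < test_rate:
                      records.append((y, "hiv")); break
                  if rng.random() < aids_rate:
                      records.append((y, "aids")); break

      hiv = sum(1 for _, kind in records if kind == "hiv")
      aids = sum(1 for _, kind in records if kind == "aids")
      print(f"AIDS-free HIV diagnoses: {hiv}, diagnoses at AIDS stage: {aids}")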

  19. Detection of lung cancer through low-dose CT screening (NELSON): a prespecified analysis of screening test performance and interval cancers.

    PubMed

    Horeweg, Nanda; Scholten, Ernst Th; de Jong, Pim A; van der Aalst, Carlijn M; Weenink, Carla; Lammers, Jan-Willem J; Nackaerts, Kristiaan; Vliegenthart, Rozemarijn; ten Haaf, Kevin; Yousaf-Khan, Uraujh A; Heuvelmans, Marjolein A; Thunnissen, Erik; Oudkerk, Matthijs; Mali, Willem; de Koning, Harry J

    2014-11-01

    Low-dose CT screening is recommended for individuals at high risk of developing lung cancer. However, CT screening does not detect all lung cancers: some might be missed at screening, and others can develop in the interval between screens. The NELSON trial is a randomised trial to assess the effect of screening with increasing screening intervals on lung cancer mortality. In this prespecified analysis, we aimed to assess screening test performance, and the epidemiological, radiological, and clinical characteristics of interval cancers in NELSON trial participants assigned to the screening group. Eligible participants in the NELSON trial were those aged 50-75 years, who had smoked 15 or more cigarettes per day for more than 25 years or ten or more cigarettes for more than 30 years, and were still smoking or had quit less than 10 years ago. We included all participants assigned to the screening group who had attended at least one round of screening. Screening test results were based on volumetry using a two-step approach. Initially, screening test results were classified as negative, indeterminate, or positive based on nodule presence and volume. Subsequently, participants with an initial indeterminate result underwent follow-up screening to classify their final screening test result as negative or positive, based on nodule volume doubling time. We obtained information about all lung cancer diagnoses made during the first three rounds of screening, plus an additional 2 years of follow-up from the national cancer registry. We determined epidemiological, radiological, participant, and tumour characteristics by reassessing medical files, screening CTs, and clinical CTs. The NELSON trial is registered at www.trialregister.nl, number ISRCTN63545820. 15,822 participants were enrolled in the NELSON trial, of whom 7915 were assigned to low-dose CT screening with increasing interval between screens, and 7907 to no screening. We included 7155 participants in our study, with median follow-up of 8·16 years (IQR 7·56-8·56). 187 (3%) of 7155 screened participants were diagnosed with 196 screen-detected lung cancers, and another 34 (<1%; 19 [56%] in the first year after screening, and 15 [44%] in the second year after screening) were diagnosed with 35 interval cancers. For the three screening rounds combined, with a 2-year follow-up, sensitivity was 84·6% (95% CI 79·6-89·2), specificity was 98·6% (95% CI 98·5-98·8), positive predictive value was 40·4% (95% CI 35·9-44·7), and negative predictive value was 99·8% (95% CI 99·8-99·9). Retrospective assessment of the last screening CT and clinical CT in 34 patients with interval cancer showed that interval cancers were not visible in 12 (35%) cases. In the remaining cases, cancers were visible when retrospectively assessed, but were not diagnosed because of radiological detection and interpretation errors (17 [50%]), misclassification by the protocol (two [6%]), participant non-compliance (two [6%]), and non-adherence to protocol (one [3%]). Compared with screen-detected cancers, interval cancers were diagnosed at more advanced stages (29 [83%] of 35 interval cancers vs 44 [22%] of 196 screen-detected cancers diagnosed in stage III or IV; p<0·0001), were more often small-cell carcinomas (seven [20%] vs eight [4%]; p=0·003) and less often adenocarcinomas (nine [26%] vs 102 [52%]; p=0·005). Lung cancer screening in the NELSON trial yielded high specificity and sensitivity, with only a small number of interval cancers. 
The results of this study could be used to improve screening algorithms, and reduce the number of missed cancers. Zorgonderzoek Nederland Medische Wetenschappen and Koningin Wilhelmina Fonds. Copyright © 2014 Elsevier Ltd. All rights reserved.
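
    The reported test-performance figures follow from an ordinary 2x2 confusion table. The sketch below reproduces them, taking the 187 screen-detected cases as true positives and the 34 interval cancers as false negatives; the false-positive and true-negative counts are back-calculated (approximately) from the quoted PPV and specificity.

      tp, fn = 187, 34        # from the abstract
      fp, tn = 276, 19440     # implied by PPV 40.4% and specificity 98.6%

      sensitivity = tp / (tp + fn)          # 187/221 = 0.846
      specificity = tn / (tn + fp)          # ~0.986
      ppv = tp / (tp + fp)                  # ~0.404
      npv = tn / (tn + fn)                  # ~0.998
      print(f"sens {sensitivity:.3f}  spec {specificity:.3f}  "
            f"ppv {ppv:.3f}  npv {npv:.3f}")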

  20. Combined Experimental and Numerical Simulations of Thermal Barrier Coated Turbine Blades Erosion

    NASA Technical Reports Server (NTRS)

    Hamed, Awate; Tabakoff, Widen; Swar, Rohan; Shin, Dongyun; Woggon, Nthanial; Miller, Robert

    2013-01-01

    A combined experimental and computational study was conducted to investigate the erosion of thermal barrier coated (TBC) blade surfaces by alumina particle ingestion in a single stage turbine. In the experimental investigation, tests of particle surface interactions were performed in specially designed tunnels to determine the erosion rates and particle restitution characteristics under different impact conditions. The experimental results show that the erosion rates increase with increased impingement angle, impact velocity and temperature. In the computational simulations, an Euler-Lagrangian two stage approach is used in obtaining numerical solutions to the three-dimensional compressible Reynolds Averaged Navier-Stokes equations and the particles' equations of motion in each blade passage reference frame. User defined functions (UDF) were developed to represent experimentally-based correlations for particle surface interaction models which were employed in the three-dimensional particle trajectory simulations to determine the particle rebound characteristics after each surface impact. The experimentally based erosion UDF model was used to predict the TBC erosion rates on the turbine blade surfaces based on the computed statistical data of the particles' impact locations, velocities and angles relative to the blade surface. Computational results are presented for the predicted TBC blade erosion in a single stage commercial APU turbine, for a NASA designed automotive turbine, and for the NASA turbine scaled for modern rotorcraft operating conditions. The erosion patterns in the turbines are discussed for uniform particle ingestion and for particle ingestion concentrated in the inner and outer 5 percent of the stator blade span representing the flow cooling the combustor liner.

  1. Manipulation of radial-variant polarization for creating tunable bifocusing spots.

    PubMed

    Gu, Bing; Pan, Yang; Wu, Jia-Lu; Cui, Yiping

    2014-02-01

    We propose and generate a new radial-variant vector field (RV-VF) with a distribution of states of polarization described by the square of the radius and exploit its focusing property. Theoretically, we present the analytical expressions for the three-dimensional electric field of the vector field focused by a thin lens under the nonparaxial and paraxial approximations based on the vectorial Rayleigh-Sommerfeld formulas. Numerical simulations indicate that this focused field exhibits bifocusing spots along the optical axis. The underlying mechanism for generating the bifocusing property is analyzed in detail. We give the analytical formula for the interval between the two foci. Experimentally, we generate the RV-VFs with alterable topological charge and demonstrate that the interval between the two foci is controllable by tuning the radial topological charge. This particular focal field has specific applications for biparticle trapping, manipulation, alignment, transportation, and acceleration along the optical axis.

  2. Real-time flutter boundary prediction based on time series models

    NASA Astrophysics Data System (ADS)

    Gu, Wenjing; Zhou, Li

    2018-03-01

    For the purpose of predicting the flutter boundary in real time during flutter flight tests, two time series models, together with corresponding stability criteria, are adopted in this paper. The first method divides a long nonstationary response signal into many contiguous intervals, each of which is considered stationary. The traditional AR model is then established to represent each interval of the signal sequence. The second method employs a time-varying AR model to characterize the actual signals measured in flutter tests with progression variable speed (FTPVS). To predict the flutter boundary, stability parameters are formulated from the identified AR coefficients combined with Jury's stability criterion. The behavior of the parameters is examined using both simulated and wind-tunnel experiment data. The results demonstrate that both methods are effective in predicting the flutter boundary at lower speed levels. A comparison between the two methods is also given in this paper.
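
    A minimal sketch of the first method, under simplifying assumptions rather than the paper's exact formulation: fit an AR model to each (assumed stationary) interval by least squares and track the magnitude of the largest characteristic root, a root-based proxy for Jury's criterion; flutter onset corresponds to that magnitude approaching one.

      import numpy as np

      def ar_stability(signal, order=4):
          """Return max |root| of the fitted AR characteristic polynomial."""
          x = np.asarray(signal, float)
          # Least-squares AR fit: x[t] = a1*x[t-1] + ... + ap*x[t-p]
          X = np.column_stack([x[order - k - 1:len(x) - k - 1]
                               for k in range(order)])
          a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
          roots = np.roots(np.r_[1.0, -a])   # z^p - a1 z^(p-1) - ... - ap
          return np.abs(roots).max()

      # Lightly damped oscillation: stability parameter just below 1.
      t = np.arange(2000) * 0.01
      y = np.exp(-0.05 * t) * np.sin(2 * np.pi * 3 * t)
      y += 0.01 * np.random.default_rng(3).standard_normal(t.size)
      print("max characteristic root magnitude:", round(ar_stability(y), 4))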

  3. On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.

    PubMed

    Koyama, Shinsuke

    2015-07-01

    We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
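
    One concrete generator consistent with the assumed power law (the paper's own construction is more general) is a gamma distribution with its shape and scale chosen to match the target mean and variance, as sketched below with illustrative parameter values.

      import numpy as np

      rng = np.random.default_rng(4)
      phi, alpha = 0.5, 1.5              # scale factor and exponent (assumed)

      def isi_sample(mean, n):
          """Interspike intervals with var = phi * mean**alpha."""
          var = phi * mean ** alpha
          shape = mean ** 2 / var        # gamma: mean = shape * scale
          scale = var / mean             #        var  = shape * scale**2
          return rng.gamma(shape, scale, n)

      for mu in (0.05, 0.1, 0.2):        # mean ISIs in seconds
          x = isi_sample(mu, 100_000)
          print(f"mean {x.mean():.4f}  var {x.var():.6f}  "
                f"target var {phi * mu**alpha:.6f}")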

  4. The prelaying interval of emperor geese on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Hupp, Jerry W.; Schmutz, J.A.; Ely, Craig R.

    2006-01-01

    We marked 136 female Emperor Geese (Chen canagica) in western Alaska with VHF or satellite (PTT) transmitters from 1999 to 2003 to monitor their spring arrival and nest initiation dates on the Yukon Delta, and to estimate prelaying interval lengths once at the nesting area. Ninety-two females with functional transmitters returned to the Yukon Delta in the spring after they were marked, and we located the nests of 35 of these individuals. Prelaying intervals were influenced by when snow melted in the spring and individual arrival dates on the Yukon Delta. The median prelaying interval was 15 days (range = 12-19 days) in a year when snow melted relatively late, and 11 days (range = 4-16 days) in two warmer years when snow melted earlier. In years when snow melted earlier, prelaying intervals of <12 days for 11 of 15 females suggested they initiated rapid follicle development on spring staging areas. The prelaying interval declined by approximately 0.4 days and nest initiation date increased approximately 0.5 days for each day a female delayed her arrival. Thus, females that arrived first on the Yukon Delta had prelaying intervals up to four days longer, yet they nested up to five days earlier, than females that arrived last. The proximity of spring staging areas on the Alaska Peninsula to nesting areas on the Yukon Delta may enable Emperor Geese to alter timing of follicle development depending on annual conditions, and to invest nutrients acquired from both areas in eggs during their formation. Plasticity in timing of follicle development is likely advantageous in a variable environment where melting of snow cover in the spring can vary by 2-3 weeks annually. © The Cooper Ornithological Society 2006.

  5. Flow units classification for geostatisitical three-dimensional modeling of a non-marine sandstone reservoir: A case study from the Paleocene Funing Formation of the Gaoji Oilfield, east China

    NASA Astrophysics Data System (ADS)

    Zhang, Penghui; Zhang, Jinliang; Wang, Jinkai; Li, Ming; Liang, Jie; Wu, Yingli

    2018-05-01

    Flow unit classification can be used in reservoir characterization, and characterizing the reservoir interval into flow units is an effective way to simulate the reservoir. Paraflow units (PFUs), the second level of flow units, are used to estimate the spatial distribution of continental clastic reservoirs at the detailed reservoir description stage. In this study, we investigate a nonroutine methodology to predict the external and internal distribution of PFUs. The methodology outlined enables the classification of PFUs using sandstone core samples and log data. Relationships between porosity, permeability, and pore-throat aperture radius (r35) were established for core and log data obtained from 26 wells from the Funing Formation, Gaoji Oilfield, Subei Basin, China. The present study refines predicted PFUs at logged (0.125-m) intervals, a much finer scale than that of routine methods. Meanwhile, three-dimensional models are built using sequential indicator simulation to characterize PFUs in wells. Four distinct PFUs are classified and located based on the statistical methodology of cluster analysis, and each PFU has a different seepage ability. The results of this study demonstrate that the obtained models are able to quantify reservoir heterogeneity. Because of their different petrophysical characteristics and seepage abilities, PFUs have a significant impact on the distribution of the remaining oil. Considering these allows a more accurate understanding of reservoir quality, especially within non-marine sandstone reservoirs.
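
    The study derives its own core-calibrated porosity-permeability-r35 relationships; as a stand-in, the sketch below uses the widely published Winland regression (k in mD, porosity in percent, r35 in microns) together with conventional port-size classes to assign flow-unit labels. Thresholds and sample values are illustrative.

      import math

      def winland_r35(k_md, phi_pct):
          """Winland r35 pore-throat radius (microns)."""
          return 10 ** (0.732 + 0.588 * math.log10(k_md)
                        - 0.864 * math.log10(phi_pct))

      def flow_unit(r35):
          """Conventional port-size classes based on r35 thresholds."""
          for limit, name in [(10, "mega"), (2.5, "macro"),
                              (0.5, "meso"), (0.1, "micro")]:
              if r35 >= limit:
                  return name
          return "nano"

      for k, phi in [(250.0, 22.0), (15.0, 18.0), (0.3, 10.0)]:
          r35 = winland_r35(k, phi)
          print(f"k={k:6.1f} mD, phi={phi:4.1f}% -> "
                f"r35={r35:6.2f} um ({flow_unit(r35)})")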

  6. Breast cancer screening programmes: the development of a monitoring and evaluation system.

    PubMed

    Day, N E; Williams, D R; Khaw, K T

    1989-06-01

    It is important that the introduction of breast screening is closely monitored. The anticipated effect on breast cancer mortality will take 10 years or more to emerge fully, and will only occur if a succession of more short-term end points are met. Data from the Swedish two-county randomised trial provide targets that should be achieved, following a logical progression of compliance with the initial invitation, prevalence and stage distribution at the prevalence screen, the rate of interval cancers after the initial screen, the pick-up rate and stage distribution at later screening tests, the rate of interval cancers after later tests, the absolute rate of advanced cancer and finally the breast cancer mortality rate. For evaluation purposes, historical data on stage at diagnosis are desirable; it is suggested that tumour size is probably the most relevant variable available in most cases.

  7. Picosecond-precision multichannel autonomous time and frequency counter

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kwiatkowski, P.; Różyc, K.; Jachna, Z.; Sondej, T.

    2017-12-01

    This paper presents the design, implementation, and test results of a multichannel time interval and frequency counter developed as a desktop instrument. The counter contains four main functional modules for (1) performing precise measurements, (2) controlling and fast data processing, (3) low-noise power supplying, and (4) supplying a stable reference clock (optional rubidium standard). Fundamental to the counter, the time interval measurement is based on time stamping combined with period counting and in-period two-stage time interpolation, which allows us to achieve a wide measurement range (above 1 h), high precision (even better than 4.5 ps), and a high measurement speed (up to 91.2 × 10⁶ timestamps/s). The frequency is measured up to 3.0 GHz with the use of the reciprocal method. The wide functionality of the counter also includes the evaluation of the frequency stability of clocks and oscillators (Allan deviation) and of phase variation (time interval error, maximum time interval error, time deviation). The 8-channel measurement module is based on a field programmable gate array device, while the control unit involves a microcontroller with a high performance ARM-Cortex core. Efficient and user-friendly control of the counter is provided either locally, through the built-in keypad or/and color touch panel, or remotely, with the aid of USB, Ethernet, RS232C, or RS485 interfaces.
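
    Of the statistics such a counter reports, the Allan deviation is compact enough to sketch: the standard overlapping estimator applied to a phase (time-error) record, shown below on synthetic white-frequency noise with an assumed 1 s sampling interval.

      import numpy as np

      def allan_deviation(x, tau0, m):
          """Overlapping Allan deviation at averaging time m*tau0."""
          x = np.asarray(x, float)
          d = x[2 * m:] - 2 * x[m:-m] + x[:-2 * m]    # second differences
          return np.sqrt(np.mean(d ** 2) / (2 * (m * tau0) ** 2))

      rng = np.random.default_rng(5)
      tau0 = 1.0                                       # sampling interval (s)
      x = np.cumsum(rng.standard_normal(10_000)) * 1e-9  # phase record (s)
      for m in (1, 10, 100):
          print(f"tau = {m * tau0:5.0f} s  ADEV = {allan_deviation(x, tau0, m):.2e}")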

  9. Cognitive control level of action for analyzing verbal reports in educative clinical simulation situations.

    PubMed

    Morineau, Thierry; Meineri, Sebastien; Chapelain, Pascal

    2017-03-01

    Several methods and theoretical frameworks have been proposed for efficient debriefing after clinical simulation sessions. In these studies, however, the cognitive processes underlying the debriefing stage are not directly addressed. Cognitive control constitutes a conceptual link between behavior and reflection on behavior to apprehend debriefing cognitively. Our goal was to analyze cognitive control from verbal reports using the Skill-Rule-Knowledge model. This model considers different cognitive control levels from skill-based to rule-based and knowledge-based control. An experiment was conducted with teams of nursing students who were confronted with emergency scenarios during high-fidelity simulation sessions. Participants were asked to describe their actions either in the course of the simulation scenarios or during the debriefing stage. 52 nursing students working in 26 pairs participated in this study. Participants were divided into two groups: an "in situ" group in which they had to describe their actions at different moments of a deteriorating patient scenario, and a "debriefing" group, in which, at the same moments, they had to describe their actions displayed on a video recording. In addition to a cognitive analysis, the teams' clinical performance was measured. The cognitive control level in the debriefing group was generally higher than in the in situ group. Good team performance was associated with a high level of cognitive control after a patient's significant state deterioration. These findings are in conformity with the "Skill-Rule-Knowledge" model. The debriefing stage allows a deeper reflection on action compared with the in situ condition. If an abnormal event such as an adverse event occurs, participants' mental processes tend to migrate towards knowledge-based control. This migration particularly concerns students with the best clinical performance. Thus, this cognitive framework can help to strengthen the analysis of verbal reports. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. A hybrid system dynamics and optimization approach for supporting sustainable water resources planning in Zhengzhou City, China

    NASA Astrophysics Data System (ADS)

    Li, Zhi; Li, Chunhui; Wang, Xuan; Peng, Cong; Cai, Yanpeng; Huang, Weichen

    2018-01-01

    Problems with water resources restrict the sustainable development of a city with water shortages. Based on system dynamics (SD) theory, a model of sustainable utilization of water resources has been established using the STELLA software. This model consists of four subsystems: a population system, an economic system, a water supply system, and a water demand system. The boundaries of the four subsystems are vague, but they are closely related and interdependent. The model is applied to Zhengzhou City, China, which has a serious water shortage. The difference between water supply and demand is very prominent in Zhengzhou City. The model was verified with data from 2009 to 2013. The results show that the water demand of Zhengzhou City will reach 2.57 billion m3 in 2020. A water resources optimization model is developed based on interval-parameter two-stage stochastic programming. The objective of the model is to allocate water resources to each water-use sector at the lowest cost while satisfying minimum water demands. Using the simulation results, decision makers can easily weigh the costs of the system, the water allocation objectives, and the system risk. The hybrid system dynamics and optimization approach is a rational attempt to support water resources management in many cities, particularly those facing potential water shortages, and it is solidly supported by previous studies and collected data.
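
    The optimization layer follows the familiar two-stage pattern echoed throughout this collection: allocation targets are fixed first, and shortages are penalized once the random supply is revealed. The sketch below is the deterministic scenario core of such a program with illustrative numbers; the paper additionally wraps interval parameters around it.

      import numpy as np
      from scipy.optimize import linprog

      benefit = np.array([3.0, 2.0])        # net benefit per unit delivered
      penalty = np.array([5.0, 3.5])        # penalty per unit of shortage
      supply = np.array([6.0, 9.0, 12.0])   # low/medium/high supply scenarios
      prob = np.array([0.3, 0.5, 0.2])

      nu, ns = 2, 3                         # users, scenarios
      # Variables: [T_1, T_2, D_11, D_21, D_12, D_22, D_13, D_23]
      c = np.concatenate([-benefit, np.kron(prob, penalty)])
      A, b = [], []
      for s in range(ns):                   # sum_i (T_i - D_is) <= q_s
          row = np.zeros(nu + nu * ns); row[:nu] = 1.0
          row[nu + s * nu: nu + (s + 1) * nu] = -1.0
          A.append(row); b.append(supply[s])
      for s in range(ns):                   # shortage cannot exceed target
          for i in range(nu):
              row = np.zeros(nu + nu * ns)
              row[i] = -1.0; row[nu + s * nu + i] = 1.0
              A.append(row); b.append(0.0)
      bounds = [(0, 8)] * nu + [(0, None)] * (nu * ns)
      res = linprog(c, A_ub=A, b_ub=b, bounds=bounds)
      print("targets:", res.x[:nu].round(2),
            " expected net benefit:", round(-res.fun, 2))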

  11. Control of flea populations in a simulated home environment model using lufenuron, imidacloprid or fipronil.

    PubMed

    Jacobs, D E; Hutchinson, M J; Ryan, W G

    2001-03-01

    Control strategies were evaluated over a 6-month period in a home simulation model comprising a series of similar carpeted pens, housing matched groups of six cats, in which the life-cycle of the flea Ctenocephalides felis felis Bouche (Siphonaptera: Pulicidae) had been established. Additional adult fleas were placed on the cats at intervals to mimic acquisition of extraneous fleas from outside the home. Treatment strategies included a single subcutaneous deposition of injectable lufenuron supported by initial treatments with a short-acting insecticidal spray, or monthly topical applications of imidacloprid or fipronil. An untreated control group indicated that conditions were suitable for flea replication and development. Controls had to be combed on 18 occasions to remove excessive flea burdens and two developed allergic reactions. Lufenuron cats were combed once and required two insecticidal treatments in the first month to achieve control. Even so, small flea burdens were constantly present thereafter. Imidacloprid and fipronil treatments appeared to give virtually complete control throughout. Single fleas were found on imidacloprid cats on two occasions, whereas none were recovered from fipronil cats at any time after the first treatment. Tracer cats were used to monitor re-infestation rates at the end of the trial period. Small numbers of host-seeking fleas were demonstrated in all treatment pens, indicating that total eradication had not been accomplished. It is concluded that the home environment simulation model incorporating tracer animals could provide a powerful tool for studying flea population dynamics under controlled conditions but improved techniques are needed for quantifying other off-host life-cycle stages.

  12. Interpretable functional principal component analysis.

    PubMed

    Lin, Zhenhua; Wang, Liangliang; Cao, Jiguo

    2016-09-01

    Functional principal component analysis (FPCA) is a popular approach to explore major sources of variation in a sample of random curves. These major sources of variation are represented by functional principal components (FPCs). The intervals where the values of FPCs are significant are interpreted as where sample curves have major variations. However, these intervals are often hard for naïve users to identify, because of the vague definition of "significant values". In this article, we develop a novel penalty-based method to derive FPCs that are only nonzero precisely in the intervals where the values of FPCs are significant, whence the derived FPCs possess better interpretability than the FPCs derived from existing methods. To compute the proposed FPCs, we devise an efficient algorithm based on projection deflation techniques. We show that the proposed interpretable FPCs are strongly consistent and asymptotically normal under mild conditions. Simulation studies confirm that with a competitive performance in explaining variations of sample curves, the proposed FPCs are more interpretable than the traditional counterparts. This advantage is demonstrated by analyzing two real datasets, namely, electroencephalography data and Canadian weather data. © 2015, The International Biometric Society.
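
    The flavor of the idea can be conveyed with a generic sparse-PCA recipe (not the authors' exact algorithm): a soft-threshold step inside the power iteration zeroes the component outside the region of strong variation, and projection deflation removes each component before the next is extracted. Data and penalty level below are synthetic.

      import numpy as np

      def sparse_pc(S, lam, iters=200):
          """Leading sparse component of covariance S via thresholded
          power iteration."""
          v = np.linalg.eigh(S)[1][:, -1]
          for _ in range(iters):
              w = S @ v
              w = np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)  # soft threshold
              if np.linalg.norm(w) == 0:
                  return v * 0
              v = w / np.linalg.norm(w)
          return v

      rng = np.random.default_rng(6)
      t = np.linspace(0, 1, 50)
      curves = (rng.standard_normal((100, 1)) * np.exp(-((t - 0.3) / 0.05) ** 2)
                + rng.standard_normal((100, 1)) * np.exp(-((t - 0.7) / 0.05) ** 2)
                + 0.05 * rng.standard_normal((100, 50)))
      S = np.cov(curves, rowvar=False)

      v1 = sparse_pc(S, lam=0.02)
      P = np.eye(S.shape[0]) - np.outer(v1, v1)
      S2 = P @ S @ P                         # projection deflation
      v2 = sparse_pc(S2, lam=0.02)
      print("FPC1 nonzero at", int(np.count_nonzero(v1)), "of 50 grid points")
      print("FPC2 nonzero at", int(np.count_nonzero(v2)), "of 50 grid points")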

  13. A minimally sufficient model for rib proximal-distal patterning based on genetic analysis and agent-based simulations

    PubMed Central

    Mah, In Kyoung

    2017-01-01

    For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314

  14. [Mammographic screening. An analysis of the characteristics of interval carcinomas observed in the program in the province of Firenze (1989-1991)].

    PubMed

    Ciatto, S; Rosselli del Turco, M; Bonardi, R; Bianchi, S

    1994-04-01

    The authors evaluated 30 interval cancers consecutively observed from 1989 to 1991 and compared them to 98 screening-detected cancers observed in the same period. Interval cancers had a more advanced stage (stage I = 13 lesions, stage II+ = 17 lesions) than screening-detected cancers (stage 0 = 10 lesions, stage I = 61 lesions, stage II+ = 27 lesions). This finding seems unrelated to an intrinsically higher aggressiveness of interval cancers (length-biased sampling), which do not differ significantly from screening-detected cancers as far as histopathologic characteristics of prognostic value are concerned. Diagnostic delay due to technical or reading error (9 cases), or to radiologically occult cancer in clear (10 cases) or dense parenchymal areas (11 cases), is most likely. This seems to be confirmed by the low frequency observed among interval cancers of easily visible lesions such as isolated microcalcifications (3% vs. 35%) or stellate opacities (13% vs. 31%), and by the higher frequency of opacities with irregular margins (57% vs. 26%), which are more likely to be masked by dense parenchyma. The chances of reducing the interval cancer rate by attempting to increase sensitivity or by increasing screening frequency are discussed, as well as the possible negative consequences of such protocols in terms of cost-effectiveness.

  15. Synchronization controller design of two coupling permanent magnet synchronous motors system with nonlinear constraints.

    PubMed

    Deng, Zhenhua; Shang, Jing; Nian, Xiaohong

    2015-11-01

    In this paper, a system of two coupled permanent magnet synchronous motors with nonlinear constraints is studied. First of all, the mathematical model of the system is established according to engineering practice, in which the dynamic model of the motor and the nonlinear coupling effect between the two motors are considered. In order to keep the two motors synchronized, a synchronization controller based on a load observer is designed via the cross-coupling idea and an interval matrix. Moreover, the speed, position, and current signals of the two motors are all used as self-feedback and cross-feedback signals in the proposed controller, which is conducive to improving the dynamical performance and the synchronization performance of the system. The proposed control strategy is verified by simulation in Matlab/Simulink. The simulation results show that the proposed control method has a better control performance, especially synchronization performance, than that of the conventional PI controller. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
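
    A toy simulation of the cross-coupling idea (first-order motor models; the gains, time constants, and setpoint are illustrative, not the paper's observer-based design): each motor's control input combines its own tracking error with a term proportional to the synchronization error between the two motors.

      import numpy as np

      dt, T = 1e-3, 2.0
      w = np.zeros(2)                       # motor speeds
      tau = np.array([0.05, 0.08])          # mechanical time constants (differ)
      w_ref, k_p, k_sync = 100.0, 8.0, 20.0

      for step in range(int(T / dt)):
          e = w_ref - w                     # individual tracking errors
          e_sync = w[0] - w[1]              # synchronization error
          u = k_p * e + k_sync * np.array([-e_sync, e_sync])  # cross-coupling
          w += dt * (u - w) / tau           # first-order speed dynamics
      print("final speeds:", w.round(3),
            " sync error:", round(w[0] - w[1], 5))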

  16. Linearity can account for the similarity among conventional, frequency-doubling, and gabor-based perimetric tests in the glaucomatous macula.

    PubMed

    Sun, Hao; Dul, Mitchell W; Swanson, William H

    2006-07-01

    The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli.

  17. Clinical Placement Before or After Simulated Learning Environments?: A Naturalistic Study of Clinical Skills Acquisition Among Early-Stage Paramedicine Students.

    PubMed

    Mills, Brennen W; Carter, Owen B J; Rudd, Cobie J; Ross, Nathan P; Claxton, Louise A

    2015-10-01

    There is conflicting evidence surrounding the merit of clinical placements (CPs) for early-stage health-profession students. Some contend that early-stage CPs facilitate contextualization of a subsequently learned theory. Others argue that training in simulated-learning experiences (SLEs) should occur before CP to ensure that students possess at least basic competency. We sought to investigate both claims. First-year paramedicine students (n = 85) undertook 3 days of CP and SLEs as part of course requirements. Students undertook CP either before or after participation in SLEs creating 2 groups (Clin → Sim/Sim → Clin). Clinical skills acquisition was measured via direct scenario-based clinical assessments with expert observers conducted at 4 intervals during the semester. Perceptions of difficulty of CP and SLE were measured via the National Aeronautics and Space Administration Task Load Index. Students' clinical assessment scores in both groups improved significantly from beginning to end of semester (P < 0.001). However, at semester's end, clinical assessment scores for the Sim → Clin group were statistically significantly greater than those of the Clin → Sim group (P = 0.021). Both groups found SLEs more demanding than CP (P < 0.001). However, compared with the Sim → Clin group, the Clin → Sim group rated SLE as substantially more time-demanding than CP (P = 0.003). Differences in temporal demand suggest that the Clin → Sim students had fewer opportunities to practice clinical skills during CP than the Sim → Clin students due to a more limited scope of practice. The Sim → Clin students contextualized SLE within subsequent CP resulting in greater improvement in clinical competency by semester's end in comparison with the Clin → Sim students who were forced to contextualize skills retrospectively.

  18. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. The performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite sample properties of the proposed method are further illustrated by an extensive simulation study. Our results revealed that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
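
    The residual-based bootstrap is easy to sketch end to end; an AR(1) forecaster and a synthetic PDSI-like series are assumed below for concreteness (the paper's setup is more elaborate): fit the model, center the residuals, then propagate resampled residuals forward many times and read the interval off the percentiles.

      import numpy as np

      rng = np.random.default_rng(7)
      x = np.zeros(300)                      # synthetic PDSI-like series
      for t in range(1, 300):
          x[t] = 0.8 * x[t - 1] + rng.standard_normal() * 0.5

      phi = np.polyfit(x[:-1], x[1:], 1)[0]  # AR(1) coefficient (slope)
      resid = x[1:] - phi * x[:-1]
      resid -= resid.mean()                  # center the residuals

      B, h = 2000, 3                         # bootstrap replicates, horizon
      paths = np.empty(B)
      for b in range(B):
          xf = x[-1]
          for _ in range(h):                 # propagate resampled residuals
              xf = phi * xf + rng.choice(resid)
          paths[b] = xf
      lo, hi = np.percentile(paths, [2.5, 97.5])
      print(f"{h}-step 95% prediction interval: [{lo:.2f}, {hi:.2f}]")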

  19. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a possibly high placebo effect. SPCD is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
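
    The pooling step that this asymptotic result licenses: two uncorrelated standard normal stage statistics combine into a single standard normal via a prespecified weight, from which one p-value follows. The weight and statistic values below are illustrative.

      from math import sqrt
      from statistics import NormalDist

      def spcd_pvalue(z1, z2, w=0.6):
          """One-sided combined p-value from uncorrelated stage statistics."""
          z = (w * z1 + (1 - w) * z2) / sqrt(w ** 2 + (1 - w) ** 2)
          return 1 - NormalDist().cdf(z)

      print(round(spcd_pvalue(1.8, 1.1), 4))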

  20. The Photochemical Reflectance Index from Directional Cornfield Reflectances: Observations and Simulations

    NASA Technical Reports Server (NTRS)

    Cheng, Yen-Ben; Middleton, Elizabeth M.; Zhang, Qingyuan; Corp, Lawrence A.; Dandois, Jonathan; Kustas, William P.

    2012-01-01

    The two-layer Markov chain Analytical Canopy Reflectance Model (ACRM) was linked with in situ hyperspectral leaf optical properties to simulate the Photochemical Reflectance Index (PRI) for a corn crop canopy at three different growth stages. This is an extended study after a successful demonstration of PRI simulations for a cornfield previously conducted at an early vegetative growth stage. Consistent with previous in situ studies, sunlit leaves exhibited lower PRI values than shaded leaves. Since sunlit (shaded) foliage dominates the canopy in the reflectance hotspot (coldspot), the canopy PRI derived from field hyperspectral observations displayed sensitivity to both view zenith angle and relative azimuth angle at all growth stages. Consequently, sunlit and shaded canopy sectors were most differentiated when viewed along the azimuth matching the solar principal plane. These directional PRI responses associated with sunlit/shaded foliage were successfully reproduced by the ACRM. As before, the simulated PRI values from the current study were closer to in situ values when both sunlit and shaded leaves were utilized as model input data in a two-layer mode, instead of a one-layer mode with sunlit leaves only. Model performance as judged by correlation between in situ and simulated values was strongest for the mature corn crop (r = 0.87, RMSE = 0.0048), followed by the early vegetative stage (r = 0.78; RMSE = 0.0051) and the early senescent stage (r = 0.65; RMSE = 0.0104). Since the benefit of including shaded leaves in the scheme varied across different growth stages, a further analysis was conducted to investigate how variable fractions of sunlit/shaded leaves affect the canopy PRI values expected for a cornfield, with implications for remote sensing monitoring options. Simulations of the sunlit to shaded canopy ratio near 50/50 +/- 10 (e.g., 60/40) matching field observations at all growth stages were examined. Our results suggest the importance of the sunlit/shaded fraction and canopy structure in understanding and interpreting PRI.
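
    The index itself is a two-band normalized difference, conventionally defined on the 531 nm and 570 nm reflectances; given reflectance spectra this is all that is needed to reproduce the index values the model and field data are compared on. The leaf reflectances below are illustrative, not the paper's measurements.

      def pri(r531, r570):
          """Photochemical Reflectance Index from two reflectance bands."""
          return (r531 - r570) / (r531 + r570)

      # Illustrative sunlit vs shaded leaf reflectances:
      print("sunlit:", round(pri(0.048, 0.052), 4))
      print("shaded:", round(pri(0.051, 0.052), 4))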

  1. A Study of Fan Stage/Casing Interaction Models

    NASA Technical Reports Server (NTRS)

    Lawrence, Charles; Carney, Kelly; Gallardo, Vicente

    2003-01-01

    The purpose of the present study is to investigate the performance of several existing and new blade-case interaction modeling capabilities that are compatible with the large system simulations used to capture structural response during blade-out events. Three contact models are examined for simulating the interactions between a rotor bladed disk and a case: radial and linear gap elements and a new element based on a hydrodynamic formulation. The first two models are currently available in commercial finite element codes such as NASTRAN and have been shown to perform adequately for simulating rotor-case interactions. The hydrodynamic model, although not readily available in commercial codes, may prove to be better able to characterize rotor-case interactions.
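
    A minimal sketch of the radial gap element idea (penalty formulation with illustrative values; commercial implementations differ in detail): no force is produced until the blade tip closes the clearance, after which a linear contact stiffness pushes back.

      def radial_gap_force(r_tip, r_case, gap, k_contact=1e7):
          """Contact force (N) once radial penetration exceeds the clearance."""
          penetration = r_tip - (r_case - gap)
          return k_contact * penetration if penetration > 0 else 0.0

      for dr in (0.0, 0.4e-3, 0.6e-3):      # rotor radial excursions (m)
          print(f"excursion {dr*1e3:.1f} mm ->",
                radial_gap_force(0.5 + dr, 0.5005, 0.5e-3), "N")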

  2. New thermal neutron calibration channel at LNMRI/IRD

    NASA Astrophysics Data System (ADS)

    Astuto, A.; Patrão, K. C. S.; Fonseca, E. S.; Pereira, W. W.; Lopes, R. T.

    2016-07-01

    A new standard thermal neutron flux unit was designed at the National Ionizing Radiation Metrology Laboratory (LNMRI) for the calibration of neutron detectors. Fluence is achieved by moderation of four 241Am-Be sources of 0.6 TBq each in a facility built with graphite and paraffin blocks. The study was divided into two stages. First, simulations were performed using the MCNPX code in different geometric arrangements, seeking the best performance in terms of fluence and its uncertainties. Then, the system was assembled based on the results obtained from the simulations. The simulation results indicate quasi-homogeneous fluence in the central chamber and H*(10) at 50 cm from the front face with the polyethylene filter.

  3. Determining transport coefficients for a microscopic simulation of a hadron gas

    NASA Astrophysics Data System (ADS)

    Pratt, Scott; Baez, Alexander; Kim, Jane

    2017-02-01

    Quark-gluon plasmas produced in relativistic heavy-ion collisions quickly expand and cool, entering a phase consisting of multiple interacting hadronic resonances just below the QCD deconfinement temperature, T ≈ 155 MeV. Numerical microscopic simulations have emerged as the principal method for modeling the behavior of the hadronic stage of heavy-ion collisions, but the transport properties that characterize these simulations are not well understood. Methods are presented here for extracting the shear viscosity and two transport parameters that emerge in Israel-Stewart hydrodynamics. The analysis is based on studying how the stress-energy tensor responds to velocity gradients. Results are consistent with Kubo relations if viscous relaxation times are twice the collision time.
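
    For context, the classical Green-Kubo relation for shear viscosity, against which such extractions are commonly checked (written here with k_B = 1; the paper's velocity-gradient response method is an alternative route to the same coefficient):

      \[
        \eta \;=\; \frac{V}{T}\int_0^{\infty}
        \left\langle T^{xy}(t)\,T^{xy}(0) \right\rangle \, dt
      \]

    where T^{xy} is the volume-averaged shear component of the stress-energy tensor and the average is taken in equilibrium.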

  4. Intramedullary rod and cement static spacer construct in chronically infected total knee arthroplasty.

    PubMed

    Kotwal, Suhel Y; Farid, Yasser R; Patil, Suresh S; Alden, Kris J; Finn, Henry A

    2012-02-01

    Two-stage reimplantation, with interval antibiotic-impregnated cement spacer, is the preferred treatment of prosthetic knee joint infections. In medically compromised hosts with prior failed surgeries, the outcomes are poor. Articulating spacers in such patients render the knee unstable; static spacers have risks of dislocation and extensor mechanism injury. We examined 58 infected total knee arthroplasties with extensive bone and soft tissue loss, treated with resection arthroplasty and intramedullary tibiofemoral rod and antibiotic-laden cement spacer. Thirty-seven patients underwent delayed reimplantation. Most patients (83.8%) were free from recurrent infection at mean follow-up of 29.4 months. Reinfection occurred in 16.2%, which required debridement. Twenty-one patients with poor operative risks remained with the spacer for 11.4 months. All patients, during spacer phase, had brace-free ambulation with simulated tibiofemoral fusion, without bone loss or loss of limb length. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Thermal analysis of the vortex tube based thermocycler for fast DNA amplification: Experimental and two-dimensional numerical results

    NASA Astrophysics Data System (ADS)

    Raghavan, V.; Whitney, Scott E.; Ebmeier, Ryan J.; Padhye, Nisha V.; Nelson, Michael; Viljoen, Hendrik J.; Gogos, George

    2006-09-01

    In this article, experimental and numerical analyses to investigate the thermal control of an innovative vortex tube based polymerase chain reaction (VT-PCR) thermocycler are described. VT-PCR is capable of rapid DNA amplification and real-time optical detection. The device rapidly cycles six 20 μl 96 bp λ-DNA samples between the PCR stages (denaturation, annealing, and elongation) for 30 cycles in approximately 6 min. Two-dimensional numerical simulations have been carried out using the computational fluid dynamics (CFD) software FLUENT v.6.2.16. Experiments and CFD simulations have been carried out to measure/predict the temperature variation between the samples and within each sample. The heat transfer rate (primarily dictated by the temperature differences between the samples and the external air heating or cooling them) governs the temperature distribution between and within the samples. Temperature variation between and within the samples during the denaturation stage has been quite uniform (maximum variation around ±0.5 and 1.6°C, respectively). During cooling, by adjusting the cold release valves in the VT-PCR during some stage of cooling, the heat transfer rate has been controlled. Improved thermal control, which increases the efficiency of the PCR process, has been obtained both experimentally and numerically by slightly decreasing the rate of cooling. Thus, almost uniform temperature distribution between and within the samples (within 1°C) has been attained for the annealing stage as well. It is shown that the VT-PCR is a fully functional PCR machine capable of amplifying specific DNA target sequences in less time than conventional PCR devices.

  6. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste and for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu-equivalent mass (240Pu-eq). In waste assay, calibrations are made for representative waste matrices and source distributions. The actual waste, however, may have quite different matrices and source distributions compared with the calibration samples, which often biases the assay result. This paper presents a new neutron multiplicity sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered respectively by pulse pairs and by pulse triplets. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are intended to be measured by a PC-based fast time-interval analyser. The Rossi-alpha distributions can easily be expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu-eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is theoretically much simpler and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
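
    As a concrete illustration of the one-dimensional case, the sketch below builds a Rossi-alpha histogram from a list of detector pulse time stamps: every pulse acts as a trigger, and the delays to all subsequent pulses within an inspection window are histogrammed. This is a minimal sketch, not the paper's analyser; the window, bin width, and toy pulse train are invented for illustration.

```python
import numpy as np

def rossi_alpha_1d(pulse_times, window=512e-6, bin_width=8e-6):
    """One-dimensional Rossi-alpha distribution: histogram of delays
    from each trigger pulse to every later pulse inside `window`."""
    t = np.sort(np.asarray(pulse_times))
    edges = np.arange(0.0, window + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for i, t0 in enumerate(t):
        j = np.searchsorted(t, t0 + window, side="right")
        counts += np.histogram(t[i + 1:j] - t0, bins=edges)[0]
    return edges, counts

# Toy pulse train: Poisson background plus short-delay correlated pulses.
rng = np.random.default_rng(0)
bg = np.cumsum(rng.exponential(100e-6, size=20_000))
correlated = bg[::10] + rng.exponential(30e-6, size=bg[::10].size)
edges, counts = rossi_alpha_1d(np.concatenate([bg, correlated]))
```

    Real coincidences appear as an exponential excess at short delays on top of the flat floor of accidentals, which is what makes the real/accidental discrimination straightforward.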

  7. An estimator of the survival function based on the semi-Markov model under dependent censorship.

    PubMed

    Lee, Seung-Yeoun; Tsai, Wei-Yann

    2005-06-01

    Lee and Wolfe (Biometrics vol. 54, pp. 1176-1178, 1998) proposed a two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator of the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator of the survivor function is proposed for the semi-Markov model under dependent censorship, on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example from a lung cancer clinical trial, and simulation results on the mean squared errors of the estimators under a proportional hazards model and two different nonproportional hazards models are reported.

  8. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
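
    The full MCP-Mod machinery is beyond a short example, but the way stage-wise evidence is pooled can be illustrated with the standard inverse-normal combination test, a common building block for such adaptive two-stage designs. The equal weights and one-sided significance level below are assumptions, not the authors' choices.

```python
import numpy as np
from scipy.stats import norm

def combine_two_stage(p1, p2, w1=np.sqrt(0.5), w2=np.sqrt(0.5), alpha=0.025):
    """Inverse-normal combination of stage-wise p-values: reject the
    global null when the weighted sum of z-scores is large enough."""
    z = (w1 * norm.isf(p1) + w2 * norm.isf(p2)) / np.hypot(w1, w2)
    p_combined = norm.sf(z)
    return p_combined, p_combined < alpha

p_comb, reject = combine_two_stage(p1=0.04, p2=0.03)
print(f"combined p = {p_comb:.4f}, reject global H0: {reject}")
```

    Because the combination function is fixed in advance, arms can be added or dropped between the stages without inflating the type I error.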

  9. Relaxing the rule of ten events per variable in logistic and Cox regression.

    PubMed

    Vittinghoff, Eric; McCulloch, Charles E

    2007-03-15

    The rule of thumb that logistic and Cox models should be used with a minimum of 10 outcome events per predictor variable (EPV), based on two simulation studies, may be too conservative. The authors conducted a large simulation study of other influences on confidence interval coverage, type I error, relative bias, and other model performance measures. They found a range of circumstances in which coverage and bias were within acceptable levels despite less than 10 EPV, as well as other factors that were as influential as or more influential than EPV. They conclude that this rule can be relaxed, in particular for sensitivity analyses undertaken to demonstrate adequate control of confounding.
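
    The kind of simulation behind such EPV recommendations is easy to sketch: generate logistic data with one known coefficient, fit the model, and record how often the 95% confidence interval covers the truth at different events-per-variable levels. A minimal sketch with invented effect sizes and sample sizes, assuming statsmodels is available:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
beta = np.array([0.5] + [0.0] * 9)          # 10 predictors, one real effect

def ci_coverage(n, n_sims=200):
    hits = fits = 0
    epv = []
    for _ in range(n_sims):
        X = rng.normal(size=(n, 10))
        p = 1 / (1 + np.exp(-(-1.5 + X @ beta)))   # ~18% event rate
        y = (rng.random(n) < p).astype(float)
        try:
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
        except Exception:
            continue                               # skip rare separated fits
        lo, hi = fit.conf_int()[1]                 # CI for the real effect
        hits += lo <= beta[0] <= hi
        fits += 1
        epv.append(y.sum() / 10)
    return hits / fits, np.mean(epv)

for n in (150, 300, 600):
    cov, epv = ci_coverage(n)
    print(f"n={n}: EPV ≈ {epv:.1f}, 95% CI coverage {cov:.2f}")
```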

  10. Spectrum Sharing Based on a Bertrand Game in Cognitive Radio Sensor Networks

    PubMed Central

    Zeng, Biqing; Zhang, Chi; Hu, Pianpian; Wang, Shengyu

    2017-01-01

    In studies of power control and allocation based on pricing, the utility of secondary users is usually examined from the perspective of the signal-to-noise ratio. Studying secondary-user utility from the perspective of communication demand can both encourage secondary users to meet their maximum communication needs and maximize the utilization of spectrum resources; however, research in this area is lacking. From the viewpoint of meeting network communication demand, this paper therefore designs a two-stage model to solve the spectrum leasing and allocation problem in cognitive radio sensor networks (CRSNs). In the first stage, the secondary base station collects the secondary network communication requirements and rents spectrum resources from several primary base stations, using a Bertrand game to model the transaction behavior of the primary and secondary base stations. In the second stage, the subcarrier and power allocation problem of the secondary base station is formulated as a nonlinear programming problem solved by Nash bargaining. The simulation results show that the proposed model can satisfy the communication requirements of each user in a fair and efficient way compared with other spectrum sharing schemes. PMID:28067850
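
    The first-stage price competition can be sketched with a textbook differentiated Bertrand duopoly: each primary base station sets the price that maximizes its profit given the other's price, and iterating the best responses converges to the Nash equilibrium. All demand and cost parameters below are illustrative, not values from the paper.

```python
# Differentiated Bertrand price competition between two primary base
# stations leasing spectrum. Demand for seller i: q_i = a - b*p_i + d*p_j.
a, b, d, c = 10.0, 2.0, 1.0, 1.0     # demand intercept/slopes, unit cost

def best_response(p_other):
    # argmax over p of (p - c) * (a - b*p + d*p_other)
    return (a + d * p_other + b * c) / (2 * b)

p1 = p2 = c                           # start prices at cost
for _ in range(100):                  # iterate to the fixed point
    p1, p2 = best_response(p2), best_response(p1)

print(f"equilibrium prices: p1 = {p1:.3f}, p2 = {p2:.3f}")
```

    With d < b the best-response map is a contraction, so the iteration converges regardless of the starting prices.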

  11. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-10-15

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using a GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The GA's handling of the encoding technique and constraint conditions reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are used effectively to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code with coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is LP optimization alone, applying the Haling technique. The other is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies were obtained within a reasonable computation time.
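
    A minimal version of such a GA loop with elitism, one-point crossover, and mutation is sketched below; the quadratic toy fitness stands in for the three-dimensional core simulator, and all population sizes and rates are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    """Stand-in for the coupled neutronic/thermal-hydraulic evaluation."""
    return -np.sum((x - 0.5) ** 2)

def evolve(pop, n_elite=2, p_mut=0.05):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]           # best candidates first
    elite = pop[order[:n_elite]]               # elitism: keep best as-is
    children = []
    while len(children) < len(pop) - n_elite:
        i, j = rng.choice(order[: len(pop) // 2], size=2, replace=False)
        cut = rng.integers(1, pop.shape[1])    # one-point crossover
        child = np.concatenate([pop[i][:cut], pop[j][cut:]])
        mask = rng.random(child.size) < p_mut  # mutation
        child[mask] = rng.random(mask.sum())
        children.append(child)
    return np.vstack([elite, np.array(children)])

pop = rng.random((20, 10))                     # 20 candidate "loading patterns"
for _ in range(50):
    pop = evolve(pop)
print("best fitness:", max(fitness(ind) for ind in pop))
```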

  12. Does cemented or cementless single-stage exchange arthroplasty of chronic periprosthetic hip infections provide similar infection rates to a two-stage? A systematic review.

    PubMed

    George, D A; Logoluso, N; Castellini, G; Gianola, S; Scarponi, S; Haddad, F S; Drago, L; Romano, C L

    2016-10-10

    The best surgical modality for treating chronic periprosthetic hip infections remains controversial, with a lack of randomised controlled studies. The aim of this systematic review is to compare the infection recurrence rate after a single-stage versus a two-stage exchange arthroplasty, and after cemented versus cementless single-stage exchange arthroplasty, for chronic periprosthetic hip infections. We searched for eligible studies published up to December 2015. Full texts or abstracts in English were reviewed. We included studies reporting the infection recurrence rate as the outcome of interest following single- or two-stage exchange arthroplasty, or both, with a minimum follow-up of 12 months. Two reviewers independently abstracted data and appraised quality. After study selection, 90 observational studies were included. The majority of studies focused on two-stage hip exchange arthroplasty (65%), 18% on single-stage exchange, and only 17% were comparative studies. There was no statistically significant difference between a single-stage and a two-stage exchange in terms of recurrence of infection in controlled studies (pooled odds ratio 1.37 [95% CI = 0.68-2.74, I² = 45.5%]). Similarly, the infection recurrence rate in cementless versus cemented single-stage hip exchanges failed to demonstrate a significant difference, owing to the substantial heterogeneity among the studies. Despite the methodological limitations and the heterogeneity between single-cohort studies, when only the available controlled studies were considered, no superiority was demonstrated between a single- and a two-stage exchange at a minimum of 12 months' follow-up. The overlapping confidence intervals for cementless and cemented single-stage hip exchanges likewise showed no superiority of either technique.
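
    The summary statistics quoted here, a pooled odds ratio with its confidence interval and Higgins' I² for heterogeneity, follow from standard inverse-variance meta-analysis formulas. A minimal sketch with made-up study data:

```python
import numpy as np

# Log odds ratios and standard errors from five hypothetical studies.
log_or = np.log([1.8, 0.9, 1.4, 0.7, 1.2])
se = np.array([0.40, 0.35, 0.50, 0.30, 0.45])

w = 1 / se**2                                 # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)       # fixed-effect pooled log OR
q = np.sum(w * (log_or - pooled) ** 2)        # Cochran's Q
i2 = max(0.0, (q - (len(se) - 1)) / q) * 100  # Higgins' I^2, in percent

ci = np.exp(pooled + np.array([-1.96, 1.96]) / np.sqrt(np.sum(w)))
print(f"pooled OR {np.exp(pooled):.2f} "
      f"[95% CI {ci[0]:.2f}-{ci[1]:.2f}], I² = {i2:.1f}%")
```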

  13. Stochastic flux analysis of chemical reaction networks

    PubMed Central

    2013-01-01

    Background Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. Results We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux-behavior, and compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. Conclusions We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network. PMID:24314153
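
    The core bookkeeping idea, tagging each molecule with the reaction that produced it so that consumption events define reaction-to-reaction fluxes, can be sketched on a two-reaction toy network. This illustrates the general idea, not the authors' data structures; the network, names, and rates are invented.

```python
import numpy as np
from collections import Counter, deque

rng = np.random.default_rng(0)

# Toy network: R0: A -> B and R1: B -> A (illustrative rates).
rates = [1.0, 0.5]
state = {"A": 100, "B": 0}
tags = {"A": deque(["init"] * 100), "B": deque()}  # producer tag per molecule
flux = Counter()                                   # (producer, consumer) counts

t, t_end = 0.0, 50.0
while t < t_end:
    a = [rates[0] * state["A"], rates[1] * state["B"]]  # propensities
    a0 = sum(a)
    if a0 == 0:
        break
    t += rng.exponential(1 / a0)                   # Gillespie time step
    r = 0 if rng.random() < a[0] / a0 else 1       # pick the firing reaction
    src, dst = ("A", "B") if r == 0 else ("B", "A")
    state[src] -= 1
    state[dst] += 1
    flux[(tags[src].popleft(), r)] += 1            # record producer -> consumer
    tags[dst].append(r)                            # tag product with reaction id

print(flux.most_common(3))   # dominant species flows between reactions
```

    Aggregating `flux` over chosen time windows gives exactly the kind of interval-resolved, reaction-to-reaction flow statistics described above.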

  14. Stochastic flux analysis of chemical reaction networks.

    PubMed

    Kahramanoğulları, Ozan; Lynch, James F

    2013-12-07

    Chemical reaction networks provide an abstraction scheme for a broad range of models in biology and ecology. The two common means for simulating these networks are the deterministic and the stochastic approaches. The traditional deterministic approach, based on differential equations, enjoys a rich set of analysis techniques, including a treatment of reaction fluxes. However, the discrete stochastic simulations, which provide advantages in some cases, lack a quantitative treatment of network fluxes. We describe a method for flux analysis of chemical reaction networks, where flux is given by the flow of species between reactions in stochastic simulations of the network. Extending discrete event simulation algorithms, our method constructs several data structures, and thereby reveals a variety of statistics about resource creation and consumption during the simulation. We use these structures to quantify the causal interdependence and relative importance of the reactions at arbitrary time intervals with respect to the network fluxes. This allows us to construct reduced networks that have the same flux-behavior, and compare these networks, also with respect to their time series. We demonstrate our approach on an extended example based on a published ODE model of the same network, that is, Rho GTP-binding proteins, and on other models from biology and ecology. We provide a fully stochastic treatment of flux analysis. As in deterministic analysis, our method delivers the network behavior in terms of species transformations. Moreover, our stochastic analysis can be applied, not only at steady state, but at arbitrary time intervals, and used to identify the flow of specific species between specific reactions. Our case study of Rho GTP-binding proteins reveals the role played by the cyclic reverse fluxes in tuning the behavior of this network.

  15. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

    This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset net present value (NPV). The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
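
    The proxy-based loop, fit a cheap radial basis function surrogate to the evaluations so far, optimize the surrogate, then spend one true simulation on the proposed point, can be sketched as follows. This is a minimal sketch assuming SciPy's RBFInterpolator; the toy objective stands in for the reservoir simulator.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_npv(x):
    """Stand-in for the expensive reservoir simulation (higher is better)."""
    return -np.sum((x - 0.3) ** 2, axis=-1)

dim, n_init, n_iter = 8, 20, 30
X = rng.random((n_init, dim))                # initial space-filling sample
y = expensive_npv(X)

for _ in range(n_iter):
    proxy = RBFInterpolator(X, y, kernel="thin_plate_spline")
    cand = rng.random((2000, dim))           # cheap candidate pool
    best = cand[np.argmax(proxy(cand))]      # optimize the proxy, not the truth
    X = np.vstack([X, best])
    y = np.append(y, expensive_npv(best))    # one true evaluation per iteration

print(f"best NPV found: {y.max():.4f} after {len(y)} true evaluations")
```

    Each cycle costs only one expensive evaluation, which is what makes several-hundred-variable problems tractable; production implementations add trust regions and restart logic on top of this loop.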

  16. Phytoplankton assemblages and lipid biomarkers indicate sea-surface warming and sea-ice decline in the Ross Sea during Marine Isotope sub-Stage 5e

    NASA Astrophysics Data System (ADS)

    Hartman, Julian D.; Sangiorgi, Francesca; Peterse, Francien; Barcena, Maria A.; Albertazzi, Sonia; Asioli, Alessandra; Giglio, Federico; Langone, Leonardo; Tateo, Fabio; Trincardi, Fabio

    2016-04-01

    Marine Isotope sub-Stage 5e (~125-119 kyr BP), the last interglacial period before the present, is believed to have been globally warmer (~2°C) than today. Studying this time interval might therefore provide insights into the near-future climate state, given ongoing climate change and global temperature increase. Of particular interest are the expected changes in polar ice cover. One important aspect of the cryosphere is sea ice, which influences albedo, deep and surface water currents, and phytoplankton production, and thus affects the global climate system. To investigate whether changes in sea-ice cover occurred in the Southern Ocean close to Antarctica during Marine Isotope sub-Stage 5e, dinoflagellate and diatom assemblages have been analyzed in core AS05-10, drilled on the continental slope off the Drygalski basin (Ross Sea) at a water depth of 2377 m. The core was drilled within the frame of the PNRA 2009/A2.01 project, an Italian project with a multidisciplinary approach, and covers the interval from the present to Marine Isotope Stage (MIS) 7. The core stratigraphy is based on diatom bioevents and on the climate cyclicity provided by the variations of the diatom assemblages. For this study we focused on the interval from MIS7 to MIS5. A strong reduction of sea-ice-associated diatom taxa relative to open-water diatom taxa is observed during MIS5. In general, phytoplankton production increases at the base of MIS5 and then slowly decreases. Dinoflagellate cysts, particularly heterotrophic species, are abundant during MIS5e only. The sea surface temperature reconstruction based on TEX86L, a proxy based on lipid biomarkers produced by Thaumarchaeota, shows a 4°C temperature increase from MIS6 to MIS5e. A slightly smaller temperature increase is observed at the onset of MIS7, but this stage is barren of heterotrophic dinoflagellates. Together, the proxies indicate that the retreat of the summer sea ice in the Ross Sea during MIS5e was likely greater than during MIS7.

  17. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    NASA Astrophysics Data System (ADS)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

    This paper introduces our recent experimental results for pulse tube refrigerators driven by linear compressors. The working frequency is 23-30 Hz, much higher than that of G-M type coolers (the developed cryocoolers are referred to here as high-frequency pulse tube refrigerators, HPTRs). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built and tested. At present, both types achieve a no-load temperature below 10 K using only one compressor. For the gas-coupled HPTR, the second stage achieves a cooling power of 16 mW at 10 K when a 400 mW heat load is applied to the first stage at 60 K, with a total input power of 400 W. For the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage can then reach a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K has been achieved with a stainless steel mesh regenerator. Using Er3Ni spheres with diameters of about 50-60 μm, simulation results show it is possible to achieve a temperature below 8 K. The configurations, phase shifters and regenerative materials of the two types of two-stage high-frequency pulse tube refrigerator are discussed, and typical experimental results and considerations for achieving better performance are also presented.

  18. Vitrification versus slow freezing gives excellent survival, post warming embryo morphology and pregnancy outcomes for human cleaved embryos.

    PubMed

    Rezazadeh Valojerdi, Mojtaba; Eftekhari-Yazdi, Poopak; Karimian, Leila; Hassani, Fatemeh; Movaghar, Bahar

    2009-06-01

    The objective of this retrospective study was to evaluate the efficacy of vitrification and slow freezing for the cryopreservation of human cleavage-stage embryos in terms of post-warming survival rate, post-warming embryo morphology and clinical outcomes. The embryos of 305 patients at cleavage stages were cryopreserved either with vitrification (153 patients) or slow-freezing (152 patients) methods. After warming, the survival rate, post-warmed embryo morphology, clinical pregnancy and implantation rates were evaluated and compared between the two groups. In the vitrification group versus the slow freezing group, the survival rate (96.9% vs. 82.8%) and the post-warmed excellent morphology with all blastomeres intact (91.8% vs. 56.2%) were higher, with odds ratios of 6.607 (95% confidence interval: 4.184-10.434) and 8.769 (95% confidence interval: 6.460-11.904), respectively. In this group, the clinical pregnancy rate (40.5% vs. 21.4%) and the implantation rate (16.6% vs. 6.8%) were also higher, with odds ratios of 2.427 (95% confidence interval: 1.461-4.033) and 2.726 (95% confidence interval: 1.837-4.046), respectively. Vitrification, in contrast to slow freezing, is an efficient method for cryopreservation of human cleavage-stage embryos. Vitrification provides a higher survival rate and minimal deleterious effects on post-warming embryo morphology, and it can improve clinical outcomes.
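
    Odds ratios and Wald confidence intervals of the kind reported here come straight from the 2x2 table. A minimal sketch; the counts below are hypothetical, chosen only to be consistent in spirit with the quoted 96.9% vs. 82.8% survival rates.

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a/b = events/non-events in group 1, c/d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = np.exp(np.log(or_) + np.array([-z, z]) * se_log)
    return or_, (lo, hi)

# Hypothetical counts: 950/980 embryos surviving vs. 820/990 surviving.
or_, (lo, hi) = odds_ratio_ci(a=950, b=30, c=820, d=170)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```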

  19. Efficient coarse simulation of a growing avascular tumor

    PubMed Central

    Kavousanakis, Michail E.; Liu, Ping; Boudouvis, Andreas G.; Lowengrub, John; Kevrekidis, Ioannis G.

    2013-01-01

    The subject of this work is the development and implementation of algorithms which accelerate the simulation of early stage tumor growth models. Among the different computational approaches used for the simulation of tumor progression, discrete stochastic models (e.g., cellular automata) have been widely used to describe processes occurring at the cell and subcell scales (e.g., cell-cell interactions and signaling processes). To describe macroscopic characteristics (e.g., morphology) of growing tumors, large numbers of interacting cells must be simulated. However, the high computational demands of stochastic models make the simulation of large-scale systems impractical. Alternatively, continuum models, which can describe behavior at the tumor scale, often rely on phenomenological assumptions in place of rigorous upscaling of microscopic models. This limits their predictive power. In this work, we circumvent the derivation of closed macroscopic equations for the growing cancer cell populations; instead, we construct, based on the so-called “equation-free” framework, a computational superstructure, which wraps around the individual-based cell-level simulator and accelerates the computations required for the study of the long-time behavior of systems involving many interacting cells. The microscopic model, e.g., a cellular automaton, which simulates the evolution of cancer cell populations, is executed for relatively short time intervals, at the end of which coarse-scale information is obtained. These coarse variables evolve on slower time scales than each individual cell in the population, enabling the application of forward projection schemes, which extrapolate their values at later times. This technique is referred to as coarse projective integration. Increasing the ratio of projection times to microscopic simulator execution times enhances the computational savings. Crucial accuracy issues arising for growing tumors with radial symmetry are addressed by applying the coarse projective integration scheme in a cotraveling (cogrowing) frame. As a proof of principle, we demonstrate that the application of this scheme yields highly accurate solutions, while preserving the computational savings of coarse projective integration. PMID:22587128
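
    Coarse projective integration itself is compact enough to sketch: run the microscopic simulator for two short bursts, estimate the time derivative of the coarse variable from the difference, and take one large extrapolation step. In the sketch below a noisy logistic law stands in for the cell-level simulator, and all step sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def micro_burst(u, n_steps=20, dt=0.01):
    """Stand-in for the stochastic cell-level simulator: the coarse
    variable u (e.g., scaled tumor radius) follows noisy logistic growth."""
    for _ in range(n_steps):
        u = u + dt * u * (1.0 - u) + 0.001 * rng.normal()
    return u

def projective_step(u, n_steps=20, dt=0.01, project=1.0):
    """Two short microscopic bursts, then extrapolate (projective Euler)."""
    u0 = micro_burst(u, n_steps, dt)
    u1 = micro_burst(u0, n_steps, dt)
    slope = (u1 - u0) / (n_steps * dt)     # estimated coarse time derivative
    return u1 + project * slope            # leap over `project` time units

u, t = 0.05, 0.0
while t < 15.0:
    u = projective_step(u)
    t += 2 * 20 * 0.01 + 1.0               # two bursts plus the projection
print(f"coarse variable at t ≈ 15: {u:.3f}")
```

    Increasing `project` relative to the burst length raises the computational savings, at the cost of the stability and accuracy issues the paper addresses (for radially symmetric tumors, by projecting in a co-growing frame).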

  20. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The loss of accuracy in lineal extent estimates with increasing sampling interval varied across impact types, whereas the loss in frequency-of-occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question than by the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
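
    The resampling-simulation idea is simple to reproduce: treat the census as ground truth, subsample it at increasing point intervals, and compare the scaled-up estimates against the census value. A minimal sketch with a synthetic trail (real impacts are spatially clustered, which this toy ignores):

```python
import numpy as np

rng = np.random.default_rng(0)

# Census: impact presence at every 1-m point along a 10-km trail.
trail = rng.random(10_000) < 0.03           # ~3% of points show the impact
true_extent = trail.sum()                   # lineal extent, in metres

for interval in (5, 20, 50, 100, 200, 500):
    sample = trail[::interval]              # systematic point sample
    est_extent = sample.mean() * trail.size # scale back up to the full trail
    err = 100 * (est_extent - true_extent) / true_extent
    print(f"{interval:>4} m interval: estimated extent {est_extent:8.0f} m "
          f"({err:+6.1f}% error)")
```

    Rerunning this over many simulated trails traces out exactly the kind of accuracy-versus-interval curves the study reports.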

  1. Development of a hydraulic model and flood-inundation maps for the Wabash River near the Interstate 64 Bridge near Grayville, Illinois

    USGS Publications Warehouse

    Boldt, Justin A.

    2018-01-16

    A two-dimensional hydraulic model and digital flood-inundation maps were developed for a 30-mile reach of the Wabash River near the Interstate 64 Bridge near Grayville, Illinois. The flood-inundation maps, which can be accessed through the U.S. Geological Survey (USGS) Flood Inundation Mapping Science web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgage on the Wabash River at Mount Carmel, Ill. (USGS station number 03377500). Near-real-time stages at this streamgage may be obtained on the internet from the USGS National Water Information System at http://waterdata.usgs.gov/ or the National Weather Service (NWS) Advanced Hydrologic Prediction Service (AHPS) at http://water.weather.gov/ahps/, which also forecasts flood hydrographs at this site (NWS AHPS site MCRI2). The NWS AHPS forecasts peak stage information that may be used with the maps developed in this study to show predicted areas of flood inundation. Flood elevations were computed for the Wabash River reach by means of a two-dimensional, finite-volume numerical modeling application for river hydraulics. The hydraulic model was calibrated by using global positioning system measurements of water-surface elevation and the current stage-discharge relation at both USGS streamgage 03377500, Wabash River at Mount Carmel, Ill., and USGS streamgage 03378500, Wabash River at New Harmony, Indiana. The calibrated hydraulic model was then used to compute 27 water-surface elevations for flood stages at 1-foot (ft) intervals referenced to the streamgage datum and ranging from less than the action stage (9 ft) to the highest stage (35 ft) of the current stage-discharge rating curve. The simulated water-surface elevations were then combined with a geographic information system digital elevation model, derived from light detection and ranging data, to delineate the area flooded at each water level. The availability of these maps, along with information on the internet regarding current stage from the USGS streamgage at Mount Carmel, Ill., and forecasted stream stages from the NWS AHPS, provides emergency management personnel and residents with information that is critical for flood-response activities such as evacuations and road closures, as well as for postflood recovery efforts.

  2. Simulation of stationary glow patterns in dielectric barrier discharges at atmospheric pressure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Fucheng, E-mail: hdlfc@hbu.cn; He, Yafeng; Dong, Lifang

    2014-12-15

    Self-organized stationary patterns in dielectric barrier discharges operating in the glow regime at atmospheric pressure are investigated by a self-consistent two-dimensional fluid model. The simulation results show that two different modes, namely the diffuse mode and the static patterned mode, can be formed in different ranges of the driving frequency. The discharge operates in a Townsend regime in the diffuse mode, while in the stable patterned mode it operates in a glow regime inside the filaments and in a Townsend regime outside the filaments. The forming process of the stationary filaments can be divided into three stages, namely a destabilizing stage, a self-assembling stage, and a stable stage. The space charge associated with residual electron density and surface charge is responsible for the formation of these stationary glow patterns.

  3. Maintenance therapy with toceranib following doxorubicin-based chemotherapy for canine splenic hemangiosarcoma.

    PubMed

    Gardner, Heather L; London, Cheryl A; Portela, Roberta A; Nguyen, Sandra; Rosenberg, Mona P; Klein, Mary K; Clifford, Craig; Thamm, Douglas H; Vail, David M; Bergman, Phil; Crawford-Jakubiak, Martin; Henry, Carolyn; Locke, Jennifer; Garrett, Laura D

    2015-06-11

    Splenic hemangiosarcoma (HSA) in dogs treated with surgery alone is associated with short survival times, and the addition of doxorubicin (DOX) chemotherapy only modestly improves outcome. The purpose of this study was to evaluate the impact of toceranib administration on progression-free survival in dogs with stage I or II HSA following splenectomy and single-agent DOX chemotherapy. We hypothesized that dogs with splenic HSA treated with adjuvant DOX followed by toceranib would have a prolonged disease-free interval (DFI) and overall survival time (OS) compared with historical dogs treated with DOX-based chemotherapy alone. Dogs with stage I or II splenic HSA were administered 5 cycles of single-agent DOX every 2 weeks beginning within 14 days of splenectomy. Dogs were restaged 2 weeks after completing DOX, and those without evidence of metastatic disease began toceranib therapy at 3.25 mg/kg every other day. Forty-three dogs were enrolled in this clinical trial. Seven dogs had evidence of metastatic disease either before or at re-staging, and an additional 3 dogs were found to have metastatic disease within 1 week of toceranib administration. Therefore, 31 dogs went on to receive toceranib following completion of doxorubicin treatment. Twenty-five dogs that received toceranib developed metastatic disease. The median disease-free interval for all dogs enrolled in this study (n = 43) was 138 days, and the median disease-free interval for those dogs that went on to receive toceranib (n = 31) was 161 days. The median survival time for all dogs enrolled in this study was 169 days, and the median survival time for those dogs that went on to receive toceranib was 172 days. The use of toceranib following DOX chemotherapy does not improve either disease-free interval or overall survival in dogs with stage I or II HSA.

  4. Simulation-based modeling of building complexes construction management

    NASA Astrophysics Data System (ADS)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  5. Fear conditioning is associated with dynamic directed functional interactions between and within the human amygdala, hippocampus, and frontal lobe.

    PubMed

    Liu, C C; Crone, N E; Franaszczuk, P J; Cheng, D T; Schretlen, D S; Lenz, F A

    2011-08-25

    The current model of fear conditioning suggests that it is mediated through modules involving the amygdala (AMY), hippocampus (HIP), and frontal lobe (FL). We now test the hypothesis that habituation and acquisition stages of a fear conditioning protocol are characterized by different event-related causal interactions (ERCs) within and between these modules. The protocol used the painful cutaneous laser as the unconditioned stimulus, and ERC was estimated by analysis of local field potentials recorded through electrodes implanted for investigation of epilepsy. During the prestimulus interval of the habituation stage, FL>AMY ERC interactions were common. For comparison, in the poststimulus interval of the habituation stage, only a subdivision of the FL (dorsolateral prefrontal cortex, dlPFC) still exerted the FL>AMY ERC interaction (dlPFC>AMY). For a further comparison, during the poststimulus interval of the acquisition stage, the dlPFC>AMY interaction persisted and an AMY>FL interaction appeared. In addition to these ERC interactions between modules, the results also show ERC interactions within modules. During the poststimulus interval, HIP>HIP ERC interactions were more common during acquisition, and deep hippocampal contacts exerted causal interactions on superficial contacts, possibly explained by connectivity between the perihippocampal gyrus and the HIP. During the prestimulus interval of the habituation stage, AMY>AMY ERC interactions were commonly found, while interactions between the deep and superficial AMY (indirect pathway) were independent of intervals and stages. These results suggest that the network subserving fear includes distributed or widespread modules, some of which are themselves "local networks." ERC interactions between and within modules can be either static or change dynamically across intervals or stages of fear conditioning. Copyright © 2011 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. The cost-effectiveness of using chronic kidney disease risk scores to screen for early-stage chronic kidney disease.

    PubMed

    Yarnoff, Benjamin O; Hoerger, Thomas J; Simpson, Siobhan K; Leib, Alyssa; Burrows, Nilka R; Shrestha, Sundar S; Pavkov, Meda E

    2017-03-13

    Better treatment during early stages of chronic kidney disease (CKD) may slow progression to end-stage renal disease and decrease associated complications and medical costs. Achieving early treatment of CKD is challenging, however, because a large fraction of persons with CKD are unaware of having this disease. Screening for CKD is one important method for increasing awareness. We examined the cost-effectiveness of identifying persons for early-stage CKD screening (i.e., screening for moderate albuminuria) using published CKD risk scores. We used the CKD Health Policy Model, a micro-simulation model, to simulate the cost-effectiveness of using two published CKD risk scores, by Bang et al. and Kshirsagar et al., to identify persons in the US for CKD screening with testing for albuminuria. Alternative risk score thresholds were tested (0.20, 0.15, 0.10, 0.05, and 0.02), above which persons were assigned to receive screening, at alternative intervals (1-, 2-, and 5-year) for follow-up screening if the first screening was negative. We examined incremental cost-effectiveness ratios (ICERs), incremental lifetime costs divided by incremental lifetime QALYs, relative to the next higher screening threshold to assess cost-effectiveness. Cost-effective scenarios were determined as those with ICERs less than $50,000 per QALY. Among the cost-effective scenarios, the optimal scenario was determined as the one that resulted in the highest lifetime QALYs. ICERs ranged from $8,823 per QALY to $124,626 per QALY for the Bang et al. risk score and from $6,342 per QALY to $405,861 per QALY for the Kshirsagar et al. risk score. The Bang et al. risk score with a threshold of 0.02 and 2-year follow-up screening was found to be optimal because it had an ICER less than $50,000 per QALY and resulted in the highest lifetime QALYs. This study indicates that using these CKD risk scores may allow clinicians to cost-effectively identify a broader population for CKD screening with testing for albuminuria and potentially detect people with CKD at earlier stages of the disease than current approaches of screening only persons with diabetes or hypertension.
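
    The selection rule, compute each scenario's ICER against the next higher (less inclusive) threshold, keep those under the willingness-to-pay limit, and among them pick the one with the most lifetime QALYs, can be sketched as follows. All cost and QALY values are invented, and a real analysis would also remove dominated strategies first.

```python
# (threshold, lifetime cost per person, lifetime QALYs per person)
scenarios = [
    (0.20, 1000.0, 10.000),
    (0.15, 1150.0, 10.004),
    (0.10, 1400.0, 10.010),
    (0.05, 1900.0, 10.019),
    (0.02, 2600.0, 10.030),
]
wtp = 50_000.0                       # willingness-to-pay per QALY

best = scenarios[0]
for prev, cur in zip(scenarios, scenarios[1:]):
    d_cost = cur[1] - prev[1]
    d_qaly = cur[2] - prev[2]
    icer = d_cost / d_qaly           # ICER vs. next higher threshold
    print(f"threshold {cur[0]:.2f}: ICER ${icer:,.0f}/QALY")
    if icer < wtp:
        best = cur                   # cost-effective with more QALYs
print(f"optimal threshold: {best[0]:.2f}")
```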

  7. Retention of laparoscopic and robotic skills among medical students: a randomized controlled trial.

    PubMed

    Orlando, Megan S; Thomaier, Lauren; Abernethy, Melinda G; Chen, Chi Chiung Grace

    2017-08-01

    Although simulation training beneficially contributes to traditional surgical training, there are fewer objective data on the retention of simulation skills. To investigate the retention of laparoscopic and robotic skills after simulation training, we present the second stage of a randomized single-blinded controlled trial in which 40 simulation-naïve medical students were randomly assigned to practice peg transfer tasks on either laparoscopic (N = 20, Fundamentals of Laparoscopic Surgery, Venture Technologies Inc., Waltham, MA) or robotic (N = 20, dV-Trainer, Mimic, Seattle, WA) platforms. In the first stage, two expert surgeons evaluated participants on both tasks before (Stage 1: Baseline) and immediately after training (Stage 1: Post-training) using a modified validated global rating scale of laparoscopic and robotic operative performance. In Stage 2, participants were evaluated on both tasks 11-20 weeks after training. Of the 40 students who participated in Stage 1, 23 (11 laparoscopic and 12 robotic) underwent repeat evaluation. During Stage 2, there were no significant differences between groups in objective or subjective measures for the laparoscopic task. Laparoscopic-trained participants' performance on the laparoscopic task was improved during Stage 2 compared with baseline as measured by time to task completion, but not by the modified global rating scale. During the robotic task, the robotic-trained group demonstrated superior economy of motion (p = .017), tissue handling (p = .020), and fewer errors (p = .018) compared with the laparoscopic-trained group. Among robotic-trained participants, skills acquired from baseline showed no significant deterioration during Stage 2, as measured by modified global rating scale scores. Robotic skills acquired through simulation appear to be better maintained than laparoscopic simulation skills. This study is registered on ClinicalTrials.gov (NCT02370407).

  8. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    NASA Technical Reports Server (NTRS)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation where pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace, and on some trials for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms, or controller-managed spacing, not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.

  9. Experimental and Numerical Investigation of Guest Molecule Exchange Kinetics based on the 2012 Ignik Sikumi Gas Hydrate Field Trial

    NASA Astrophysics Data System (ADS)

    Ruprecht Yonkofski, C. M.; Horner, J.; White, M. D.

    2015-12-01

    In 2012 the U.S. DOE/NETL, ConocoPhillips Company, and Japan Oil, Gas and Metals National Corporation jointly sponsored the first field trial of injecting a mixture of N2-CO2 into a CH4-hydrate-bearing formation beneath the permafrost on the Alaska North Slope. Known as the Ignik Sikumi #1 Gas Hydrate Field Trial, this experiment involved three stages: 1) the injection of a N2-CO2 mixture into a targeted hydrate-bearing layer, 2) a 4-day pressurized soaking period, and 3) a sustained depressurization and fluid production period. Data collected during the three stages of the field trial were made available after a thorough quality check. The Ignik Sikumi #1 data set is extensive, but contains no direct evidence of the guest-molecule exchange process. This study uses numerical simulation to provide an interpretation of the CH4/CO2/N2 guest-molecule exchange process that occurred at Ignik Sikumi #1. Simulations were further informed by experimental observations. The goal of the scoping experiments was to understand kinetic exchange rates and develop parameters for use in Ignik Sikumi history-match simulations. The experimental procedure involves two main stages: 1) the formation of CH4 hydrate in a consolidated sand column at 750 psi and 2°C and 2) flow-through of a 77.5/22.5 N2/CO2 molar ratio gas mixture across the column. Experiments were run both above and below the hydrate stability zone in order to observe exchange behavior across varying conditions. The numerical simulator, STOMP-HYDT-KE, was then used to match experimental results, specifically fitting kinetic behavior. Once this behavior is understood, it can be applied to field-scale models based on Ignik Sikumi #1.

  10. Simple Method to Estimate Mean Heart Dose From Hodgkin Lymphoma Radiation Therapy According to Simulation X-Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nimwegen, Frederika A. van; Cutter, David J.

    Purpose: To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Methods and Materials: Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. Results: According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Conclusion: Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a meaningful measure of mean heart dose for use in studies of late cardiac complications.

  11. Simple method to estimate mean heart dose from Hodgkin lymphoma radiation therapy according to simulation X-rays.

    PubMed

    van Nimwegen, Frederika A; Cutter, David J; Schaapveld, Michael; Rutten, Annemarieke; Kooijman, Karen; Krol, Augustinus D G; Janus, Cécile P M; Darby, Sarah C; van Leeuwen, Flora E; Aleman, Berthe M P

    2015-05-01

    To describe a new method to estimate the mean heart dose for Hodgkin lymphoma patients treated several decades ago, using delineation of the heart on radiation therapy simulation X-rays. Mean heart dose is an important predictor for late cardiovascular complications after Hodgkin lymphoma (HL) treatment. For patients treated before the era of computed tomography (CT)-based radiotherapy planning, retrospective estimation of radiation dose to the heart can be labor intensive. Patients for whom cardiac radiation doses had previously been estimated by reconstruction of individual treatments on representative CT data sets were selected at random from a case-control study of 5-year Hodgkin lymphoma survivors (n=289). For 42 patients, cardiac contours were outlined on each patient's simulation X-ray by 4 different raters, and the mean heart dose was estimated as the percentage of the cardiac contour within the radiation field multiplied by the prescribed mediastinal dose and divided by a correction factor obtained by comparison with individual CT-based dosimetry. According to the simulation X-ray method, the medians of the mean heart doses obtained from the cardiac contours outlined by the 4 raters were 30 Gy, 30 Gy, 31 Gy, and 31 Gy, respectively, following prescribed mediastinal doses of 25-42 Gy. The absolute-agreement intraclass correlation coefficient was 0.93 (95% confidence interval 0.85-0.97), indicating excellent agreement. Mean heart dose was 30.4 Gy with the simulation X-ray method, versus 30.2 Gy with the representative CT-based dosimetry, and the between-method absolute-agreement intraclass correlation coefficient was 0.87 (95% confidence interval 0.80-0.95), indicating good agreement between the two methods. Estimating mean heart dose from radiation therapy simulation X-rays is reproducible and fast, takes individual anatomy into account, and yields results comparable to the labor-intensive representative CT-based method. This simpler method may produce a meaningful measure of mean heart dose for use in studies of late cardiac complications. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although for both techniques roughly the same structures are found, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation for the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.

  13. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach for time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible error in counting caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period, so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.
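
    The averaging trick is easy to verify numerically: counting oscillator pulses in a gate quantizes the interval to one period, but averaging the counts over copies of the pulse train delayed by successive fractions of a period recovers a finer estimate. A toy model of the scheme; the period, interval, and number of sub-delays are invented.

```python
import numpy as np

T_OSC = 0.1          # oscillator period (µs), assumed
interval = 12.34     # true interval to be measured (µs), assumed

# Pulses occur at d, d + T, d + 2T, ... for each sub-delay d; count how
# many fall inside the gate [0, interval] for 10 successive sub-delays.
delays = np.linspace(0.0, T_OSC, 10, endpoint=False)
counts = np.floor((interval - delays) / T_OSC) + 1

single = counts[0] * T_OSC           # one counter reading: +/- one period
averaged = counts.mean() * T_OSC     # average over the delayed totals
print(f"single count: {single:.2f} µs, averaged: {averaged:.3f} µs "
      f"(true value {interval} µs)")
```

    The averaged estimate's resolution improves roughly with the number of sub-delays, here to about a tenth of the oscillator period.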

  14. Numerical study of bandwidth effect on stimulated Raman backscattering in nonlinear regime

    NASA Astrophysics Data System (ADS)

    Zhou, H. Y.; Xiao, C. Z.; Zou, D. B.; Li, X. Z.; Yin, Y.; Shao, F. Q.; Zhuo, H. B.

    2018-06-01

    Nonlinear behaviors of stimulated Raman scattering driven by finite-bandwidth pumps are studied by one-dimensional particle-in-cell simulations. The broad spectral feature of the plasma waves and backscattered light reveals the different coupling and growth mechanisms, which lead to the suppression effect before the deep nonlinear stage. It causes nonperiodic plasma wave packets and reduces packet and etching velocities. Based on the negative frequency shift and the electron energy distribution, the long-time evolution of the instability can be divided into two stages by the relaxation time. This is a critical time after which the alleviating effects of nonlinear frequency shift and hot electrons are replaced by enhancement. Thus, the broadband pump suppresses the instability at early times. However, the instability is aggravated in the deep nonlinear stage, where the saturation level is lifted by the coupling of the incident pump with each frequency-shifted plasma wave. Our simulation results show that these nonlinear effects hold over a bandwidth range from 2.25% to 3.0%, and the physics are similar within a nearby parameter space.

  15. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.
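
    The first case study's pipeline, propagating uncertain inputs through the exposure chain and reading off an uncertainty interval on annual illnesses, reduces to a Monte Carlo loop. A minimal sketch; every distribution and constant below is invented for illustration, not taken from the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                    # Monte Carlo draws

# Uncertain inputs (illustrative distributions only): contamination of
# milk (organisms/litre), yearly consumption (litres), dose-response r.
conc = rng.lognormal(mean=-4.0, sigma=1.5, size=n)
litres = rng.gamma(shape=2.0, scale=10.0, size=n)
r = rng.uniform(1e-4, 1e-2, size=n)

dose = conc * litres
p_ill = 1.0 - np.exp(-r * dose)                # exponential dose-response
cases = p_ill * 500_000                        # assumed children at risk

lo, med, hi = np.percentile(cases, [2.5, 50, 97.5])
print(f"illnesses/year: median {med:.1f}, 95% interval ({lo:.1f}, {hi:.1f})")
```

    Policy scenarios are then just reruns with the relevant input distribution changed, and the Bayesian emulator in the second case study exists precisely to make such reruns cheap when the underlying code is slow.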

  16. Development of a paediatric population-based model of the pharmacokinetics of rivaroxaban.

    PubMed

    Willmann, Stefan; Becker, Corina; Burghaus, Rolf; Coboeken, Katrin; Edginton, Andrea; Lippert, Jörg; Siegmund, Hans-Ulrich; Thelen, Kirstin; Mück, Wolfgang

    2014-01-01

    Venous thromboembolism has been increasingly recognised as a clinical problem in the paediatric population. Guideline recommendations for antithrombotic therapy in paediatric patients are based mainly on extrapolation from adult clinical trial data, owing to the limited number of clinical trials in paediatric populations. The oral, direct Factor Xa inhibitor rivaroxaban has been approved in adult patients for several thromboembolic disorders, and its well-defined pharmacokinetic and pharmacodynamic characteristics and efficacy and safety profiles in adults warrant further investigation of this agent in the paediatric population. The objective of this study was to develop and qualify a physiologically based pharmacokinetic (PBPK) model for rivaroxaban doses of 10 and 20 mg in adults and to scale this model to the paediatric population (0-18 years) to inform the dosing regimen for a clinical study of rivaroxaban in paediatric patients. Experimental data sets from phase I studies supported the development and qualification of an adult PBPK model. This adult PBPK model was then scaled to the paediatric population by including anthropometric and physiological information, age-dependent clearance and age-dependent protein binding. The pharmacokinetic properties of rivaroxaban in virtual populations of children were simulated for two body weight-related dosing regimens equivalent to 10 and 20 mg once daily in adults. The quality of the model was judged by means of a visual predictive check. Subsequently, paediatric simulations of the area under the plasma concentration-time curve (AUC), maximum (peak) plasma drug concentration (Cmax) and concentration in plasma after 24 h (C24h) were compared with the adult reference simulations. Simulations for AUC, Cmax and C24h throughout the investigated age range largely overlapped with values obtained for the corresponding dose in the adult reference simulation for both body weight-related dosing regimens. However, pharmacokinetic values in infants and preschool children (body weight <40 kg) were lower than the 90% confidence interval threshold of the adult reference model and, therefore, indicated that doses in these groups may need to be increased to achieve the same plasma levels as in adults. For children with body weight between 40 and 70 kg, simulated plasma pharmacokinetic parameters (Cmax, C24h and AUC) overlapped with the values obtained in the corresponding adult reference simulation, indicating that body weight-related exposure was similar between these children and adults. In adolescents of >70 kg body weight, the simulated 90% prediction interval values of AUC and C24h were much higher than the 90% confidence interval of the adult reference population, owing to the weight-based simulation approach, but for these patients rivaroxaban would be administered at adult fixed doses of 10 and 20 mg. The paediatric PBPK model developed here allowed an exploratory analysis of the pharmacokinetics of rivaroxaban in children to inform the dosing regimen for a clinical study in paediatric patients.

  17. Unsteady non-Newtonian hydrodynamics in granular gases.

    PubMed

    Astillero, Antonio; Santos, Andrés

    2012-02-01

    The temporal evolution of a dilute granular gas, both in a compressible flow (uniform longitudinal flow) and in an incompressible flow (uniform shear flow), is investigated by means of the direct simulation Monte Carlo method to solve the Boltzmann equation. Emphasis is laid on the identification of a first "kinetic" stage (where the physical properties are strongly dependent on the initial state) subsequently followed by an unsteady "hydrodynamic" stage (where the momentum fluxes are well-defined non-Newtonian functions of the rate of strain). The simulation data are seen to support this two-stage scenario. Furthermore, the rheological functions obtained from simulation are well described by an approximate analytical solution of a model kinetic equation. © 2012 American Physical Society

  18. SIMREL: Software for Coefficient Alpha and Its Confidence Intervals with Monte Carlo Studies

    ERIC Educational Resources Information Center

    Yurdugul, Halil

    2009-01-01

    This article describes SIMREL, a software program designed for simulating alpha coefficients and estimating their confidence intervals. SIMREL runs in two modes. In the first, when SIMREL is run on a single data file, it performs descriptive statistics, principal components analysis, and variance analysis of the item scores…
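
    For readers who want to reproduce the basic quantities such a program works with, the sketch below computes coefficient alpha and a Feldt-type F-based confidence interval on simulated parallel items; the data generation and the choice of the Feldt formulation are illustrative assumptions:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_persons, k_items) score matrix."""
    n, k = scores.shape
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def feldt_ci(alpha, n, k, level=0.95):
    """Feldt-style CI, assuming (1 - alpha_hat)/(1 - alpha) ~ F(n-1, (n-1)(k-1))."""
    g = 1 - level
    df1, df2 = n - 1, (n - 1) * (k - 1)
    f_lo = stats.f.ppf(g / 2, df1, df2)
    f_hi = stats.f.ppf(1 - g / 2, df1, df2)
    return 1 - (1 - alpha) / f_lo, 1 - (1 - alpha) / f_hi

rng = np.random.default_rng(1)
true_score = rng.normal(size=(200, 1))
items = true_score + rng.normal(scale=1.0, size=(200, 8))  # 8 parallel items
a = cronbach_alpha(items)
print(f"alpha = {a:.3f}, 95% CI = {feldt_ci(a, 200, 8)}")
```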

  19. Laparoscopic staging for apparent stage I epithelial ovarian cancer.

    PubMed

    Melamed, Alexander; Keating, Nancy L; Clemmer, Joel T; Bregar, Amy J; Wright, Jason D; Boruta, David M; Schorge, John O; Del Carmen, Marcela G; Rauh-Hain, J Alejandro

    2017-01-01

    Whereas advances in minimally invasive surgery have made laparoscopic staging technically feasible in stage I epithelial ovarian cancer, the practice remains controversial because of an absence of randomized trials and lack of high-quality observational studies demonstrating equivalent outcomes. This study seeks to evaluate the association of laparoscopic staging with survival among women with clinical stage I epithelial ovarian cancer. We used the National Cancer Data Base to identify all women who underwent surgical staging for clinical stage I epithelial ovarian cancer diagnosed from 2010 through 2012. The exposure of interest was planned surgical approach (laparoscopy vs laparotomy), and the primary outcome was overall survival. The primary analysis was based on an intention to treat: all women whose procedures were initiated laparoscopically were categorized as having had a planned laparoscopic procedure, regardless of subsequent conversion to laparotomy. We used propensity methods to match patients who underwent planned laparoscopic staging with similar patients who underwent planned laparotomy based on observed characteristics. We compared survival among the matched cohorts using the Kaplan-Meier method and Cox regression. We compared the extent of lymphadenectomy using the Wilcoxon rank-sum test. Among 4798 eligible patients, 1112 (23.2%) underwent procedures that were initiated laparoscopically, of which 190 (17%) were converted to laparotomy. Women who underwent planned laparoscopy were more frequently white, privately insured, from wealthier ZIP codes, received care in community cancer centers, and had smaller tumors that were more frequently of serous and less often of mucinous histology than those who underwent staging via planned laparotomy. After propensity score matching, time to death did not differ between patients undergoing planned laparoscopic vs open staging (hazard ratio, 0.77, 95% confidence interval, 0.54-1.09; P = .13). Planned laparoscopic staging was associated with a slightly higher median lymph node count (14 vs 12, P = .005). Planned laparoscopic staging was not associated with time to death after adjustment for receipt of adjuvant chemotherapy, histological type and grade, and pathological stage (hazard ratio, 0.82, 95% confidence interval, 0.57-1.16). Surgical staging via planned laparoscopy vs laparotomy was not associated with worse survival in women with apparent stage I epithelial ovarian cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
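
    The analysis pipeline described here (a propensity model, 1:1 matching, then Cox regression on the matched cohort) can be sketched as follows on simulated data; the covariates, effect sizes, and the use of scikit-learn and lifelines are illustrative assumptions, not the study's actual National Cancer Data Base analysis:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
# Hypothetical cohort: age and tumour size drive both treatment choice and survival.
age = rng.normal(60, 10, n)
size = rng.gamma(2, 2, n)
p_treat = 1 / (1 + np.exp(-(-0.05 * (age - 60) - 0.2 * (size - 4))))
laparoscopy = rng.binomial(1, p_treat)
hazard = 0.02 * np.exp(0.03 * (age - 60) + 0.10 * (size - 4))  # no true treatment effect
time = rng.exponential(1 / hazard)
event = time < 5.0
time = np.minimum(time, 5.0)  # administrative censoring at 5 years

df = pd.DataFrame(dict(age=age, size=size, tx=laparoscopy, T=time, E=event.astype(int)))

# 1:1 nearest-neighbour matching on the estimated propensity score.
df["ps"] = LogisticRegression().fit(df[["age", "size"]], df.tx).predict_proba(
    df[["age", "size"]])[:, 1]
treated = df[df.tx == 1]
control = df[df.tx == 0].copy()
matches = []
for _, row in treated.iterrows():
    if control.empty:
        break
    j = (control.ps - row.ps).abs().idxmin()
    matches.append(j)
    control = control.drop(j)  # match without replacement

matched = pd.concat([treated, df.loc[matches]])
cph = CoxPHFitter().fit(matched[["T", "E", "tx"]], duration_col="T", event_col="E")
cph.print_summary()  # hazard ratio for tx should be near 1 here
```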

  20. Associations Between Maternal Pregravid Obesity and Gestational Diabetes and the Timing of Pubarche in Daughters

    PubMed Central

    Kubo, Ai; Ferrara, Assiamira; Laurent, Cecile A.; Windham, Gayle C.; Greenspan, Louise C.; Deardorff, Julianna; Hiatt, Robert A.; Quesenberry, Charles P.; Kushi, Lawrence H.

    2016-01-01

    We investigated whether in utero exposure to maternal pregravid obesity and/or gestational diabetes mellitus (GDM) was associated with early puberty in girls. We used data from a longitudinal study of 421 mother-daughter pairs enrolled in an integrated health services organization, Kaiser Permanente Northern California (2005–2012). Girls aged 6–8 years were followed annually through ages 12–14 years. Onset of puberty was assessed using study clinic-based Tanner staging. We examined associations of self-reported pregravid obesity and maternal GDM with timing of the daughter's transition to pubertal maturation stage 2 or above for development of breasts and pubic hair, using accelerated failure time regression models with interval censoring to estimate time ratios and hazard ratios and corresponding 95% confidence intervals. Maternal obesity (pregravid body mass index (BMI; weight (kg)/height (m)²) ≥30) was associated with a daughter's earlier transition to breast and pubic hair stage 2+ in comparison with girls whose mothers had pregravid BMI <25. These associations were attenuated and not statistically significant after adjustment for covariates. Girls whose mothers had both pregravid BMI ≥25 and GDM were at higher risk of an earlier transition to pubic hair stage 2+ than those whose mothers had neither condition (adjusted time ratio = 0.89, 95% confidence interval: 0.83, 0.96; hazard ratio = 2.97, 95% confidence interval: 1.52, 5.83). These findings suggest that exposure to maternal obesity and hyperglycemia places girls at higher risk of earlier pubarche. PMID:27268032
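
    A minimal sketch of an interval-censored accelerated failure time fit, in the spirit of the models used here, follows; the Weibull form, annual-visit schedule and effect size are illustrative assumptions, fitted by direct maximum likelihood with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 400
x = rng.binomial(1, 0.3, n)                            # exposure indicator (hypothetical)
t_true = rng.weibull(8, n) * 11.0 * np.exp(-0.08 * x)  # true onset age, AFT effect

# Annual visits from age 6 to 14: onset is only known to lie within an interval.
visits = np.arange(6, 15)
left = np.full(n, 6.0)
right = np.full(n, np.inf)
for i, t in enumerate(t_true):
    later = visits[visits >= t]
    if later.size:
        right[i] = later[0]
        earlier = visits[visits < t]
        left[i] = earlier[-1] if earlier.size else 0.0

def negloglik(theta):
    b0, b1, log_k = theta
    k = np.exp(log_k)
    lam = np.exp(b0 + b1 * x)  # AFT scale; the time ratio is exp(b1)
    S = lambda t: np.where(np.isinf(t), 0.0,
                           np.exp(-(np.clip(t, 1e-12, None) / lam) ** k))
    # Interval-censored likelihood: P(left < T <= right) = S(left) - S(right)
    return -np.sum(np.log(S(left) - S(right) + 1e-300))

res = minimize(negloglik, x0=[np.log(11), 0.0, np.log(5)], method="Nelder-Mead")
print(f"estimated time ratio exp(b1) = {np.exp(res.x[1]):.3f} (true 0.923)")
```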

  1. Research on Rigid Body Motion Tracing in Space based on NX MCD

    NASA Astrophysics Data System (ADS)

    Wang, Junjie; Dai, Chunxiang; Shi, Karen; Qin, Rongkang

    2018-03-01

    In MCD (Mechatronics Concept Designer), a module of the Siemens industrial design software UG (Unigraphics NX), users can define rigid bodies and kinematic joints to make objects move according to an existing plan in simulation. At this stage, users may wish to see, intuitively, the path traced by selected points on the moving object. In response to this requirement, this paper computes the pose from the transformation matrix available from the solver engine and then fits these sampling points with a B-spline curve. Meanwhile, combined with the actual constraints on the rigid bodies, the traditional equal-interval sampling strategy is optimized. The results show that this method satisfies the requirement and makes up for the deficiencies of the traditional sampling method. Users can still edit and model on the resulting 3D curve. The expected result has been achieved.
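
    The core fitting step, passing sampled trajectory points through a B-spline, can be sketched with SciPy as below; the helix standing in for solver-engine poses is an assumption for illustration:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Sampled positions of a point on a moving rigid body. Here a helix stands in
# for the poses that would come from the solver engine's transformation
# matrices at each sampling instant.
t = np.linspace(0, 4 * np.pi, 40)
pts = np.vstack([np.cos(t), np.sin(t), 0.1 * t])  # shape (3, n_samples)

# Fit a cubic B-spline through the samples and evaluate it densely.
tck, u = splprep(pts, s=0.0, k=3)  # s=0 interpolates the samples exactly
u_fine = np.linspace(0, 1, 400)
x, y, z = splev(u_fine, tck)
print("spline evaluated at", len(u_fine), "points; first point:", x[0], y[0], z[0])
```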

  2. Progressive fracture of polymer matrix composite structures: A new approach

    NASA Technical Reports Server (NTRS)

    Chamis, C. C.; Murthy, P. L. N.; Minnetyan, L.

    1992-01-01

    A new approach independent of stress intensity factors and fracture toughness parameters has been developed and is described for the computational simulation of progressive fracture of polymer matrix composite structures. The damage stages are quantified based on physics via composite mechanics, while the degradation of structural behavior is quantified via the finite element method. The approach accounts for all types of composite behavior, structures, load conditions, and fracture processes, starting from damage initiation, through unstable propagation, to global structural collapse. Results of structural fracture in composite beams, panels, plates, and shells are presented to demonstrate the effectiveness and versatility of this new approach. Parameters and guidelines are identified which can be used as criteria for structural fracture, inspection intervals, and retirement for cause. Generalization to structures made of monolithic metallic materials is outlined, and lessons learned in undertaking the development of new approaches, in general, are summarized.

  3. Quantifying Safety Margin Using the Risk-Informed Safety Margin Characterization (RISMC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grabaskas, David; Bucknor, Matthew; Brunett, Acacia

    2015-04-26

    The Risk-Informed Safety Margin Characterization (RISMC), developed by Idaho National Laboratory as part of the Light-Water Reactor Sustainability Project, utilizes a probabilistic safety margin comparison between a load and capacity distribution, rather than a deterministic comparison between two values, as is usually done in best-estimate plus uncertainty analyses. The goal is to determine the failure probability, or in other words, the probability of the system load equaling or exceeding the system capacity. While this method has been used in pilot studies, there has been little work conducted investigating the statistical significance of the resulting failure probability. In particular, it is difficult to determine how many simulations are necessary to properly characterize the failure probability. This work uses classical (frequentist) statistics and confidence intervals to examine the impact on statistical accuracy when the number of simulations is varied. Two methods are proposed to establish confidence intervals related to the failure probability established using a RISMC analysis. The confidence interval provides information about the statistical accuracy of the method utilized to explore the uncertainty space, and offers a quantitative method to gauge the increase in statistical accuracy due to performing additional simulations.
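
    One standard way to attach a confidence interval to a Monte Carlo failure-probability estimate, in the spirit of this work, is the exact Clopper-Pearson interval sketched below; the true failure probability and run counts are illustrative assumptions, and the paper's two proposed methods may differ:

```python
import numpy as np
from scipy import stats

def clopper_pearson(k, n, level=0.95):
    """Exact (Clopper-Pearson) CI for a failure probability, given k failures in n runs."""
    a = 1 - level
    lo = 0.0 if k == 0 else stats.beta.ppf(a / 2, k, n - k + 1)
    hi = 1.0 if k == n else stats.beta.ppf(1 - a / 2, k + 1, n - k)
    return lo, hi

p_true = 0.02  # hypothetical probability that load >= capacity
rng = np.random.default_rng(7)
for n in (100, 1000, 10000):
    k = rng.binomial(n, p_true)  # simulated count of failed runs
    lo, hi = clopper_pearson(k, n)
    print(f"n={n:>6}: p_hat={k/n:.4f}, 95% CI=({lo:.4f}, {hi:.4f}), width={hi-lo:.4f}")
```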

  4. Gonad morphology, oocyte development and spawning cycle of the calanoid copepod Acartia clausi

    NASA Astrophysics Data System (ADS)

    Eisfeld, Sonja M.; Niehoff, Barbara

    2007-09-01

    Information on gonad morphology and its relation to basic reproductive parameters such as clutch size and spawning frequency is lacking for Acartia clausi, a dominant calanoid copepod of the North Sea. To fill this gap, females of this species were sampled at Helgoland Roads from mid March to late May 2001. Gonad structure and oogenesis were studied using a combination of histology and whole-body analysis. In addition, clutch size and spawning frequency were determined in incubation experiments, during which individual females were monitored at short intervals for 8 and 12 h, respectively. The histological analysis revealed that the ovary of A. clausi is W-shaped, with two distinct tips pointing posteriorly, and differs slightly from that of other Acartia species and of other copepod taxa. From the ovary, two anterior diverticula extend into the head region, and two posterior diverticula extend to the genital opening in the abdomen. Developing oocytes change in shape and size, and in the appearance of the nucleus and the ooplasm. Based on these morphological characteristics, different oocyte development stages (OS) were identified. Mitotically dividing oogonia and young oocytes (OS 0) were restricted to the ovary, whereas vitellogenic oocytes (OS 1–4) were present in the diverticula. The development stage of the oocytes increased with distance from the ovary in both anterior and posterior diverticula. The most advanced oocytes were situated ventrally, and their number varied between 1 and 18, with a median of 4. All oocyte development stages co-occurred, indicating that oogenesis in A. clausi is a continuous process. These morphological features reflect the reproductive traits of the species: in accordance with the low numbers of mature oocytes in the gonads, females usually produced small clutches of one to five eggs. Clutches were released throughout the entire observation period at intervals of 90 min (median), resulting in mean egg production rates of 18–28 eggs female⁻¹ day⁻¹.

  5. Cannibalism in discrete-time predator-prey systems.

    PubMed

    Chow, Yunshyong; Jang, Sophia R-J

    2012-01-01

    In this study, we propose and investigate a two-stage population model with cannibalism. It is shown that cannibalism can destabilize the interior steady state and lower its magnitude. However, it is proved that cannibalism has no effect on the persistence of the population. Based on this model, we study two systems of predator-prey interactions in which the prey population is cannibalistic. A sufficient condition for coexistence of the two populations, based on the nontrivial boundary steady state, is derived. Numerical simulations show that the introduction of the predator population may either stabilize or destabilize the prey dynamics, depending on the cannibalism coefficients and other vital parameters.

  6. Children With Medical Complexity: A Web-Based Multimedia Curriculum Assessing Pediatric Residents Across North America.

    PubMed

    Shah, Neha H; Bhansali, Priti; Barber, Aisha; Toner, Keri; Kahn, Michael; MacLean, Meaghan; Kadden, Micah; Sestokas, Jeffrey; Agrawal, Dewesh

    No standardized curricula exist for training residents in the special needs of children with medical complexity. We assessed resident satisfaction, knowledge, and behavior after implementing a novel online curriculum composed of multimedia modules on care of children with medical complexity utilizing virtual simulation. We conducted a randomized controlled trial of residents across North America. A Web-based curriculum of 6 self-paced, interactive, multimedia modules was developed. Readings for each topic served as the control curriculum. Residents were randomized to 1 of 2 groups, each completing 3 modules and 3 sets of readings that were mutually exclusive. Outcomes included resident scores on satisfaction, knowledge-based assessments, and virtual simulation activities. Four hundred forty-two residents from 56 training programs enrolled in the curriculum, 229 of whom completed it and were included in the analysis. Subjects were more likely to report comfort with all topics if they reviewed modules compared to readings (P ≤ .01 for all 6 topics). Posttest knowledge scores were significantly higher than pretest scores overall (mean increase in score 17.7%; 95% confidence interval 16.0, 19.4), and the mean pre-post score increase for modules was significantly higher than readings (20.9% vs 15.4%, P < .001). Mean scores on the verbal handoff virtual simulation increased by 1.1 points (95% confidence interval 0.2, 2.0, P = .02). There were no significant differences found in pre-post performance for the device-related emergency virtual simulation. There was high satisfaction, significant knowledge acquisition, and specific behavior change after participating in this innovative online curriculum. This is the first multisite, randomized trial assessing satisfaction, knowledge impact, and behavior change in a virtually simulated environment with pediatric trainees. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.

  7. Modeling Relationships Between Flight Crew Demographics and Perceptions of Interval Management

    NASA Technical Reports Server (NTRS)

    Remy, Benjamin; Wilson, Sara R.

    2016-01-01

    The Interval Management Alternative Clearances (IMAC) human-in-the-loop simulation experiment was conducted to assess interval management system performance and participants' acceptability and workload while performing three interval management clearance types. Twenty-four subject pilots and eight subject controllers flew ten high-density arrival scenarios into Denver International Airport during two weeks of data collection. This analysis examined the possible relationships between subject pilot demographics and reported perceptions of interval management in IMAC. Multiple linear regression models were created with a new software tool to predict subject pilot questionnaire item responses from demographic information. General patterns noted across the models may indicate that flight crew demographics influence perceptions of interval management.

  8. Estimating stage-specific daily survival probabilities of nests when nest age is unknown

    USGS Publications Warehouse

    Stanley, T.R.

    2004-01-01

    Estimation of daily survival probabilities of nests is common in studies of avian populations. Since the introduction of Mayfield's (1961, 1975) estimator, numerous models have been developed to relax Mayfield's assumptions and account for biologically important sources of variation. Stanley (2000) presented a model for estimating stage-specific (e.g., incubation stage, nestling stage) daily survival probabilities of nests that conditions on “nest type” and requires that nests be aged when they are found. Because aging nests typically requires handling the eggs, there may be situations where nests cannot or should not be aged, and the Stanley (2000) model will be inapplicable. Here, I present a model for estimating stage-specific daily survival probabilities that conditions on nest stage for active nests, thereby obviating the need to age nests when they are found. Specifically, I derive the maximum likelihood function for the model, evaluate the model's performance using Monte Carlo simulations, and provide software for estimating parameters (along with an example). For sample sizes as low as 50 nests, bias was small and confidence interval coverage was close to the nominal rate, especially when a reduced-parameter model was used for estimation.
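
    The flavor of such likelihood-based survival estimation can be sketched in a single-stage simplification: nests are checked at fixed intervals, an interval of d days is survived with probability s^d, and s is estimated by maximum likelihood. This illustrates the general approach, not Stanley's stage-specific model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(11)
s_true = 0.96                 # true daily survival probability
n_nests, visit_gap = 50, 3    # 50 nests, checked every 3 days

intervals, outcomes = [], []  # exposure days and survived(1)/failed(0) per interval
for _ in range(n_nests):
    for _ in range(10):       # up to 10 revisits per nest
        if rng.random() < s_true ** visit_gap:
            intervals.append(visit_gap)
            outcomes.append(1)
        else:
            intervals.append(visit_gap)
            outcomes.append(0)
            break             # nest failed; stop observing it

d = np.array(intervals)
y = np.array(outcomes)

def negloglik(s):
    p = s ** d                # P(nest survives an interval of d days)
    return -np.sum(np.where(y == 1, np.log(p), np.log(1 - p)))

res = minimize_scalar(negloglik, bounds=(0.5, 0.999999), method="bounded")
print(f"daily survival MLE: {res.x:.4f} (true {s_true})")
```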

  9. Visual simulation of fatigue crack growth

    NASA Astrophysics Data System (ADS)

    Wang, Shuanzhu; Margolin, Harold; Lin, Fengbao

    1998-07-01

    An attempt has been made to visually simulate fatigue crack propagation from a precrack. An integrated program was developed for this purpose. The crack-tip shape was determined at four load positions in the first load cycle. The final shape was a blunt front with an “ear” profile at the precrack tip. A more general model, schematically illustrating the mechanism of fatigue crack growth and striation formation in a ductile material, was proposed based on this simulation. According to the present model, fatigue crack growth is an intermittent process; cyclic plastic shear strain is the driving force for both stage I and stage II crack growth. No fracture mode transition occurs between the two stages in the present study. The crack growth direction alternates, moving up and down successively, producing fatigue striations. A brief examination has been made of the crack growth path in a ductile two-phase material.

  10. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    PubMed

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For two-step transesterification biodiesel production from sunflower oil, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors are determined quantitatively, based on the kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product. In consideration of the transesterification intermediate product, both the traditional distillation separation process and an improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process has a distinct advantage in energy duty and equipment requirements, owing to the replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.

  11. Using Interval-Based Systems to Measure Behavior in Early Childhood Special Education and Early Intervention

    ERIC Educational Resources Information Center

    Lane, Justin D.; Ledford, Jennifer R.

    2014-01-01

    The purpose of this article is to summarize the current literature on the accuracy and reliability of interval systems using data from previously published experimental studies that used either human observations of behavior or computer simulations. Although multiple comparison studies provided mathematical adjustments or modifications to interval…

  12. Femtosecond timing-jitter between photo-cathode laser and ultra-short electron bunches by means of hybrid compression

    NASA Astrophysics Data System (ADS)

    Pompili, R.; Anania, M. P.; Bellaveglia, M.; Biagioni, A.; Castorina, G.; Chiadroni, E.; Cianchi, A.; Croia, M.; Di Giovenale, D.; Ferrario, M.; Filippi, F.; Gallo, A.; Gatti, G.; Giorgianni, F.; Giribono, A.; Li, W.; Lupi, S.; Mostacci, A.; Petrarca, M.; Piersanti, L.; Di Pirro, G.; Romeo, S.; Scifo, J.; Shpakov, V.; Vaccarezza, C.; Villa, F.

    2016-08-01

    The generation of ultra-short electron bunches with ultra-low timing-jitter relative to the photo-cathode (PC) laser has been experimentally proved for the first time at the SPARC_LAB test-facility (INFN-LNF, Frascati) exploiting a two-stage hybrid compression scheme. The first stage employs RF-based compression (velocity-bunching), which shortens the bunch and imprints an energy chirp on it. The second stage is performed in a non-isochronous dogleg line, where the compression is completed resulting in a final bunch duration below 90 fs (rms). At the same time, the beam arrival timing-jitter with respect to the PC laser has been measured to be lower than 20 fs (rms). The reported results have been validated with numerical simulations.

  13. Radiologic findings of screen-detected cancers in an organized population-based screening mammography program in Turkey

    PubMed Central

    Kayhan, Arda; Arıbal, Erkin; Şahin, Cennet; Taşçı, Ömür Can; Gürdal, Sibel Özkan; Öztürk, Enis; Hatipoğlu, Hayat Halide; Özaydın, Nilüfer; Cabioğlu, Neslihan; Özçınar, Beyza; Özmen, Vahit

    2016-01-01

    PURPOSE Bahçeşehir Breast Cancer Screening Program is a population-based organized screening program in Turkey, where asymptomatic women aged 40–69 years are screened biannually. In this prospective study, we aimed to determine the mammographic findings of screen-detected cancers and discuss the efficacy of breast cancer screening in a developing country. METHODS A total of 6912 women were screened in three rounds. The radiologic findings were grouped as mass, focal asymmetry, calcification, and architectural distortion. Masses were classified according to shape, border, and density. Calcifications were grouped according to morphology and distribution. Cancers were grouped according to the clinical stage. RESULTS Seventy cancers were detected with an incidence of 4.8/1000. Two cancers were detected in other centers and three were not visualized mammographically. Mammographic presentations of the remaining 65 cancers were mass (47.7%, n=31), calcification (30.8%, n=20), focal asymmetry (16.9%, n=11), architectural distortion (3.1%, n=2), and skin thickening (1.5%, n=1). The numbers of stage 0, 1, 2, 3, and 4 cancers were 13 (20.0%), 34 (52.3%), 14 (21.5%), 3 (4.6%), and 1 (1.5%), respectively. The numbers of interval and missed cancers were 5 (7.4%) and 7 (10.3%), respectively. CONCLUSION A high incidence of early breast cancer has been detected. The incidence of missed and interval cancers did not show major differences from western screening trials. We believe that this study will pioneer implementation of efficient population-based mammographic screenings in developing countries. PMID:27705880

  14. Stochastic summation of empirical Green's functions

    USGS Publications Warehouse

    Wennerberg, Leif

    1990-01-01

    Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω−2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω−2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than ML ∼ 4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω−2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω−2 spectral model applies. Interestingly, a single-stage simulation with too few ω−2 records to get a good fit to an ω−2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω−2 model, but that includes a region in its spectrum, between the corner frequencies of the larger and smaller events, reasonably approximated by a power-law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω−2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
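
    A toy sketch of the single-stage strategy follows: the zero-phase scaling distribution is obtained as the inverse FFT of the ratio of two Brune ω−2 spectra, and delayed copies of a small-event record are summed. The spectra, the synthetic record, and the clipping of small negative lobes are illustrative simplifications of the paper's procedure:

```python
import numpy as np

def brune(f, m0, fc):
    """Brune omega-squared amplitude spectrum (moment m0, corner frequency fc)."""
    return m0 / (1.0 + (f / fc) ** 2)

dt, n = 0.01, 4096
f = np.fft.rfftfreq(n, dt)
# Hypothetical small and large events (arbitrary units); fc scales as M0^(-1/3).
m0_s, fc_s = 1.0, 4.0
m0_l, fc_l = 100.0, 4.0 / 100 ** (1 / 3)

# Zero-phase scaling distribution: amplitude = spectral ratio, phase = 0.
ratio = brune(f, m0_l, fc_l) / brune(f, m0_s, fc_s)
pdf = np.fft.fftshift(np.fft.irfft(ratio, n))  # centre the zero-phase pulse
pdf = np.clip(pdf, 0, None)                    # pragmatic: drop tiny negative lobes
pdf /= pdf.sum()

# Single-stage simulation: sum M0_large/M0_small delayed copies of the record.
rng = np.random.default_rng(5)
small = rng.standard_normal(n) * np.exp(-np.arange(n) * dt / 2.0)  # toy record
t_axis = (np.arange(n) - n // 2) * dt
delays = rng.choice(t_axis, size=int(round(m0_l / m0_s)), p=pdf)
big = np.zeros(n)
for tau in delays:
    big += np.roll(small, int(round(tau / dt)))  # np.roll wraps: toy simplification
print("simulated large-event record, peak amplitude:", np.abs(big).max())
```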

  15. Drug carrier in cancer therapy: A simulation study based on magnetic carrier substances

    NASA Astrophysics Data System (ADS)

    Adam, Tijjani; Dhahi, Th S.; Mohammed, Mohammed; Hashim, U.; Noriman, N. Z.; Dahham, Omar S.

    2017-09-01

    The magnetic carrier principle uses a magnetic medium to deliver a drug to a specific site in order to kill tumor cells. Cancer is generally described in seven stages, and in most patients it is only detected around stage four, when the tumor is difficult to destroy or cure. At earlier stages, there is a higher probability of destroying the tumor cells completely by sending the drug directly to the nerve via a magnetic carrier. Another way to destroy a tumor completely is by using deoxyribonucleic acid (DNA). This project is a simulation study of magnetic carrier substances, carried out with the COMSOL Multiphysics software. The simulation model represents a permanent magnet, a blood vessel, surrounding tissues and air in 2D. The results show that the magnetic flux is initially high, decreases with distance from 0 m to a minimum at about 0.009 m, and then increases again to a maximum at 0.018 m.

  16. Chemical Kinetics of the TPS and Base Bleeding During Flight Test

    NASA Technical Reports Server (NTRS)

    Osipov, Viatcheslav; Ponizhovskaya, Ekaterina; Hafiychuck, Halyna; Luchinsky, Dmitry; Smelyanskiy, Vadim; Dagostino, Mark; Canabal, Francisco; Mobley, Brandon L.

    2012-01-01

    The present research deals with thermal degradation of polyurethane foam (PUF) during a flight test. A model of thermal decomposition was developed that accounts for polyurethane kinetics parameters extracted from thermogravimetric analyses and for radial heat losses to the surrounding environment. The model predicts the mass loss of the foam and the temperature and kinetics of release of the exhaust gases and char as functions of heat and radiation loads. When PUF is heated, urethane bonds break into polyol and isocyanate. In the first stage, the isocyanate pyrolyses and oxidizes; as a result, thermo-char and oil droplets (yellow smoke) are released. In the second decomposition stage, pyrolysis and oxidation of the liquid polyol occur. Next, the kinetics of chemical compound release and the information about the reactions occurring in the base area are coupled to CFD simulations of the base flow in a single first-stage-motor, vertically stacked vehicle configuration. The CFD simulations are performed to estimate the contribution of the hot out-gassing, chemical reactions, and char oxidation to the temperature rise of the base flow. The results of the simulations are compared with the flight test data.
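
    The mass-loss side of such a model can be sketched as two sequential first-order Arrhenius reactions under linear heating; all rate parameters and the stage-1 polyol yield below are illustrative assumptions, not the calibrated PUF kinetics:

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314  # J/(mol K)

# Hypothetical two-stage first-order Arrhenius kinetics:
# stage 1: foam -> polyol + volatiles; stage 2: polyol -> gases + char.
A1, E1 = 1e12, 160e3  # pre-exponential (1/s) and activation energy (J/mol), assumed
A2, E2 = 1e10, 180e3

def rates(t, m, heating_rate=10 / 60.0, T0=300.0):
    T = T0 + heating_rate * t  # linear heating, K
    m1, m2 = m                 # remaining foam and intermediate polyol
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return [-k1 * m1, 0.6 * k1 * m1 - k2 * m2]  # assume 60% of stage-1 loss -> polyol

# Stiff problem once the rate constants grow, hence an implicit solver.
sol = solve_ivp(rates, (0, 6000), [1.0, 0.0], method="Radau", max_step=5.0)
residual = sol.y[0] + sol.y[1]
print(f"residual condensed mass fraction after heating: {residual[-1]:.3f}")
```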

  17. On cell resistance and immune response time lag in a model for the HIV infection

    NASA Astrophysics Data System (ADS)

    Solovey, Guillermo; Peruani, Fernando; Ponce Dawson, Silvina; Maria Zorzenon dos Santos, Rita

    2004-11-01

    Recently, a cellular automata model was introduced (Phys. Rev. Lett. 87 (2001) 168102) to describe the spread of HIV infection among target cells in lymphoid tissues. The model qualitatively reproduces the entire course of the infection, displaying, in particular, the two time scales that characterize its dynamics. In this work, we investigate the robustness of the model against changes in three of its parameters. Two of them are related to the resistance of the cells to infection; the other describes the time interval necessary to mount specific immune responses. We observed that an increase in cell resistance, at any stage of the infection, leads to a reduction of the latency period, i.e., of the time interval between the primary infection and the onset of AIDS. However, during the early stages of the infection, when the increase in cell resistance is combined with an increase in the initial concentration of infected cells, the original behavior is recovered. We therefore find a long and a short latency regime (roughly eight years and one year long, respectively), depending on the value of the cell resistance. We also found that changes in the parameter describing the immune system time lag affect the time interval during which the primary infection occurs. Using different extended versions of the model, we also discuss how the two-time-scale dynamics is affected when we include inhomogeneities in the cell properties, such as the cell resistance or the time interval needed to mount specific immune responses.

  18. Effect of action potential duration on Tpeak-Tend interval, T-wave area and T-wave amplitude as indices of dispersion of repolarization: Theoretical and simulation study in the rabbit heart.

    PubMed

    Arteyeva, Natalia V; Azarov, Jan E

    The aim of the study was to differentiate the effects of dispersion of repolarization (DOR) and action potential duration (APD) on T-wave parameters considered as indices of DOR, namely the Tpeak-Tend interval, T-wave amplitude, and T-wave area. The T-wave was simulated over a wide physiological range of DOR and APD using a realistic rabbit model based on experimental data, and a simplified mathematical formulation of T-wave formation was derived. Both the simulations and the mathematical formulation showed that the Tpeak-Tend interval and T-wave area are linearly proportional to DOR irrespective of the APD range, while T-wave amplitude is non-linearly proportional to DOR and inversely proportional to the minimal repolarization time, i.e., the minimal APD value. The Tpeak-Tend interval and T-wave area are the most accurate DOR indices independent of APD. T-wave amplitude can be considered an index of DOR when the level of APD is taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Planning and design of a knowledge based system for green manufacturing management

    NASA Astrophysics Data System (ADS)

    Kamal Mohd Nawawi, Mohd; Mohd Zuki Nik Mohamed, Nik; Shariff Adli Aminuddin, Adam

    2013-12-01

    This paper presents a conceptual design approach to the development of a hybrid Knowledge Based (KB) system for Green Manufacturing Management (GMM) at the planning and design stages. The research concentrates on GMM using a hybrid KB system, a blend of a KB system and Gauging Absences of Pre-requisites (GAP) analysis. The hybrid KB/GAP system identifies all potential elements of green manufacturing management issues throughout the development of the system. The KB system used in the planning and design stages analyses the gap between the existing and the benchmark organizations, through the GAP analysis technique, for an effective implementation. The proposed KBGMM model at the design stage explores two components, namely the Competitive Priority and Lean Environment modules. Through the simulated results, the KBGMM system has identified, for each module and sub-module, the problem categories in a prioritized manner, and has finalized all the Bad Points (BP) that need to be improved to achieve a benchmark implementation of GMM at the design stage. The system provides valuable decision-making information for the planning and design of a GMM in terms of business organization.

  20. Survival outcome of women with stage IV uterine carcinosarcoma who received neoadjuvant chemotherapy followed by surgery.

    PubMed

    Matsuo, Koji; Johnson, Marian S; Im, Dwight D; Ross, Malcolm S; Bush, Stephen H; Yunokawa, Mayu; Blake, Erin A; Takano, Tadao; Klobocista, Merieme M; Hasegawa, Kosei; Ueda, Yutaka; Shida, Masako; Baba, Tsukasa; Satoh, Shinya; Yokoyama, Takuhei; Machida, Hiroko; Ikeda, Yuji; Adachi, Sosuke; Miyake, Takahito M; Iwasaki, Keita; Yanai, Shiori; Takeuchi, Satoshi; Nishimura, Masato; Nagano, Tadayoshi; Takekuma, Munetaka; Shahzad, Mian M K; Pejovic, Tanja; Omatsu, Kohei; Kelley, Joseph L; Ueland, Frederick R; Roman, Lynda D

    2018-03-01

    To examine survival of women with stage IV uterine carcinosarcoma (UCS) who received neoadjuvant chemotherapy followed by hysterectomy. This is a nested case-control study within a retrospective cohort of 1192 UCS cases. Women who received neoadjuvant chemotherapy followed by hysterectomy-based surgery for stage IV UCS (n = 26) were compared to those who had primary hysterectomy-based surgery without neoadjuvant chemotherapy for stage IV UCS (n = 120). Progression-free survival (PFS) and cause-specific survival (CSS) were examined. The most common regimen for neoadjuvant chemotherapy was carboplatin/paclitaxel (53.8%). The median number of neoadjuvant chemotherapy cycles was 4. PFS was similar between the neoadjuvant chemotherapy group and the primary surgery group (unadjusted hazard ratio [HR] 1.19, 95% confidence interval [CI] 0.75-1.89, P = 0.45). Similarly, CSS was comparable between the two groups (unadjusted HR 1.13, 95%CI 0.68-1.90, P = 0.64). When the types of neoadjuvant chemotherapy regimens were compared, women who received a carboplatin/paclitaxel regimen had better survival outcomes than those who received other regimens: PFS, unadjusted HR 0.38, 95%CI 0.15-0.93, P = 0.027; and CSS, unadjusted HR 0.21, 95%CI 0.07-0.61, P = 0.002. Our study found no statistically significant difference in survival between women with stage IV UCS who tolerated neoadjuvant chemotherapy and those who underwent primary surgery. © 2017 Wiley Periodicals, Inc.

  1. Evaluation of energy balance closure adjustment methods by independent evapotranspiration estimates from lysimeters and hydrological simulations

    DOE PAGES

    Mauder, Matthias; Genzel, Sandra; Fu, Jin; ...

    2017-11-10

    Non-closure of the surface energy balance is a frequently observed phenomenon in hydrometeorological field measurements made using the eddy-covariance method; it can be ascribed to an underestimation of the turbulent fluxes. Several approaches have been proposed to adjust the measured fluxes for this apparent systematic error. However, there are uncertainties about the partitioning of the energy balance residual between the sensible and latent heat fluxes, and about whether such a correction should be applied to 30-minute data or at longer time scales. The data for this study originate from two grassland sites in southern Germany, where measurements from weighable lysimeters are available as a reference. The adjusted evapotranspiration rates are also compared with joint energy and water balance simulations using a physically-based distributed hydrological model. We evaluate two adjustment methods: the first preserves the Bowen ratio, with the correction factor determined on a daily basis; the second attributes a smaller portion of the residual energy to the latent heat flux than to the sensible heat flux, closing the energy balance for every 30-minute flux integration interval. Both methods lead to an improved agreement of the eddy-covariance-based fluxes with the independent lysimeter estimates and the physically-based model simulations. The first method results in better comparability of evapotranspiration rates, and the second method leads to a smaller overall bias. These results are similar between both sites despite considerable differences in terrain complexity and grassland management. Moreover, we found that a daily adjustment factor leads to less scatter than a complete partitioning of the residual for every half-hour time interval. Lastly, the vertical temperature gradient in the surface layer and the friction velocity were identified as important predictors for a potential future parameterization of the energy balance residual.
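
    The first adjustment method, scaling both turbulent fluxes by a common factor so that they close the available energy while preserving the Bowen ratio, reduces to a few lines. A minimal sketch, in which the variable names and example numbers are assumptions:

```python
import numpy as np

def close_energy_balance(H, LE, Rn, G):
    """Adjust eddy-covariance fluxes so that H + LE = Rn - G, preserving the
    Bowen ratio H/LE (both fluxes are scaled by the same factor).
    Inputs are fluxes in W m^-2 (assumed half-hourly or daily means)."""
    H, LE = np.asarray(H, float), np.asarray(LE, float)
    available = np.asarray(Rn, float) - np.asarray(G, float)
    factor = available / (H + LE)  # energy balance closure correction factor
    return H * factor, LE * factor, factor

# Daily means (hypothetical): measured turbulent fluxes under-close by ~25%.
H, LE, Rn, G = 80.0, 160.0, 310.0, 10.0
H_adj, LE_adj, f = close_energy_balance(H, LE, Rn, G)
print(f"correction factor {f:.2f}: H {H}->{H_adj:.0f}, LE {LE}->{LE_adj:.0f} W/m^2")
```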

  2. Modelling Coastal Cliff Recession Based on the GIM-DDD Method

    NASA Astrophysics Data System (ADS)

    Gong, Bin; Wang, Shanyong; Sloan, Scott William; Sheng, Daichao; Tang, Chun'an

    2018-04-01

    The unpredictable and instantaneous collapse behaviour of coastal rocky cliffs may cause damage that extends significantly beyond the area of failure. Gravitational movements that occur during coastal cliff recession involve two major stages: the small deformation stage and the large displacement stage. In this paper, a method of simulating the entire progressive failure process of coastal rocky cliffs is developed based on the gravity increase method (GIM), the rock failure process analysis method and the discontinuous deformation analysis method, and it is referred to as the GIM-DDD method. The small deformation stage, which includes crack initiation, propagation and coalescence processes, and the large displacement stage, which includes block translation and rotation processes during the rocky cliff collapse, are modelled using the GIM-DDD method. In addition, acoustic emissions, stress field variations, crack propagation and failure mode characteristics are further analysed to provide insights that can be used to predict, prevent and minimize potential economic losses and casualties. The calculation and analytical results are consistent with previous studies, which indicate that the developed method provides an effective and reliable approach for performing rocky cliff stability evaluations and coastal cliff recession analyses and has considerable potential for improving the safety and protection of seaside cliff areas.

  3. Monte Carlo simulation of ferroelectric domain growth

    NASA Astrophysics Data System (ADS)

    Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.

    2006-01-01

    The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model, with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and the one-parameter dynamic scaling of the domain growth is demonstrated.

  4. Computational Fluid Dynamics Demonstration of Rigid Bodies in Motion

    NASA Technical Reports Server (NTRS)

    Camarena, Ernesto; Vu, Bruce T.

    2011-01-01

    The Design Analysis Branch (NE-Ml) at the Kennedy Space Center has not had the ability to accurately couple Rigid Body Dynamics (RBD) and Computational Fluid Dynamics (CFD). OVERFLOW-D is a flow solver developed by NASA with the capability to analyze and simulate dynamic motions with up to six degrees of freedom (6-DOF). Two simulations were prepared over the course of the internship to demonstrate 6-DOF motion of rigid bodies under aerodynamic loading. The geometries in the simulations were based on a conceptual Space Launch System (SLS). The first simulation was the motion of a Solid Rocket Booster (SRB) as it separates from its core stage. To reduce computational time during development, only half of the physical domain, with respect to the symmetry plane, was simulated; then a full solution was prepared and computed. The second simulation modeled the SLS as it departs from a launch pad under a 20-knot crosswind. This simulation was reduced to two dimensions (2D) to reduce both preparation and computation time. By allowing 2-DOF for translations and 1-DOF for rotation, the simulation predicted unrealistic rotation; it was then constrained to allow only translations.

  5. Effect of Radiotherapy Planning Complexity on Survival of Elderly Patients With Unresected Localized Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Chang H.; Bonomi, Marcelo; Cesaretti, Jamie

    2011-11-01

    Purpose: To evaluate whether complex radiotherapy (RT) planning was associated with improved outcomes in a cohort of elderly patients with unresected Stage I-II non-small-cell lung cancer (NSCLC). Methods and Materials: Using the Surveillance, Epidemiology, and End Results registry linked to Medicare claims, we identified 1998 patients aged >65 years with histologically confirmed, unresected stage I-II NSCLC. Patients were classified into an intermediate or complex RT planning group using Medicare physician codes. To address potential selection bias, we used propensity score modeling. Survival of patients who received intermediate and complex simulation was compared using Cox regression models adjusting for propensity scores and in a stratified and matched analysis according to propensity scores. Results: Overall, 25% of patients received complex RT planning. Complex RT planning was associated with better overall (hazard ratio 0.84; 95% confidence interval, 0.75-0.95) and lung cancer-specific (hazard ratio 0.81; 95% confidence interval, 0.71-0.93) survival after controlling for propensity scores. Similarly, stratified and matched analyses showed better overall and lung cancer-specific survival of patients treated with complex RT planning. Conclusions: The use of complex RT planning is associated with improved survival among elderly patients with unresected Stage I-II NSCLC. These findings should be validated in prospective randomized controlled trials.

  6. Flight Simulator and Training Human Factors Validation

    NASA Technical Reports Server (NTRS)

    Glaser, Scott T.; Leland, Richard

    2009-01-01

    Loss of control has been identified as the leading cause of aircraft accidents in recent years. Efforts have been made to better equip pilots to deal with these types of events, commonly referred to as upsets. A major challenge in these endeavors has been recreating the motion environments found in flight as the majority of upsets take place well beyond the normal operating envelope of large aircraft. The Environmental Tectonics Corporation has developed a simulator motion base, called GYROLAB, that is capable of recreating the sustained accelerations, or G-forces, and motions of flight. A two part research study was accomplished that coupled NASA's Generic Transport Model with a GYROLAB device. The goal of the study was to characterize physiological effects of the upset environment and to demonstrate that a sustained motion based simulator can be an effective means for upset recovery training. Two groups of 25 Air Transport Pilots participated in the study. The results showed reliable signs of pilot arousal at specific stages of similar upsets. Further validation also demonstrated that sustained motion technology was successful in improving pilot performance during recovery following an extensive training program using GYROLAB technology.

  7. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivastava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
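
    The general residual-bootstrap recipe for prediction intervals can be sketched as follows, here with a k-NN regressor standing in for the non-parametric model; this illustrates the idea rather than the paper's exact algorithm:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
n = 300
x = np.sort(rng.uniform(0, 10, n))[:, None]
y = np.sin(x[:, 0]) + rng.normal(0, 0.2, n)

def bootstrap_pi(x, y, x_new, b=200, level=0.95):
    """Bootstrap prediction intervals for a non-parametric (k-NN) regression."""
    preds = np.empty((b, len(x_new)))
    base = KNeighborsRegressor(15).fit(x, y)
    resid = y - base.predict(x)
    for i in range(b):
        idx = rng.integers(0, len(y), len(y))           # resample training pairs
        model = KNeighborsRegressor(15).fit(x[idx], y[idx])
        # model uncertainty + a resampled noise draw per prediction point
        preds[i] = model.predict(x_new) + rng.choice(resid, len(x_new))
    a = (1 - level) / 2
    return np.quantile(preds, [a, 1 - a], axis=0)

x_new = np.array([[2.0], [5.0]])
lo, hi = bootstrap_pi(x, y, x_new)
y_obs = np.array([0.95, -2.0])  # the second observation is clearly anomalous
for xi, yo, l, h in zip(x_new[:, 0], y_obs, lo, hi):
    print(f"x={xi}: PI=({l:.2f}, {h:.2f}), y={yo} ->",
          "anomaly" if not l <= yo <= h else "ok")
```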

  8. Factors affecting duration of the expulsive stage of parturition and piglet birth intervals in sows with uncomplicated, spontaneous farrowings.

    PubMed

    van Dijk, A J; van Rens, B T T M; van der Lende, T; Taverne, M A M

    2005-10-15

    Modern pig farming is still confronted with high perinatal piglet losses, which are mainly attributed to factors associated with the progress of piglet expulsion. The aim of this study was therefore to identify sow and piglet factors affecting the duration of the expulsive stage of farrowing and piglet birth intervals in spontaneously farrowing sows originating from five different breeds. In total, 211 litters were investigated. Breed affected the duration of the expulsive stage significantly: the shortest duration was found in Large White x Meishan F2 crossbred litters and the longest in Dutch Landrace litters. No effect of parity on the duration of the expulsive stage was found. An increase in litter size (P<0.01), an increase in the number of stillborn piglets per litter (P<0.05), and a decrease in gestation length (P<0.05, independently of litter size) all resulted in an increased duration of the expulsive stage of farrowing. A curvilinear relationship between birth interval and rank (relative position in the birth order) of the piglets was found. In addition, piglet birth intervals increased with increasing birth weight (P<0.001). Stillborn (P<0.01) and posteriorly presented (P<0.05) piglets were delivered after significantly longer birth intervals than liveborn and anteriorly presented piglets. The results on sow and piglet factors affecting the duration of the expulsive stage and piglet birth intervals obtained in this study contribute to an increased insight into the (patho)physiological aspects of perinatal mortality in pigs.

  9. Linearity Can Account for the Similarity Among Conventional, Frequency-Doubling, and Gabor-Based Perimetric Tests in the Glaucomatous Macula

    PubMed Central

    DUL, MITCHELL W.; SWANSON, WILLIAM H.

    2006-01-01

    Purposes: The purposes of this study are to compare macular perimetric sensitivities for conventional size III, frequency-doubling, and Gabor stimuli in terms of Weber contrast and to provide a theoretical interpretation of the results. Methods: Twenty-two patients with glaucoma performed four perimetric tests: a conventional Swedish Interactive Threshold Algorithm (SITA) 10-2 test with Goldmann size III stimuli, two frequency-doubling tests (FDT 10-2, FDT Macula) with counterphase-modulated grating stimuli, and a laboratory-designed test with Gabor stimuli. Perimetric sensitivities were converted to the reciprocal of Weber contrast, and sensitivities from different tests were compared using the Bland-Altman method. Effects of ganglion cell loss on perimetric sensitivities were then simulated with a two-stage neural model. Results: The average perimetric loss was similar for all stimuli until advanced stages of ganglion cell loss, in which perimetric loss tended to be greater for size III stimuli than for frequency-doubling and Gabor stimuli. Comparison of the experimental data and model simulation suggests that, in the macula, linear relations between ganglion cell loss and perimetric sensitivity loss hold for all three stimuli. Conclusions: Linear relations between perimetric loss and ganglion cell loss for all three stimuli can account for the similarity in perimetric loss until advanced stages. The results do not support the hypothesis that redundancy for frequency-doubling stimuli is lower than redundancy for size III stimuli. PMID:16840860

  10. Grounded Learning Experience: Helping Students Learn Physics through Visuo-Haptic Priming and Instruction

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Chieh Douglas

    In this dissertation, I investigate the effects of a grounded learning experience on college students' mental models of physics systems. The grounded learning experience consisted of a priming stage and an instruction stage, and within each stage, one of two different types of visuo-haptic representation was applied: visuo-gestural simulation (visual modality and gestures) and visuo-haptic simulation (visual modality, gestures, and somatosensory information). A pilot study involving N = 23 college students examined how using different types of visuo-haptic representation in instruction affected people's mental model construction for physics systems. Participants' abilities to construct mental models were operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Findings from this pilot study revealed that, while both simulations significantly improved participants' mental modal construction for physics systems, visuo-haptic simulation was significantly better than visuo-gestural simulation. In addition, clinical interviews suggested that participants' mental model construction for physics systems benefited from receiving visuo-haptic simulation in a tutorial prior to the instruction stage. A dissertation study involving N = 96 college students examined how types of visuo-haptic representation in different applications support participants' mental model construction for physics systems. Participant's abilities to construct mental models were again operationalized through their pretest-to-posttest gain scores for a basic physics system and their performance on a transfer task involving an advanced physics system. Participants' physics misconceptions were also measured before and after the grounded learning experience. Findings from this dissertation study not only revealed that visuo-haptic simulation was significantly more effective in promoting mental model construction and remedying participants' physics misconceptions than visuo-gestural simulation, they also revealed that visuo-haptic simulation was more effective during the priming stage than during the instruction stage. Interestingly, the effects of visuo-haptic simulation in priming and visuo-haptic simulation in instruction on participants' pretest-to-posttest gain scores for a basic physics system appeared additive. These results suggested that visuo-haptic simulation is effective in physics learning, especially when it is used during the priming stage.

  11. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interaction in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of a crossover point of 2 simple regression lines, confidence intervals of the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined 2 factors that need to be considered in constructing confidence intervals of the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare 6 different methods for constructing confidence intervals of the crossover point in terms of the coverage rate, the proportion of true values that fall to the left or right of the confidence intervals, and the average width of the confidence intervals. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interaction requires sample sizes more than 500 to be able to provide sufficiently narrow confidence intervals to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
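
    Of the six methods compared, the percentile bootstrap is the simplest to sketch; the data-generating model below, with a crossover at x* = -b2/b3 = 2, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 600
x = rng.normal(size=n)
z = rng.binomial(1, 0.5, n)
# y = b0 + b1*x + b2*z + b3*x*z + noise; the two simple regression lines
# (z=0 and z=1) cross where b2 + b3*x = 0, i.e. x* = -b2/b3 = 2 here.
y = 1.0 + 0.5 * x - 0.8 * z + 0.4 * x * z + rng.normal(0, 1, n)

def crossover(x, z, y):
    X = np.column_stack([np.ones_like(x), x, z, x * z])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    return -b[2] / b[3]  # ratio estimator: can yield very wide CIs if b3 is near 0

boot = np.array([crossover(x[i], z[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"crossover estimate {crossover(x, z, y):.2f}, "
      f"percentile bootstrap 95% CI ({lo:.2f}, {hi:.2f})")
```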

  13. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    PubMed Central

    Liu, Datong; Peng, Yu; Peng, Xiyuan

    2018-01-01

    Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels and provide an uncertainty representation, probability prediction methods (e.g., Gaussian process regression (GPR) and relevance vector machine (RVM)) are especially well suited to anomaly detection for sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold for a testing sample and is usually set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between PI width and PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs, and the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
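
    The underlying selection loop is easy to sketch: sweep candidate CPs, flag points that fall outside the corresponding prediction interval, and score each CP with a Youden-style index. The code below is a minimal stand-in that uses the classical Youden index and an exhaustive grid instead of the paper's modified index and simulated annealing; the function and parameter names are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def best_coverage_probability(mu, sigma, y, labels, cps=np.linspace(0.80, 0.999, 50)):
        """Pick the CP whose prediction interval best separates anomalies.

        mu, sigma : predictive mean/std per test point (e.g., from GPR)
        y         : observed series values
        labels    : 1 = true anomaly, 0 = normal
        """
        mu, sigma, y, labels = (np.asarray(a) for a in (mu, sigma, y, labels))
        best_cp, best_j = None, -np.inf
        for cp in cps:
            half = norm.ppf(0.5 + cp / 2) * sigma   # half-width of the CP interval
            flagged = np.abs(y - mu) > half         # outside the PI -> anomaly
            tpr = flagged[labels == 1].mean()
            fpr = flagged[labels == 0].mean()
            j = tpr - fpr                           # classical Youden index
            if j > best_j:
                best_cp, best_j = cp, j
        return best_cp, best_j
    ```

    In the paper the index is modified to also account for PI width, and simulated annealing searches the CP space; the grid search above only conveys the structure of the optimization.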

  14. Assessment of the Effects of Various Precipitation Forcings on Flood Forecasting Potential Using WRF-Hydro Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Fang, N. Z.

    2017-12-01

    A potential flood forecast system is under development for the Upper Trinity River Basin (UTRB) in North Central Texas using the WRF-Hydro model. The Routing Application for the Parallel Computation of Discharge (RAPID) is utilized as the channel routing module to simulate streamflow. Model performance analysis was conducted with three quantitative precipitation estimates (QPEs): the North American Land Data Assimilation System (NLDAS) rainfall, the Multi-Radar Multi-Sensor (MRMS) QPE, and the National Centers for Environmental Prediction (NCEP) quality-controlled stage IV estimates. Prior to hydrologic simulation, QPE performance was assessed on two time scales (daily and hourly) using the Community Collaborative Rain, Hail and Snow Network (CoCoRaHS) and Hydrometeorological Automated Data System (HADS) hourly products. The calibrated WRF-Hydro model was then evaluated by comparing simulated streamflow against USGS observations for each QPE product. The results imply that the NCEP stage IV estimates have the best accuracy among the three QPEs on both time scales, while the NLDAS rainfall performs poorly because of its coarse spatial resolution. Furthermore, precipitation bias has a pronounced impact on flood forecasting skill, as the root mean squared errors are significantly reduced by replacing NLDAS rainfall with NCEP stage IV estimates. This study also demonstrates that accurate simulation results can be achieved when initial soil moisture values are well characterized in the WRF-Hydro model. Future research effort will therefore be invested in incorporating data assimilation, with a focus on the initial states of soil properties for the UTRB.

  15. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.

    PubMed

    Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul

    2018-02-01

    Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations of up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.

  16. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126

  17. Simulation of multi-stage nonlinear bone remodeling induced by fixed partial dentures of different configurations: a comparative clinical and numerical study.

    PubMed

    Liao, Zhipeng; Yoda, Nobuhiro; Chen, Junning; Zheng, Keke; Sasaki, Keiichi; Swain, Michael V; Li, Qing

    2017-04-01

    This paper aimed to develop a clinically validated bone remodeling algorithm by integrating bone's dynamic properties in a multi-stage fashion, based on a four-year clinical follow-up of implant treatment. The configurational effects of fixed partial dentures (FPDs) were explored using a multi-stage remodeling rule. Three-dimensional real-time occlusal loads during maximum voluntary clenching were measured with a piezoelectric force transducer and incorporated into a computerized tomography-based finite element mandibular model. Virtual X-ray images were generated based on the simulation and statistically correlated with clinical data using linear regressions. The strain energy density-driven remodeling parameters were regulated over the time frame considered. A linear single-stage bone remodeling algorithm, with a single set of constant remodeling parameters, was found to fit the clinical data poorly in linear regression (low R^2 and R), whereas a time-dependent multi-stage algorithm better simulated the remodeling process (high R^2 and R) against the clinical results. The three-implant-supported and distally cantilevered FPDs presented noticeable and continuous bone apposition, mainly adjacent to the cervical and apical regions. The bridged and mesially cantilevered FPDs showed bone resorption or no visible bone formation in some areas. Time-dependent variation of the bone remodeling parameters is recommended to better correlate remodeling simulation with clinical follow-up. The position of FPD pontics plays a critical role in mechanobiological functionality and bone remodeling. Caution should be exercised when selecting a cantilever FPD due to the risk of bone resorption from overloading.

  18. Does the covariance structure matter in longitudinal modelling for the prediction of future CD4 counts?

    PubMed

    Taylor, J M; Law, N

    1998-10-30

    We investigate the importance of the assumed covariance structure for longitudinal modelling of CD4 counts and examine how individual predictions of future CD4 counts are affected by that structure. We consider four covariance structures: one based on an integrated Ornstein-Uhlenbeck stochastic process, one based on Brownian motion, and two derived from standard linear and quadratic random-effects models. Using data from the Multicenter AIDS Cohort Study and from a simulation study, we show that there is a noticeable deterioration in the coverage rate of confidence intervals, as well as a loss in efficiency, if we assume the wrong covariance. The quadratic random-effects model is found to be the best in terms of correctly calibrated prediction intervals, but is substantially less efficient than the others. Incorrectly specifying the covariance structure as linear random effects gives prediction intervals that are too narrow, with poor coverage rates. The model based on the integrated Ornstein-Uhlenbeck stochastic process is the preferred one of the four considered because of its efficiency and robustness properties. We also use the difference between future predicted and observed CD4 counts to assess an appropriate transformation of CD4 counts; a fourth root, cube root and square root all appear reasonable choices.
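
    The integrated Ornstein-Uhlenbeck covariance has a closed form (for an OU velocity process with decay alpha and diffusion parameter sigma2), which is all that is needed to build the model's covariance matrix; a minimal sketch, with parameter names assumed:

    ```python
    import numpy as np

    def iou_cov(times, alpha, sigma2):
        """Covariance matrix of an integrated Ornstein-Uhlenbeck process:

        Cov(X(s), X(t)) = sigma2/(2*alpha**3) *
            (2*alpha*min(s,t) + exp(-alpha*s) + exp(-alpha*t)
             - 1 - exp(-alpha*|t - s|))
        """
        s = np.asarray(times, float)[:, None]
        t = np.asarray(times, float)[None, :]
        return sigma2 / (2 * alpha**3) * (
            2 * alpha * np.minimum(s, t)
            + np.exp(-alpha * s) + np.exp(-alpha * t)
            - 1 - np.exp(-alpha * np.abs(s - t))
        )
    ```

    One reason for the robustness noted above is that the IOU family contains Brownian motion as a limiting case, so it can adapt to data that the more rigid random-effects structures fit poorly.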

  19. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models, which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models, although this relative improvement decreases with an increasing number of sample points and input parameter dimensions. Since the computational time and effort for generating the sample designs are identical in the two approaches, the use of midpoint LHS as the initial design in OLHS is recommended.
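
    The difference between the two initializations is only the offset used inside each stratum, as the hypothetical generator below makes explicit (a sketch, not code from the study):

    ```python
    import numpy as np

    def initial_lhs(n, d, midpoint=False, seed=None):
        """n-point Latin hypercube in [0,1)^d.

        midpoint=False -> random point in each stratum (random LHS)
        midpoint=True  -> centre of each stratum      (midpoint LHS)
        """
        rng = np.random.default_rng(seed)
        design = np.empty((n, d))
        for j in range(d):
            strata = rng.permutation(n)                        # one stratum per row
            offset = 0.5 if midpoint else rng.uniform(size=n)  # position inside stratum
            design[:, j] = (strata + offset) / n
        return design
    ```

    Either design is then passed to the space-filling optimizer; the study's recommendation amounts to starting that optimizer from initial_lhs(n, d, midpoint=True).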

  20. An LBM based model for initial stenosis development in the carotid artery

    NASA Astrophysics Data System (ADS)

    Stamou, A. C.; Buick, J. M.

    2016-05-01

    A numerical scheme is proposed to simulate the early stages of stenosis development based on the properties of blood flow in the carotid artery, computed using the lattice Boltzmann method. The model is developed on the premise, supported by evidence from the literature, that the stenosis develops in regions of low velocity and low wall shear stress. The model is based on two spatial parameters which relate to the extent to which the stenosis can grow in each development phase. Simulations of stenosis development are presented for a range of the spatial parameters to determine suitable ranges for their application. Flow fields are also presented which indicate that the stenosis develops in a realistic manner, providing evidence that stenosis development is indeed influenced by low shear stress, rather than occurring in such areas coincidentally.

  1. Statistical analysis of dimer formation in supersaturated metal vapor based on molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Korenchenko, Anna E.; Vorontsov, Alexander G.; Gelchinski, Boris R.; Sannikov, Grigorii P.

    2018-04-01

    We discuss the problem of dimer formation during the homogeneous nucleation of atomic metal vapor in an inert gas environment. We simulated nucleation with molecular dynamics and carried out a statistical analysis of double- and triple-atomic collisions as the two routes to long-lived diatomic complex formation. A close pair of atoms whose lifetime exceeds the mean time interval between atom-atom collisions is called a long-lived diatomic complex. We found that double- and triple-atomic collisions gave approximately the same probabilities of long-lived diatomic complex formation, but the internal energy of the resulting state was substantially lower in the latter case. Some diatomic complexes formed in three-particle collisions are stable enough to serve as critical nuclei.

  2. Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.

    PubMed

    Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J

    2017-10-01

    Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. Our objective was to create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App Store and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, divided into expert and novice categories, completed the simulator in test mode and answered a post-module survey, which was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons

  3. Impact of a Two-step Emergency Department Triage Model with START, then CTAS, on Patient Flow During a Simulated Mass-casualty Incident.

    PubMed

    Lee, James S; Franc, Jeffrey M

    2015-08-01

    A high influx of patients during a mass-casualty incident (MCI) may disrupt patient flow in an already overcrowded emergency department (ED) that is functioning beyond its operating capacity. This pilot study examined the impact of a two-step ED triage model using Simple Triage and Rapid Treatment (START) for pre-triage, followed by triage with the Canadian Triage and Acuity Scale (CTAS), on patient flow during an MCI simulation exercise. Hypothesis/Problem: It was hypothesized that there would be no difference in time intervals or patient volumes at each patient-flow milestone. Physicians and nurses participated in a computer-based tabletop disaster simulation exercise. Physicians were randomized into the intervention group using START, then CTAS, or the control group using START alone. Patient-flow milestones, including time intervals and patient volumes from ED arrival to triage, ED arrival to bed assignment, ED arrival to physician assessment, and ED arrival to disposition decision, were compared. Triage accuracy was compared for secondary purposes. There were no significant differences in the time interval from ED arrival to triage (mean difference 108 seconds; 95% CI, -353 to 596 seconds; P=1.0), ED arrival to bed assignment (mean difference 362 seconds; 95% CI, -1,269 to 545 seconds; P=1.0), ED arrival to physician assessment (mean difference 31 seconds; 95% CI, -1,104 to 348 seconds; P=0.92), or ED arrival to disposition decision (mean difference 175 seconds; 95% CI, -1,650 to 1,300 seconds; P=1.0) between the two groups. There were no significant differences in the volume of patients triaged (32% vs 34%; 95% CI for the difference -16% to 21%; P=1.0), assigned a bed (16% vs 21%; 95% CI for the difference -11% to 20%; P=1.0), assessed by a physician (20% vs 22%; 95% CI for the difference -14% to 19%; P=1.0), or with a disposition decision (20% vs 9%; 95% CI for the difference -25% to 4%; P=.34) between the two groups. The accuracy of triage was similar in both groups (57% vs 70%; 95% CI for the difference -15% to 41%; P=.46). Experienced triage nurses were able to apply CTAS effectively during an MCI simulation exercise. A two-step ED triage model using START, then CTAS, had similar patient flow and triage accuracy when compared to START alone.

  4. Re-Infection Outcomes following One- and Two-Stage Surgical Revision of Infected Hip Prosthesis: A Systematic Review and Meta-Analysis

    PubMed Central

    Kunutsor, Setor K.; Whitehouse, Michael R.; Blom, Ashley W.; Beswick, Andrew D.

    2015-01-01

    Background: The two-stage revision strategy has been claimed to be the "gold standard" for treating prosthetic joint infection. The one-stage revision strategy remains an attractive alternative option; however, its effectiveness in comparison to the two-stage strategy remains uncertain. Objective: To compare the effectiveness of one- and two-stage revision strategies in treating prosthetic hip infection, using re-infection as an outcome. Design: Systematic review and meta-analysis. Data Sources: MEDLINE, EMBASE, Web of Science, Cochrane Library, manual search of bibliographies to March 2015, and email contact with investigators. Study Selection: Cohort studies (prospective or retrospective) conducted in generally unselected patients with prosthetic hip infection treated exclusively by one- or two-stage revision and with re-infection outcomes reported within two years of revision. No clinical trials were identified. Review Methods: Data were extracted by two independent investigators and a consensus was reached with involvement of a third. Rates of re-infection from 38 one-stage studies (2,536 participants) and 60 two-stage studies (3,288 participants) were aggregated using random-effects models after arcsine transformation, and were grouped by study and population level characteristics. Results: In one-stage studies, the rate (95% confidence interval) of re-infection was 8.2% (6.0–10.8). The corresponding re-infection rate after two-stage revision was 7.9% (6.2–9.7). Re-infection rates remained generally similar when grouped by several study and population level characteristics. There was no strong evidence of publication bias among contributing studies. Conclusion: Evidence from aggregate published data suggests similar re-infection rates after one- or two-stage revision among unselected patients. More detailed analyses under a broader range of circumstances and exploration of other sources of heterogeneity will require collaborative pooling of individual participant data. Systematic Review Registration: PROSPERO 2015: CRD42015016559 PMID:26407003
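
    The aggregation step can be sketched as standard random-effects pooling of proportions on the arcsine scale. The snippet below is a minimal illustration, assuming a simple arcsine transform with approximate variance 1/(4n) and the DerSimonian-Laird between-study variance estimator; the paper does not publish its exact estimator, and the function name is hypothetical.

    ```python
    import numpy as np

    def pooled_rate_arcsine(events, totals):
        """Random-effects (DerSimonian-Laird) pooled proportion after an
        arcsine variance-stabilizing transformation."""
        events, totals = np.asarray(events, float), np.asarray(totals, float)
        y = np.arcsin(np.sqrt(events / totals))    # transformed proportions
        v = 1.0 / (4.0 * totals)                   # approximate within-study variance
        w = 1.0 / v
        mu_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - mu_fixed) ** 2)        # Cochran's Q
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
        w_re = 1.0 / (v + tau2)
        mu = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        lo = np.sin(np.clip(mu - 1.96 * se, 0, np.pi / 2)) ** 2
        hi = np.sin(np.clip(mu + 1.96 * se, 0, np.pi / 2)) ** 2
        return np.sin(mu) ** 2, (lo, hi)           # back-transformed rate and CI
    ```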

  5. Pitfalls of Ovarian Ablative Magnetic Resonance-guided Radiation Therapy for Refractory Endometriosis

    PubMed Central

    Tetar, Shyama; Bohoudi, Omar; Nieboer, Theodoor; Lagerwaard, Frank

    2018-01-01

    In this case presentation, we describe the challenges of performing magnetic resonance-guided radiation therapy (MRgRT) with plan adaptation in a patient with advanced endometriosis, in whom several prior therapeutic attempts were unsuccessful and extensive pelvic irradiation was regarded as too toxic. Treatment was delivered in two sessions: first for the right ovary, which appeared to be the only active one, and at a later stage for the left ovary. Some logistical problems were encountered during the preparation of the first treatment; these were subsequently resolved for the second treatment by using transvaginal ultrasound to determine the optimum time point for simulation and delivery. Using breath-hold gated delivery and plan adaptation, the radiation dose to the bowel could be minimized, resulting in good tolerance of treatment. Because of the need to simulate and deliver within the brief time span during which the follicles in the ovaries are visible, a single-fraction dose of 8 Gy was used in our patient. The hormonal outcome after her second treatment is still pending. In conclusion, MRgRT with plan adaptation is feasible for the occasional patient with refractory endometriosis. Simulation and delivery need to be synchronized with the menstrual cycle, ensuring that the Graafian follicles allow the ovaries to be visible on magnetic resonance imaging (MRI). Because the ovaries are only visible on T2-weighted MRI for a very brief period of time, we suggest that it is preferable to use single-fraction radiotherapy with a brief interval between simulation imaging and delivery. PMID:29750135

  6. Study of insect succession and rate of decomposition on a partially burned pig carcass in an oil palm plantation in Malaysia.

    PubMed

    Heo, Chong Chin; Mohamad, Abdullah Marwi; Ahmad, Firdaus Mohd Salleh; Jeffery, John; Kurahashi, Hiromu; Omar, Baharudin

    2008-12-01

    Insects found associated with a corpse can be used as one of the indicators in estimating the postmortem interval (PMI). The objective of this study was to compare the stages of decomposition and faunal succession between a partially burnt pig (Sus scrofa Linnaeus) and a natural pig (as control). The burning simulated a real crime in which the victim is burnt by the murderer. Two young pigs weighing approximately 10 kg were used in this study. Both pigs died from pneumonia and were immediately placed in an oil palm plantation near a pig farm in Tanjung Sepat, Selangor, Malaysia. One pig was partially burnt with 1 liter of petrol while the other served as control. Both carcasses were visited twice per day for the first week and once thereafter. Adult flies and larvae on the carcasses were collected and later processed in a forensic entomology laboratory. Results showed that there was no significant difference in the rate of decomposition or the sequence of faunal succession between the two pig carcasses. Both carcasses were completely decomposed to the remains stage after nine days. The species of flies visiting the pig carcasses consisted of blow flies (Chrysomya megacephala, Chrysomya rufifacies, Hemipyrellia ligurriens), flesh flies (Sarcophagidae), a muscid fly (Ophyra spinigera), a soldier fly (Hermetia illucens), coffin flies (Phoridae) and scavenger flies (Sepsidae). The only difference noted was in the number of adult flies, with more flies seen on the control carcass. Faunal succession on both pig carcasses followed the sequence Calliphoridae, Sarcophagidae, Muscidae, Phoridae and lastly Stratiomyidae, although there was overlap in the appearance of members of these families. Blow flies continued to oviposit on both carcasses. Hence, the postmortem interval (PMI) can still be estimated from a partially burnt pig carcass.

  7. Heart rate variability (HRV) and muscular system activity (EMG) in cases of crash threat during simulated driving of a passenger car.

    PubMed

    Zużewicz, Krystyna; Roman-Liu, Danuta; Konarska, Maria; Bartuzi, Paweł; Matusiak, Krzysztof; Korczak, Dariusz; Lozia, Zbigniew; Guzek, Marek

    2013-10-01

    The aim of the study was to verify whether simultaneous responses from the muscular and circulatory systems occur in the driver's body under simulated conditions of a crash threat. The study was carried out in a passenger car driving simulator. The crash was included in a driving test scenario developed in an urban setting. In a group of 22 young male subjects, two physiological signals, ECG and EMG, were continuously recorded. The length of the RR interval in the ECG signal was assessed. An HRV analysis was performed in the time and frequency domains for 1-minute record segments at rest (seated position), during undisturbed driving, and both during and several minutes after the crash. For the left and right side muscles, m. trapezius (TR) and m. flexor digitorum superficialis (FDS), the EMG signal amplitude was determined, and the percentage of maximal voluntary contraction (MVC) was compared between driving and the crash. As for the ECG signal, changes occurred in most of the drivers in the parameter values reflecting HRV in the time domain; significant changes were noted in the mean length of the RR intervals (mRR). As for the EMG signal, the changes in amplitude concerned the signal recorded from the FDS muscle. The changes in ECG and EMG were simultaneous in half of the cases. Parameters such as mRR (ECG signal) and the FDS-L amplitude (EMG signal) responded to the accident risk. Under simulated conditions, responses from the circulatory and musculoskeletal systems are not always simultaneous. The results indicate that a more complete picture of the driver's response to a crash in road traffic is obtained from parallel recording of the two physiological signals (ECG and EMG).
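
    The time-domain quantities referred to here are simple statistics of the RR series; a minimal sketch (units and function names assumed):

    ```python
    import numpy as np

    def hrv_time_domain(rr_ms):
        """Basic time-domain HRV metrics from a 1-minute RR-interval segment (ms)."""
        rr = np.asarray(rr_ms, float)
        diff = np.diff(rr)
        return {
            "mRR":   rr.mean(),                  # mean RR interval
            "SDNN":  rr.std(ddof=1),             # overall variability
            "RMSSD": np.sqrt(np.mean(diff**2)),  # beat-to-beat variability
        }
    ```

    A drop in mRR during the crash segment corresponds to the heart-rate acceleration reported above.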

  8. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers the Poisson temporal occurrence of earthquakes and presents a way to integrate the uncertainties of the estimates of the mean activity rate and the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model of magnitude distribution is used, the interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When the nonparametric kernel estimation of magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to the interval estimation of the seismic hazard functions, relative to the approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real dataset examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of the hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
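
    The point versions of the two hazard functions have closed forms under the Poisson and (unbounded) Gutenberg-Richter assumptions; the interval versions studied in the paper come from propagating the estimator uncertainty through these expressions. A sketch with assumed symbol names:

    ```python
    import numpy as np

    def exceedance_probability(rate, b, m, m_min, t):
        """P(at least one event with magnitude >= m within time t), for Poisson
        occurrences (mean activity rate `rate` above m_min) and an unbounded
        Gutenberg-Richter magnitude distribution with b-value `b`."""
        lam_m = rate * 10.0 ** (-b * (m - m_min))   # rate of events with M >= m
        return 1.0 - np.exp(-lam_m * t)

    def mean_return_period(rate, b, m, m_min):
        """Mean time between events with magnitude >= m."""
        return 1.0 / (rate * 10.0 ** (-b * (m - m_min)))
    ```

    The paper's threshold relates directly to the first function: when the product rate * t is small, the interval width is dominated by the uncertainty in `rate`, whereas for larger products the magnitude-distribution term takes over.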

  9. A Bayesian hierarchical model with novel prior specifications for estimating HIV testing rates.

    PubMed

    An, Qian; Kang, Jian; Song, Ruiguang; Hall, H Irene

    2016-04-30

    Human immunodeficiency virus (HIV) infection is a severe infectious disease actively spreading globally, and acquired immunodeficiency syndrome (AIDS) is an advanced stage of HIV infection. The HIV testing rate, that is, the probability that an AIDS-free HIV-infected person seeks a test for HIV during a particular time interval, given that no previous positive test was obtained prior to the start of that interval, is an important parameter for public health. In this paper, we propose a Bayesian hierarchical model with two levels of hierarchy to estimate the HIV testing rate using annual AIDS and AIDS-free HIV diagnosis data. At level one, we model the latent number of HIV infections for each year using a Poisson distribution with the intensity parameter representing the HIV incidence rate. At level two, the annual numbers of AIDS and AIDS-free HIV diagnosed cases and all undiagnosed cases, stratified by the year of HIV infection, are modeled using a multinomial distribution with parameters including the HIV testing rate. We propose a new class of priors for the HIV incidence rate and HIV testing rate that takes into account the temporal dependence of these parameters to improve estimation accuracy. We develop an efficient posterior computation algorithm based on the adaptive rejection metropolis sampling technique. We demonstrate our model using simulation studies and an analysis of the national HIV surveillance data in the USA. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Comparison of alternative flue gas dry treatment technologies in waste-to-energy processes.

    PubMed

    Dal Pozzo, Alessandro; Antonioni, Giacomo; Guglielmi, Daniele; Stramigioli, Carlo; Cozzani, Valerio

    2016-05-01

    Acid gases such as HCl and SO2 are harmful both to human health and to ecosystem integrity; hence, their removal is a key step in the flue gas treatment of Waste-to-Energy (WtE) plants. Methods based on the injection of dry sorbents are among the Best Available Techniques for acid gas removal. In particular, systems based on double reaction and filtration stages represent an effective technology for emission control. The aim of the present study is the simulation of the performance of a reference two-stage (2S) dry treatment system and its comparison with three benchmark alternatives based on single-stage sodium bicarbonate injection. A modelling procedure was applied in order to identify the optimal operating configuration of the 2S system for different reference waste compositions, and to determine the total annual cost of operation. Taking into account both operating and capital costs, the 2S system appears to be the most cost-effective solution for medium- to high-chlorine-content wastes. A Monte Carlo sensitivity analysis was carried out to assess the robustness of the results. Copyright © 2016. Published by Elsevier Ltd.

  11. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis.

    PubMed

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-07-01

    Conventional non-surgical periodontal therapy is carried out on a quadrant basis with 1-2 week intervals. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. The therapeutic intervention may have a systemic effect on blood count in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration.

  12. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis

    PubMed Central

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-01-01

    Background: Conventional non-surgical periodontal therapy is carried out on a quadrant basis with 1-2 week intervals. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. Materials and Methods: A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. Results: The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. Conclusion: The therapeutic intervention may have a systemic effect on blood count in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration. PMID:24174726

  13. Fault detection for discrete-time LPV systems using interval observers

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-10-01

    This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity, and nonnegativity conditions lead to less conservative LMI characterisations. Furthermore, without the need to design residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for the FD decision. Finally, simulation results are presented to show the effectiveness and superiority of the proposed method.
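
    The mechanism can be illustrated on a time-invariant toy system: if the observer gain L makes A - L*C elementwise nonnegative, then upper and lower state estimates propagated with the disturbance bounds are guaranteed to bracket the true state, and the fault decision is simply whether the measurement leaves the predicted output interval. Everything below (matrices, gain, and bounds) is an illustrative assumption, not the paper's parameter-dependent design.

    ```python
    import numpy as np

    # Toy instance: x+ = A x + w, y = C x + v, with |w| <= w_bar and |v| <= v_bar
    # elementwise. The gain L is chosen so that F = A - L*C is elementwise
    # nonnegative, which makes the interval recursion below order-preserving.
    A = np.array([[0.5, 0.1],
                  [0.2, 0.4]])
    C = np.array([[1.0, 0.0]])
    L = np.array([0.3, 0.1])        # observer gain
    w_bar = np.array([0.01, 0.01])  # disturbance bound
    v_bar = 0.02                    # measurement-noise bound

    F = A - np.outer(L, C[0])
    assert (F >= 0).all()           # nonnegativity condition of this toy design

    def observer_step(x_lo, x_hi, y):
        """One interval-observer update plus the residual-interval FD test."""
        # fault declared when y leaves the output interval implied by the bounds
        fault = not (C[0] @ x_lo - v_bar <= y <= C[0] @ x_hi + v_bar)
        x_hi = F @ x_hi + L * y + w_bar + np.abs(L) * v_bar
        x_lo = F @ x_lo + L * y - w_bar - np.abs(L) * v_bar
        return x_lo, x_hi, fault
    ```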

  14. 3D conformal radiation therapy for palliative treatment of canine nasal tumors.

    PubMed

    Buchholz, Julia; Hagen, Regine; Leo, Chiara; Ebling, Alessia; Roos, Malgorzata; Kaser-Hotz, Barbara; Bley, Carla Rohrer

    2009-01-01

    We evaluated the response of 38 dogs treated with a coarsely fractionated palliative radiation protocol using CT-based 3D treatment planning. Dogs with histologically confirmed malignant nasal tumors were studied. Treatment prescriptions consisted of 3-4 x 8 Gy, 4-5 x 6 Gy, or 10 x 3 Gy fractions. Selected patient and tumor factors were evaluated for an effect on outcome. Resolution of clinical signs was reported after irradiation in all dogs. Acute toxicities were mild and short-lived. Thirty-seven of 38 dogs died or were euthanized due to tumor-related disease. Overall median progression-free interval (PFI) was 10 months. Tumor stage affected response, with modified stage 1 patients having a median PFI of 21.3 months vs. a median PFI of 8.5 months for modified stage 2 patients (P = 0.0006). Modified stage was the only factor significantly related to outcome. Based on these findings, a palliative radiation prescription based on computerized treatment planning may be justified in some canine nasal tumor patients.

  15. Mean estimation in highly skewed samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pederson, S P

    The problem of inference for the mean of a highly asymmetric distribution is considered. Even with large sample sizes, usual asymptotics based on normal theory give poor answers, as the right-hand tail of the distribution is often under-sampled. This paper attempts to improve performance in two ways. First, modifications of the standard confidence interval procedure are examined. Second, diagnostics are proposed to indicate whether or not inferential procedures are likely to be valid. The problems are illustrated with data simulated from an absolute value Cauchy distribution.

  16. Applications of asymptotic confidence intervals with continuity corrections for asymmetric comparisons in noninferiority trials.

    PubMed

    Soulakova, Julia N; Bright, Brianna C

    2013-01-01

    A large-sample problem of demonstrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods for demonstrating noninferiority involve constructing the lower two-sided confidence bound for the difference between the binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way; that is, only the referent proportion, out of the experimental and referent pair, is directly involved in the expression for the variance of the difference between the two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can potentially have smaller coverage probability than the nominal level, even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportions, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
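
    The common skeleton of the three procedures is a lower confidence bound on p_e - p_r compared against -margin. The sketch below uses the ordinary Wald variance purely for illustration; the Anbar and Falk-Koch variants replace the experimental-arm term with expressions built from the referent proportion alone, which is the asymmetry described above. Names and defaults are assumptions.

    ```python
    import math

    def noninferiority_test(x_e, n_e, x_r, n_r, margin, z=1.96, cc=0.0):
        """Declare noninferiority if the lower confidence bound on p_e - p_r
        exceeds -margin. Wald-type variance with an optional continuity
        correction cc (e.g., a Yates-style term)."""
        p_e, p_r = x_e / n_e, x_r / n_r
        se = math.sqrt(p_e * (1 - p_e) / n_e + p_r * (1 - p_r) / n_r)
        lower = (p_e - p_r) - cc - z * se
        return lower > -margin, lower
    ```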

  17. Diagnosis of Persistent Infection in Prosthetic Two-Stage Exchange: Evaluation of the Effect of Sonication on Antibiotic Release from Bone Cement Spacers.

    PubMed

    Mariaux, Sandrine; Furustrand Tafin, Ulrika; Borens, Olivier

    2018-01-01

    Introduction: When treating periprosthetic joint infection with a two-stage procedure, antibiotic-impregnated spacers can be used in the interval between prosthesis removal and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of the study was to assess whether sonication causes an elution of antibiotics, leading to elevated antibiotic concentrations in the sonication fluid that inhibit bacterial growth and thus cause false-negative cultures. Methods: A prospective monocentric study was performed from September 2014 to March 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and the agreement of the patient to participate in the study. Spacers were made of gentamicin-containing cement to which tobramycin and vancomycin were added. Antibiotic concentrations in the sonication fluid were determined by liquid chromatography-mass spectrometry (LC-MS). Results: 30 patients were identified (15 hip, 14 knee, and 1 ankle arthroplasty). No case of culture-positive sonicated spacer fluid was observed in our series. In the sonication fluid, median concentrations of 13.2 µg/ml, 392 µg/ml and 16.6 µg/ml were detected for vancomycin, tobramycin and gentamicin, respectively. According to the European Committee on Antimicrobial Susceptibility Testing (EUCAST), these concentrations released from the cement spacer during sonication are higher than the minimal inhibitory concentrations (MICs) for most bacteria relevant in prosthetic joint infections. Conclusion: Spacer sonication cultures remained sterile in all of our cases. The elevated concentrations of antibiotics released during sonication could partly explain the negative cultures of sonicated spacers. The absence of an antibiotic-free interval during the two-stage procedure may also contribute to false-negative sonication cultures of spacers.

  18. Study on launch scheme of space-net capturing system.

    PubMed

    Gao, Qingyu; Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang

    2017-01-01

    With continuous progress in active debris-removal technology, scientists are increasingly interested in the concept of the space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system with great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on the finite element method and is verified by a full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point-traction scheme of the space-net system is superior to the traditional one-step ejection and four-point-traction launch scheme.

  19. Study on launch scheme of space-net capturing system

    PubMed Central

    Zhang, Qingbin; Feng, Zhiwei; Tang, Qiangang

    2017-01-01

    With continuous progress in active debris-removal technology, scientists are increasingly interested in the concept of the space-net capturing system. The space-net capturing system is a long-range-launch flexible capture system with great potential to capture non-cooperative targets such as inactive satellites and upper stages. In this work, the launch scheme is studied by experiment and simulation, including two-step ejection and multi-point-traction analyses. The numerical model of the tether/net is based on the finite element method and is verified by a full-scale ground experiment. The results of the ground experiment and numerical simulation show that the two-step ejection and six-point-traction scheme of the space-net system is superior to the traditional one-step ejection and four-point-traction launch scheme. PMID:28877187

  20. [Comparative effects of nebivolol and valsartan on atrial electromechanical coupling in newly diagnosed stage 1 hypertensive patients].

    PubMed

    Altun, Burak; Acar, Gürkan; Akçay, Ahmet; Sökmen, Abdullah; Kaya, Hakan; Köroğlu, Sedat

    2011-10-01

    Hypertension is an important cardiovascular risk factor for the development of atrial fibrillation (AF). An increased atrial electromechanical coupling time interval, measured by tissue Doppler imaging, is accepted as an important predictor of AF development in hypertensive patients. The aim of this study was to compare the effects of valsartan, an angiotensin receptor blocker, and nebivolol, a beta-blocker, on atrial electromechanical coupling in newly diagnosed stage 1 hypertensive patients. The study included 60 newly diagnosed stage 1 hypertensive patients with no other systemic disease. The patients were randomized to receive nebivolol 5 mg (30 patients; 21 women, 9 men; mean age 48.4 ± 11.4 years) or valsartan 160 mg (30 patients; 21 women, 9 men; mean age 49.8 ± 11.3 years). All patients underwent tissue Doppler echocardiographic examination before and three months after treatment to compare the effects of the two drugs on atrial electromechanical coupling. Baseline blood pressures, electrocardiographic and echocardiographic findings, and atrial electromechanical coupling were similar in both groups (p>0.05). Both drugs significantly reduced blood pressure after treatment, with similar efficacy (p>0.05). Atrial electromechanical coupling time intervals showed significant decreases in both groups. Prolonged interatrial electromechanical time intervals in hypertensives improve with antihypertensive treatment.

  1. Adaptive Detection and ISI Mitigation for Mobile Molecular Communication.

    PubMed

    Chang, Ge; Lin, Lin; Yan, Hao

    2018-03-01

    Current studies on modulation and detection schemes in molecular communication mainly focus on scenarios with static transmitters and receivers. However, mobile molecular communication is needed in many envisioned applications, such as target tracking and drug delivery. Until now, investigations of mobile molecular communication have been limited. In this paper, a static transmitter and a mobile bacterium-based receiver performing a random walk are considered. In this mobile scenario, the channel impulse response changes due to the dynamic change of the distance between the transmitter and the receiver. Detection schemes that assume a fixed distance fail in such a scenario. Furthermore, the intersymbol interference (ISI) effect becomes more complex due to the dynamic character of the signal, which makes the estimation and mitigation of the ISI even more difficult. In this paper, an adaptive ISI mitigation method and two adaptive detection schemes are proposed for this mobile scenario. In the proposed scheme, adaptive ISI mitigation, estimation of the dynamic distance, and the corresponding impulse response reconstruction are performed in each symbol interval. Based on the dynamic channel impulse response in each interval, two adaptive detection schemes, concentration-based adaptive threshold detection and peak-time-based adaptive detection, are proposed for signal detection. Simulations demonstrate that the ISI effect is significantly reduced and that the adaptive detection schemes are reliable and robust for mobile molecular communication.
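
    For a point release into free 3-D diffusion, the channel impulse response is the Green's function c(d, t) = N (4*pi*D*t)^(-3/2) exp(-d^2/(4*D*t)), which peaks at t_peak = d^2/(6*D). Re-estimating the distance d in each symbol interval therefore lets both detectors adapt, as in the hypothetical helpers below (the symbols N, D, d and the threshold scaling factor are assumptions, not the paper's values).

    ```python
    import numpy as np

    def impulse_response(t, d, D, N=1e4):
        """Expected concentration at distance d, time t after releasing N molecules."""
        return N / (4 * np.pi * D * t) ** 1.5 * np.exp(-d**2 / (4 * D * t))

    def adaptive_threshold(d_est, D, t_sample, N=1e4, factor=0.5):
        """Concentration threshold rescaled to the current distance estimate."""
        return factor * impulse_response(t_sample, d_est, D, N)

    def peak_time(d_est, D):
        """Expected peak time of the impulse response at the estimated distance."""
        return d_est**2 / (6 * D)
    ```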

  2. In Silico Modeling Approach for the Evaluation of Gastrointestinal Dissolution, Supersaturation, and Precipitation of Posaconazole.

    PubMed

    Hens, Bart; Pathak, Shriram M; Mitra, Amitava; Patel, Nikunjkumar; Liu, Bo; Patel, Sanjaykumar; Jamei, Masoud; Brouwers, Joachim; Augustijns, Patrick; Turner, David B

    2017-12-04

    The aim of this study was to evaluate gastrointestinal (GI) dissolution, supersaturation, and precipitation of posaconazole, formulated as an acidified (pH 1.6) and a neutral (pH 7.1) suspension. A physiologically based pharmacokinetic (PBPK) modeling and simulation tool was applied to simulate GI and systemic concentration-time profiles of posaconazole, which were directly compared with intraluminal and systemic data measured in humans. The Advanced Dissolution Absorption and Metabolism (ADAM) model of the Simcyp Simulator correctly simulated incomplete gastric dissolution and saturated concentrations of posaconazole in the duodenal fluids following administration of the neutral suspension. In contrast, gastric dissolution was approximately 2-fold higher after administration of the acidified suspension, which resulted in supersaturated concentrations of posaconazole upon transfer to the upper small intestine. The precipitation kinetics of posaconazole were described by two precipitation rate constants, extracted by semimechanistic modeling of a two-stage medium-change in vitro dissolution test. The 2-fold difference in exposure in the duodenal compartment for the two formulations corresponded with a 2-fold difference in systemic exposure. This study demonstrated, for the first time, predictive in silico simulations of GI dissolution, supersaturation, and precipitation for a weakly basic compound, in part informed by modeling of in vitro dissolution experiments and validated via clinical measurements in both GI fluids and plasma. Sensitivity analysis with the PBPK model indicated that the critical supersaturation ratio (CSR) and the second precipitation rate constant (sPRC) are important parameters of the model. Due to the limitations of the two-stage medium-change experiment, the CSR was extracted directly from the clinical data. However, in vitro experiments with the BioGIT transfer system performed after completion of the in silico modeling provided an almost identical CSR to the clinical study value; this had no significant impact on the PBPK model predictions.
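
    A semimechanistic two-rate precipitation term of the kind fitted here can be sketched as first-order return toward solubility, with a fast constant above the CSR and a slow one below it. This is a plausible reading of such models, not the published equations; all symbols are illustrative.

    ```python
    def precipitation_step(C, C_sat, CSR, k_fast, k_slow, dt):
        """One explicit-Euler step of luminal concentration C under
        supersaturation-dependent first-order precipitation (illustrative)."""
        if C > CSR * C_sat:        # above the critical supersaturation ratio
            k = k_fast
        elif C > C_sat:            # supersaturated but below the CSR
            k = k_slow
        else:                      # undersaturated: nothing precipitates
            return C
        return C - k * (C - C_sat) * dt
    ```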

  3. Time between the first and second operations for staged total knee arthroplasties when the interval is determined by the patient.

    PubMed

    Ishii, Yoshinori; Noguchi, Hideo; Takeda, Mitsuhiro; Sato, Junko; Toyabe, Shin-Ichi

    2014-01-01

    The purpose of this study was to evaluate the interval between the first and second operations for staged total knee arthroplasties (TKAs) in patients with bilateral knee osteoarthritis. Provided their preoperative health status was satisfactory, the patients themselves determined the timing of the second operation. We also analysed correlations between the interval and patient characteristics. Eighty-six patients with bilateral knee osteoarthritis were analysed. The mean follow-up time from the first TKA was 96 months. The side of the first TKA was chosen by the patients. The timing of the second TKA was determined by the patients, depending on their perceived ability to tolerate the additional pain and limitations to activities of daily living. The median interval between the first and second operations was 12.5 months, with a range of 2 to 113 months. In 43 (50%) patients, the interval was <12 months. There was no difference in the interval between females and males (p=0.861), and no correlation between the interval and body mass index or age. There was a weak correlation between the year of the first TKA and the interval (R=-0.251, p=0.020), with the interval getting significantly shorter as the years progressed (p=0.032). The median interval between the first and second operations in patients who underwent staged TKAs for bilateral knee osteoarthritis was about 1 year. The results of the current study may help patients and physicians to plan effective treatment strategies for staged TKAs. Level II. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Behavioral Assessment of Hearing in 2 to 4 Year-old Children: A Two-interval, Observer-based Procedure Using Conditioned Play-based Responses.

    PubMed

    Bonino, Angela Yarnell; Leibold, Lori J

    2017-01-23

    Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2- to 4-year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
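
    A 71%-correct target is the convergence point of a two-down, one-up adaptive rule (Levitt, 1971), so the level track can be sketched as below; the starting level and step size are placeholders, not the study's values.

    ```python
    def two_down_one_up(start_db, step_db, trial_correct):
        """Track signal level for a 2-down/1-up staircase (converges near 70.7%).

        trial_correct: iterable of booleans, one per trial.
        Returns the list of levels presented."""
        level, correct_run, levels = start_db, 0, []
        for correct in trial_correct:
            levels.append(level)
            if correct:
                correct_run += 1
                if correct_run == 2:      # two correct in a row -> make it harder
                    level -= step_db
                    correct_run = 0
            else:                         # a miss -> make it easier
                level += step_db
                correct_run = 0
        return levels
    ```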

  5. The Use of Simulation Models in Teaching Geomorphology and Hydrology.

    ERIC Educational Resources Information Center

    Kirkby, Mike; Naden, Pam

    1988-01-01

    Learning about the physical environment from computer simulation models is discussed in terms of three stages: exploration, experimentation, and calibration. Discusses the effective use of models and presents two computer simulations written in BBC BASIC, STORFLO (for catchment hydrology) and SLOPEK (for hillslope evolution). (Author/GEA)

  6. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    PubMed

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs; therefore, a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. The aim was to develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported into a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval for the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80; for intra-observer agreement, kappa ranged from 0.80 to 0.89. However, two-stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years, respectively. Performance in discerning minors from adults was better for males than for females, with specificities of 96% and 73%, respectively. Age estimations based on the proposed staging method for third molars on MRI showed reproducibility and performance comparable to those of established methods based on radiographs.

  7. Development and application of the microbial fate and transport module for the Agricultural Policy/Environmental eXtender (APEX) model

    NASA Astrophysics Data System (ADS)

    Hong, E.; Park, Y.; Muirhead, R.; Jeong, J.; Pachepsky, Y. A.

    2017-12-01

    Pathogenic microorganisms in recreational and irrigation waters remain a subject of concern. Water quality models are used to estimate the microbial quality of water sources, to evaluate microbial contamination-related risks, to guide microbial water quality monitoring, and to evaluate the effect of agricultural management on microbial water quality. The Agricultural Policy/Environmental eXtender (APEX) is a watershed-scale water quality model that includes a highly detailed representation of agricultural management. APEX currently does not have microbial fate and transport simulation capabilities. The objective of this work was to develop the first APEX microbial fate and transport module that could use the APEX conceptual model of manure removal together with recently introduced conceptualizations of in-stream microbial fate and transport. The module utilizes the manure erosion rates found in APEX. Bacterial survival in the soil-manure mixing layer was simulated with a two-stage survival model. Individual survival patterns were simulated for each manure application date. Simulated in-stream microbial fate and transport processes included the reach-scale passive release of bacteria with resuspended bottom sediment during high-flow events, the transport of bacteria from bottom sediment due to hyporheic exchange during low-flow periods, deposition with settling sediment, and two-stage survival. Default parameter values were available from recently published databases. The APEX model with the newly developed microbial fate and transport module was applied to simulate seven years of monitoring data for the Toenepi watershed in New Zealand. Based on calibration and testing results, APEX with the microbe module reproduced the monitored pattern of E. coli concentrations at the watershed outlet well. APEX with the microbial fate and transport module will be utilized for predicting the microbial quality of water under various agricultural practices, evaluating monitoring protocols, and supporting the selection of management practices based on regulations that rely on fecal indicator bacteria concentrations.
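
    The abstract does not give the two-stage survival equations; a common biphasic form treats the population as a fast-decaying and a persistent subpopulation. A minimal sketch under that assumption (all parameter names and values hypothetical):

    ```python
    import numpy as np

    def two_stage_survival(c0, k_fast, k_slow, f_fast, t):
        """Biphasic die-off: a fast-decaying subpopulation (fraction f_fast,
        rate k_fast) plus a persistent one (rate k_slow); rates in 1/day."""
        return c0 * (f_fast * np.exp(-k_fast * t)
                     + (1.0 - f_fast) * np.exp(-k_slow * t))

    t = np.linspace(0.0, 30.0, 7)                      # days after application
    print(two_stage_survival(1e6, 0.8, 0.05, 0.9, t))  # e.g., E. coli per gram
    ```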

  8. GLOBAL HIGH-RESOLUTION N-BODY SIMULATION OF PLANET FORMATION. I. PLANETESIMAL-DRIVEN MIGRATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kominami, J. D.; Daisaka, H.; Makino, J.

    2016-03-01

    We investigated whether outward planetesimal-driven migration (PDM) takes place in simulations when the self-gravity of planetesimals is included. We performed N-body simulations of planetesimal disks with a large width (0.7-4 au) that ranges over the ice line. The simulations consisted of two stages. The first-stage simulations were carried out to follow the runaway growth phase, using planetesimals of initially equal mass. Runaway growth took place both at the inner edge of the disk and in the region just outside the ice line. This result was used for the initial setup of the second-stage simulations, in which the runaway bodies just outside the ice line were replaced by protoplanets of about the isolation mass. In the second-stage simulations, the outward migration of the protoplanet was followed by the stalling of the migration due to the increase of the random velocity of the planetesimals. Owing to this increase of random velocities, one of the PDM criteria derived in Minton and Levison was broken. In the current simulations, the effect of the gas disk is not considered. It is likely that the gas disk plays an important role in PDM, and we plan to study its effect in future papers.

  9. A method for extending stage-discharge relationships using a hydrodynamic model and quantifying the associated uncertainty

    NASA Astrophysics Data System (ADS)

    Shao, Quanxi; Dutta, Dushmanta; Karim, Fazlul; Petheram, Cuan

    2018-01-01

    Streamflow discharge is a fundamental dataset required to effectively manage water and land resources. However, developing robust stage-discharge relationships, called rating curves, from which streamflow discharge is derived, is time consuming and costly, particularly in remote areas and especially at high stage levels. As a result, stage-discharge relationships are often heavily extrapolated. Hydrodynamic (HD) models are physically based models used to simulate the flow of water along river channels and over adjacent floodplains. In this paper we demonstrate a method by which a HD model can be used to generate a 'synthetic' stage-discharge relationship at high stages. The method uses a both-side Box-Cox transformation to calibrate the synthetic rating curve such that the regression residuals are as close to the normal distribution as possible. By doing this both-side transformation, the statistical uncertainty in the synthetically derived stage-discharge relationship can be calculated. This enables decision-makers to determine whether the uncertainty in the synthetically generated rating curve at high stage levels is acceptable for their decision. The proposed method is demonstrated at two streamflow gauging stations in north Queensland, Australia.
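
    As a rough illustration of the both-side transformation idea, the sketch below fits a power-law rating curve Q = a(h - h0)^b with residuals formed on the Box-Cox scale of both the observed and the modeled discharge. The data, starting values, and the fixed lambda are hypothetical; in practice lambda is chosen so the transformed residuals are as close to normal as possible.

    ```python
    import numpy as np
    from scipy import optimize

    def boxcox(y, lam):
        """Box-Cox transform; lam = 0 falls back to the log transform."""
        return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

    def fit_rating(stage, q_obs, lam):
        """Fit Q = a*(h - h0)^b with residuals formed on the Box-Cox scale
        of BOTH the observed and the modeled discharge ('both-side')."""
        def sse(p):
            a, b, h0 = p
            if a <= 0 or np.any(stage - h0 <= 0):
                return 1e12                      # keep the search feasible
            r = boxcox(q_obs, lam) - boxcox(a * (stage - h0) ** b, lam)
            return float(r @ r)
        x0 = [5.0, 2.0, stage.min() - 0.5]
        return optimize.minimize(sse, x0, method="Nelder-Mead").x

    # hypothetical gaugings with multiplicative noise; lam = 0 (log) suits them
    rng = np.random.default_rng(0)
    h = np.sort(rng.uniform(1.0, 4.0, 40))
    q = 4.0 * (h - 0.5) ** 1.8 * rng.lognormal(0.0, 0.1, h.size)
    print(fit_rating(h, q, lam=0.0))             # ~ (4.0, 1.8, 0.5)
    ```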

  10. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing; however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords, using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, the character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but during the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated; however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.

  11. The Yudomian of Siberia, Vendian and Ediacaran systems of the International stratigraphic scale

    NASA Astrophysics Data System (ADS)

    Khomentovsky, V. V.

    2008-12-01

    In Russia, the terminal Neoproterozoic formally includes the Vendian of the western part of the East European platform and the concurrent Yudoma Group of Siberia. As shown in this work, the designated subdivisions correspond in the stratotypes only to the upper, Yudomian Series of the Vendian. In the Siberian platform, the Ust-Yudoma and Aim horizons of the Yudomian are tightly interrelated. The lower of them, bearing remains of Ediacaran fauna, represents the Ediacarian Stage, whereas the upper one, containing small-shelled fossils (SSF), corresponds to the Nemakit-Daldynian Stage, divided into the trisulcatus and antiqua superregional zones. In more complete sections of the platform periphery, sediments of these subdivisions conformably rest on a siliciclastic succession that should be ranked as the basal subdivision of the Yudomian. This succession is concurrent with the Laplandian Stage of the East European platform. According to geochronological dates obtained recently, the Yudomian Series spans the interval of 600-540 Ma. In the East European platform, the Upper Vendian (Yudomian) begins with the Laplandian basal tillites of the synonymous stage. In the west of the platform, the tillites are dated at 600 Ma, like the Upper Vendian base in Siberia. The next Ediacarian Stage of the East European platform is the stratigraphic equivalent of the Redkino Horizon, while the summary range of the Kotlin and Rovno horizons is concurrent with that of the Nemakit-Daldynian Stage. The Vendian of Russia is conformably overlain by the Tommotian Stage of the Lower Cambrian. Intense pre-Vendian events constrained the distribution areas of the Lower Vendian sediments in Russia. The Lower Vendian deposits of the East European platform are most representative and well studied in the central Urals, where they are attributed to the Serebryanka Group. In Siberia, separate subdivisions representing the Lower Vendian are the Maastakh Formation of the Olenek Uplift, the two lower members of the Ushakovka Formation in the Baikal region, and the Taseeva Group of the Yenisei Range. The chronological interval of the Lower Vendian corresponds to 650-600 Ma. The Marinoan Glaciation, dated in Australia at 650-635 Ma, is concurrent with the basal part of the pre-Yudomian interval of the Vendian in Russia, whereas the Laplandian Tillite and Gaskiers Glaciation (600-580 Ma) correspond to the onset of the Yudomian Epoch. The new Ediacaran System (Knoll et al., 2004), formalized in the International Neoproterozoic scale, is close in range to the entire Vendian (635-544 Ma), although without its basal beds (Marinoan Tillite) it deprives the terminal Neoproterozoic of its original sense. A further shortcoming of the system is its lack of subdivision into stages. Hence, the Vendian System, subdivided in detail in Russia, should be retained in the rank of terminal system of the Precambrian, one of the basic units in the general scale of the Neoproterozoic.

  12. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study: Design, methods and recruitment results of a randomized controlled feasibility trial.

    PubMed

    Banck-Petersen, Anna; Olsen, Cecilie K; Djurhuus, Sissal S; Herrstedt, Anita; Thorsen-Streit, Sarah; Ried-Larsen, Mathias; Østerlind, Kell; Osterkamp, Jens; Krarup, Peter-Martin; Vistisen, Kirsten; Mosgaard, Camilla S; Pedersen, Bente K; Højman, Pernille; Christensen, Jesper F

    2018-03-01

    Low physical activity level is associated with poor prognosis in patients with colorectal cancer (CRC). To increase physical activity, technology-based platforms are emerging and provide intriguing opportunities to prescribe and monitor active lifestyle interventions. The "Interval Walking in Colorectal Cancer" (I-WALK-CRC) study explores the feasibility and efficacy of a home-based interval-walking intervention delivered by a smart-phone application in order to improve the cardio-metabolic health profile among CRC survivors. The aim of the present report is to describe the design, methods and recruitment results of the I-WALK-CRC study. Methods/Results: The I-WALK-CRC study is a randomized controlled trial designed to evaluate the feasibility and efficacy of a home-based interval walking intervention compared to a waiting-list control group for physiological and patient-reported outcomes. Patients who had completed surgery for local stage disease and patients who had completed surgery and any adjuvant chemotherapy for locally advanced stage disease were eligible for inclusion. Between October 1, 2015, and February 1, 2017, 136 inquiries were recorded; 83 patients were eligible for enrollment, and 42 patients accepted participation. Age and employment status were associated with participation, as participants were significantly younger (60.5 vs 70.8 years, P < 0.001) and more likely to be working (OR 5.04; 95% CI 1.96-12.98, P < 0.001) than non-participants. In the present study, recruitment of CRC survivors was feasible, but we aim to improve the recruitment rate in future studies. Further, the study clearly favored younger participants. The I-WALK-CRC study will provide important information regarding the feasibility and efficacy of a home-based walking exercise program in CRC survivors.
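
    The reported odds ratio and confidence interval are of the standard Wald type for a 2x2 table; a minimal sketch of that computation is below. The counts used are hypothetical, since the paper's raw participation table is not reproduced in the abstract.

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Wald 95% CI for the odds ratio of a 2x2 table
        (a, b = working / not working among participants;
         c, d = the same among non-participants -- hypothetical layout)."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
        return (or_,
                math.exp(math.log(or_) - z * se),
                math.exp(math.log(or_) + z * se))

    print(odds_ratio_ci(25, 17, 21, 72))   # made-up counts, OR ~ 5
    ```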

  13. Flood-inundation maps for the Peckman River in the Townships of Verona, Cedar Grove, and Little Falls, and the Borough of Woodland Park, New Jersey, 2014

    USGS Publications Warehouse

    Niemoczynski, Michal J.; Watson, Kara M.

    2016-10-19

    Digital flood-inundation maps for an approximate 7.5-mile reach of the Peckman River in New Jersey, which extends from Verona Lake Dam in the Township of Verona downstream through the Township of Cedar Grove and the Township of Little Falls to the confluence with the Passaic River in the Borough of Woodland Park, were created by the U.S. Geological Survey (USGS) in cooperation with the New Jersey Department of Environmental Protection. The flood-inundation maps, which can be accessed through the USGS Flood Inundation Mapping Science Web site at http://water.usgs.gov/osw/flood_inundation/, depict estimates of the probable areal extent and depth of flooding corresponding to selected water levels (stages) at the USGS streamgage on the Peckman River at Ozone Avenue at Verona, New Jersey (station number 01389534). Near-real-time stages at this streamgage may be obtained on the Internet from the USGS National Water Information System at http://waterdata.usgs.gov/. Flood profiles were simulated for the stream reach by means of a one-dimensional step-backwater model. The model was calibrated using the most current stage-discharge relations at USGS streamgages on the Peckman River at Ozone Avenue at Verona, New Jersey (station number 01389534) and the Peckman River at Little Falls, New Jersey (station number 01389550). The hydraulic model was then used to compute eight water-surface profiles for flood stages at 0.5-foot (ft) intervals ranging from 3.0 ft, or near bankfull, to 6.5 ft, which is approximately the highest recorded water level during the period of record (1979-2014) at USGS streamgage 01389534, Peckman River at Ozone Avenue at Verona, New Jersey. The simulated water-surface profiles were then combined with a geographic information system digital elevation model derived from light detection and ranging (lidar) data to delineate the area flooded at each water level. The availability of these maps along with Internet information regarding current stage from the USGS streamgage provides emergency management personnel and residents with information, such as estimates of inundation extents, based on water stage, that is critical for flood response activities such as evacuations and road closures, as well as for post-flood recovery efforts.
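
    The final mapping step reduces to a grid subtraction: water-surface elevation minus ground elevation, masked where the result is non-positive. A minimal sketch (the array shapes and the prior interpolation of the 1-D profile onto the DEM grid are assumptions):

    ```python
    import numpy as np

    def inundation_depth(wse, dem):
        """Depth grid = water-surface elevation minus ground elevation (ft);
        cells where the ground is above the water surface are marked dry."""
        depth = wse - dem
        return np.where(depth > 0.0, depth, np.nan)      # NaN = dry cell

    stages = np.arange(3.0, 7.0, 0.5)         # the eight mapped stages, in ft
    dem = np.array([[2.1, 3.4], [4.8, 6.9]])  # toy lidar DEM patch (ft)
    wse = np.full(dem.shape, 5.0)             # profile interpolated to grid
    print(inundation_depth(wse, dem))
    ```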

  14. The effectiveness and safety of platinum-based pemetrexed and platinum-based gemcitabine treatment in patients with malignant pleural mesothelioma.

    PubMed

    Ak, Guntulu; Metintas, Selma; Akarsu, Muhittin; Metintas, Muzaffer

    2015-07-09

    We aimed to evaluate the efficacy and safety of cis/carboplatin plus gemcitabine, which was previously used for mesothelioma but with no recorded proof of its efficacy, compared with cis/carboplatin plus pemetrexed, which is known to be effective in mesothelioma, in comparable historical groups of malignant pleural mesothelioma. One hundred and sixteen patients received cis/carboplatin plus pemetrexed (group 1), while 30 patients received cis/carboplatin plus gemcitabine (group 2) between June 1999 and June 2012. The two groups were compared in terms of median survival and adverse events to chemotherapy. The mean ages of groups 1 and 2 were 60.7 and 60.8 years, respectively. Most of the patients (78.1%) had epithelial type tumors, and 47% of the patients had stage IV disease. There was no difference between the two groups in terms of age, gender, asbestos exposure, histology, stage, Karnofsky performance status, presence of pleurodesis, prophylactic radiotherapy, second-line chemotherapy and median hemoglobin and serum albumin levels. The median survival time from diagnosis to death or the last day of follow-up was 12 ± 0.95 months (95% CI: 10.15-13.85) for group 1 and 11.0 ± 1.09 months (95% CI: 8.85-13.15) for group 2 (Log-Rank: 0.142; p = 0.706). The median survival time from treatment to death or the last day of follow-up was 11.0 ± 0.99 months (95% CI: 9.06-12.94) for group 1 and 11.0 ± 1.52 months (95% CI: 8.02-13.97) for group 2 (Log-Rank: 0.584; p = 0.445). The stage and Karnofsky performance status were found to be significant variables on median survival time by univariate analysis. After adjusting for the stage and Karnofsky performance status, the chemotherapy regimen had no significant effect on median survival time (OR: 0.837; 95% CI: 0.548-1.277; p = 0.409). Progression-free survival was 7.0 ± 0.61 months for group 1 and 6.0 ± 1.56 months for group 2 (Log-Rank: 0.522; p = 0.470). The treatment was generally well tolerated, and the side effects were similar in both groups. The study indicates that platinum-based gemcitabine is an effective and safe regimen in malignant pleural mesothelioma. Further research should include large randomized phase III trials comparing these agents.

  15. Event- and interval-based measurement of stuttering: a review.

    PubMed

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

    Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors on interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review related to the reproducibility of event-based and time-interval measurement, and to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on agreement of time-interval measurement; in addition, to determine whether it is possible to quantify the agreement between the two methodologies. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge agreement values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values were beyond the references for high reproducibility values for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. An interval duration of 5 s appears to be acceptable. Explanations for the high reproducibility values, as well as the choice of parameters to report, are discussed. Both interval- and event-based methodologies used trained or experienced judges for inter- and intra-judge determination, and the data were beyond the references for good reproducibility values. Inter- and intra-judge values were reported on different metric scales among event- and interval-based studies, making it unfeasible to quantify the agreement between the two methods. © 2014 Royal College of Speech and Language Therapists.

  16. Linkage disequilibrium interval mapping of quantitative trait loci.

    PubMed

    Boitard, Simon; Abdallah, Jihad; de Rochambeau, Hubert; Cierco-Ayrolles, Christine; Mangin, Brigitte

    2006-03-16

    For many years gene mapping studies have been performed through linkage analyses based on pedigree data. Recently, linkage disequilibrium methods based on unrelated individuals have been advocated as powerful tools to refine estimates of gene location. Many strategies have been proposed to deal with simply inherited disease traits. However, locating quantitative trait loci is statistically more challenging and considerable research is needed to provide robust and computationally efficient methods. Under a three-locus Wright-Fisher model, we derived approximate expressions for the expected haplotype frequencies in a population. We considered haplotypes comprising one trait locus and two flanking markers. Using these theoretical expressions, we built a likelihood-maximization method, called HAPim, for estimating the location of a quantitative trait locus. For each postulated position, the method only requires information from the two flanking markers. Over a wide range of simulation scenarios it was found to be more accurate than a two-marker composite likelihood method. It also performed as well as identity by descent methods, whilst being valuable in a wider range of populations. Our method makes efficient use of marker information, and can be valuable for fine mapping purposes. Its performance is increased if multiallelic markers are available. Several improvements can be developed to account for more complex evolution scenarios or provide robust confidence intervals for the location estimates.

  17. Verification of a Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation

    NASA Technical Reports Server (NTRS)

    Tartabini, Paul V.; Roithmayr, Carlos; Toniolo, Matthew D.; Karlgaard, Christopher; Pamadi, Bandu N.

    2008-01-01

    This paper discusses the verification of the Constraint Force Equation (CFE) methodology and its implementation in the Program to Optimize Simulated Trajectories II (POST2) for multibody separation problems using three specially designed test cases. The first test case involves two rigid bodies connected by a fixed joint; the second case involves two rigid bodies connected with a universal joint; and the third test case is that of Mach 7 separation of the Hyper-X vehicle. For the first two cases, the POST2/CFE solutions compared well with those obtained using industry standard benchmark codes, namely AUTOLEV and ADAMS. For the Hyper-X case, the POST2/CFE solutions were in reasonable agreement with the flight test data. The CFE implementation in POST2 facilitates the analysis and simulation of stage separation as an integral part of POST2 for seamless end-to-end simulations of launch vehicle trajectories.

  18. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.
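
    The article's improved interval procedure is not reproduced in the abstract; as a generic baseline, the sketch below computes a standardized mean difference with the average-variance standardizer and a percentile-bootstrap confidence interval. Both choices are assumptions, not the paper's method.

    ```python
    import numpy as np

    def smd(x, y):
        """Standardized mean difference with the average-variance
        standardizer (one common choice under heteroscedasticity)."""
        return (x.mean() - y.mean()) / np.sqrt(
            (x.var(ddof=1) + y.var(ddof=1)) / 2.0)

    def smd_bootstrap_ci(x, y, n_boot=5000, alpha=0.05, seed=0):
        """Percentile-bootstrap CI for the SMD."""
        rng = np.random.default_rng(seed)
        boots = [smd(rng.choice(x, x.size), rng.choice(y, y.size))
                 for _ in range(n_boot)]
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return smd(x, y), lo, hi

    rng = np.random.default_rng(1)
    x = rng.normal(1.0, 1.0, 30)     # hypothetical groups, unequal variances
    y = rng.normal(0.0, 2.0, 45)
    print(smd_bootstrap_ci(x, y))
    ```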

  19. Design of a short nonuniform acquisition protocol for quantitative analysis in dynamic cardiac SPECT imaging - a retrospective 123 I-MIBG animal study.

    PubMed

    Zan, Yunlong; Long, Yong; Chen, Kewei; Li, Biao; Huang, Qiu; Gullberg, Grant T

    2017-07-01

    Our previous work found that quantitative analysis of 123 I-MIBG kinetics in the rat heart with dynamic single-photon emission computed tomography (SPECT) offers the potential to quantify innervation integrity at an early stage of left ventricular hypertrophy. However, conventional protocols involving a long acquisition time for dynamic imaging reduce the animal survival rate and thus make longitudinal analysis difficult. The goal of this work was to develop a procedure to reduce the total acquisition time by selecting nonuniform acquisition times for projection views while maintaining the accuracy and precision of estimated physiologic parameters. Taking dynamic cardiac imaging with 123 I-MIBG in rats as an example, we generated time activity curves (TACs) of regions of interest (ROIs) as ground truths based on a direct four-dimensional reconstruction of experimental data acquired from a rotating SPECT camera, where TACs represented as the coefficients of B-spline basis functions were used to estimate compartmental model parameters. By iteratively adjusting the knots (i.e., control points) of the B-spline basis functions, new TACs were created according to two rules: accuracy and precision. The accuracy criterion allocates the knots to achieve low relative entropy between the estimated left ventricular blood pool TAC and its ground truth, so that the estimated input function approximates its real value and the procedure thus yields an accurate estimate of model parameters. The precision criterion, via the D-optimal method, forces the estimated parameters to be as precise as possible, with minimum variances. Based on the final knots obtained, a new 30-min protocol was built with a shorter acquisition time that maintained a 5% error in estimating the rate constants of the compartment model. This was evaluated through digital simulations. The simulation results showed that our method was able to reduce the acquisition time from 100 to 30 min for the cardiac study of rats with 123 I-MIBG. Compared to a uniform-interval dynamic SPECT protocol (1 s acquisition interval, 30 min acquisition time), the newly proposed protocol with nonuniform intervals achieved comparable (K1 and k2, P = 0.5745 for K1 and P = 0.0604 for k2) or better (distribution volume, DV, P = 0.0004) performance for parameter estimates with less storage and shorter computational time. In this study, a procedure was devised to shorten the acquisition time while maintaining the accuracy and precision of estimated physiologic parameters in dynamic SPECT imaging. The procedure was designed for 123 I-MIBG cardiac imaging in rat studies; however, it has the potential to be extended to other applications, including patient studies involving the acquisition of dynamic SPECT data. © 2017 American Association of Physicists in Medicine.
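
    The compartment model used for 123 I-MIBG is not fully specified in the abstract; as a stand-in, the sketch below simulates a one-tissue compartment time activity curve by convolving a hypothetical plasma input with K1*exp(-k2*t), which is the kind of TAC such a protocol is designed to sample efficiently. All parameter values are illustrative.

    ```python
    import numpy as np

    def one_tissue_tac(t, c_plasma, K1, k2):
        """C_t(t) = [K1 * exp(-k2*t)] convolved with the plasma input C_p(t),
        evaluated on a uniform time grid by discrete convolution."""
        dt = t[1] - t[0]
        kernel = K1 * np.exp(-k2 * t)
        return np.convolve(c_plasma, kernel)[: t.size] * dt

    t = np.linspace(0.0, 30.0, 1801)            # minutes, 1 s sampling
    c_p = (t / 0.5) * np.exp(1.0 - t / 0.5)     # hypothetical bolus input
    c_t = one_tissue_tac(t, c_p, K1=0.6, k2=0.15)
    print(c_t.max())                             # peak tissue activity
    ```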

  20. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
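
    A concrete if simplified reading: take the ordinal effect measure to be the win probability P(new > std) + 0.5 P(new = std), set a noninferiority margin below 0.5, and form a Z statistic. The sketch below uses a bootstrap standard error in place of the paper's U-statistic variance under the shifted null; the measure, margin, and data are all illustrative assumptions.

    ```python
    import numpy as np

    def win_probability(new, std):
        """P(new > std) + 0.5 * P(new == std) over all pairs of scores."""
        d = np.subtract.outer(np.asarray(new), np.asarray(std))
        return (d > 0).mean() + 0.5 * (d == 0).mean()

    def noninferiority_z(new, std, margin=0.45, n_boot=2000, seed=0):
        """Z = (theta_hat - margin) / SE; large Z supports noninferiority."""
        new, std = np.asarray(new), np.asarray(std)
        rng = np.random.default_rng(seed)
        theta = win_probability(new, std)
        boots = [win_probability(rng.choice(new, new.size),
                                 rng.choice(std, std.size))
                 for _ in range(n_boot)]
        return theta, (theta - margin) / np.std(boots, ddof=1)

    rng = np.random.default_rng(3)
    new = rng.integers(1, 6, 120)    # hypothetical 5-point ordinal outcomes
    std = rng.integers(1, 6, 120)
    print(noninferiority_z(new, std))
    ```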

  1. Event simulation based on three-fluid hydrodynamics for collisions at energies available at the Dubna Nuclotron-based Ion Collider Facility and at the Facility for Antiproton and Ion Research in Darmstadt

    NASA Astrophysics Data System (ADS)

    Batyuk, P.; Blaschke, D.; Bleicher, M.; Ivanov, Yu. B.; Karpenko, Iu.; Merts, S.; Nahrgang, M.; Petersen, H.; Rogachevsky, O.

    2016-10-01

    We present an event generator based on the three-fluid hydrodynamics approach for the early stage of the collision, followed by a particlization at the hydrodynamic decoupling surface to join to a microscopic transport model, ultrarelativistic quantum molecular dynamics, to account for hadronic final-state interactions. We present first results for nuclear collisions of the Facility for Antiproton and Ion Research-Nuclotron-based Ion Collider Facility energy scan program (Au+Au collisions, √s_NN = 4-11 GeV). We address the directed flow of protons and pions as well as the proton rapidity distribution for two model equations of state, one with a first-order phase transition and the other with a crossover-type softening at high densities. The new simulation program has the unique feature that it can describe a hadron-to-quark matter transition which proceeds in the baryon stopping regime that is not accessible to previous simulation programs designed for higher energies.

  2. Optimizing some 3-stage W-methods for the time integration of PDEs

    NASA Astrophysics Data System (ADS)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ f_y(y_n)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one having the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], but at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.

  3. Simulation of Tropical Pacific and Atlantic Oceans Using a HYbrid Coordinate Ocean Model

    DTIC Science & Technology

    2005-01-01

    with respect to cotemporal 1 m temperature measured by buoys. The climatology was created by averaging into monthly means, then calculating… The inconsistency could result in part from the different temporal averaging intervals of the two temperature climatologies; the two observational temperature datasets (drifter and Pathfinder) have different temporal averaging intervals. This question is further assessed in…

  4. Stochastic models to demonstrate the effect of motivated testing on HIV incidence estimates using the serological testing algorithm for recent HIV seroconversion (STARHS).

    PubMed

    White, Edward W; Lumley, Thomas; Goodreau, Steven M; Goldbaum, Gary; Hawes, Stephen E

    2010-12-01

    To produce valid seroincidence estimates, the serological testing algorithm for recent HIV seroconversion (STARHS) assumes independence between infection and testing, which may be absent in clinical data. STARHS estimates are generally greater than cohort-based estimates of incidence from observable person-time and diagnosis dates. The authors constructed a series of partial stochastic models to examine whether testing motivated by suspicion of infection could bias STARHS. One thousand Monte Carlo simulations of 10,000 men who have sex with men were generated using parameters for HIV incidence and testing frequency from data from a clinical testing population in Seattle. In one set of simulations, infection and testing dates were independent. In another set, some intertest intervals were abbreviated to reflect the distribution of intervals between suspected HIV exposure and testing in a group of Seattle men who have sex with men recently diagnosed as having HIV. Both estimation methods were applied to the simulated datasets. Both cohort-based and STARHS incidence estimates were calculated using the simulated data and compared with previously calculated, empirical cohort-based and STARHS seroincidence estimates from the clinical testing population. Under simulated independence between infection and testing, cohort-based and STARHS incidence estimates resembled cohort estimates from the clinical dataset. Under simulated motivated testing, cohort-based estimates remained unchanged, but STARHS estimates were inflated similar to empirical STARHS estimates. Varying motivation parameters appreciably affected STARHS incidence estimates, but not cohort-based estimates. Cohort-based incidence estimates are robust against dependence between testing and acquisition of infection, whereas STARHS incidence estimates are not.
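
    The bias mechanism is easy to reproduce in a toy Monte Carlo: if some infected people test soon after a suspected exposure, infection durations at testing pile up inside the 'recent' window and the scaled-up estimate inflates. All parameters below are illustrative, not the paper's calibrated Seattle values.

    ```python
    import numpy as np

    def starhs_estimate(durations, n_tested, window=170.0):
        """Annualized incidence: count infections classified 'recent'
        (duration < window days) and scale by 365 / window."""
        return np.sum(durations < window) / n_tested * (365.0 / window)

    rng = np.random.default_rng(1)
    n, annual_inc = 100_000, 0.01
    n_inf = rng.binomial(n, annual_inc)

    # independence: infection fell at a uniform time in the year before testing
    d_indep = rng.uniform(0.0, 365.0, n_inf)
    # motivated testing: half of the infected test ~30 days after exposure
    motivated = rng.random(n_inf) < 0.5
    d_motiv = np.where(motivated, rng.exponential(30.0, n_inf), d_indep)

    print(starhs_estimate(d_indep, n))   # ~ 0.010 (unbiased)
    print(starhs_estimate(d_motiv, n))   # inflated, ~ 0.015
    ```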

  5. Efficient Fourier-based algorithms for time-periodic unsteady problems

    NASA Astrophysics Data System (ADS)

    Gopinath, Arathi Kamath

    2007-12-01

    This dissertation work proposes two algorithms for the simulation of time-periodic unsteady problems via the solution of Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations. These algorithms use a Fourier representation in time and hence solve for the periodic state directly without resolving transients (which consume most of the resources in a time-accurate scheme). In contrast to conventional Fourier-based techniques which solve the governing equations in frequency space, the new algorithms perform all the calculations in the time domain, and hence require minimal modifications to an existing solver. The complete space-time solution is obtained by iterating in a fifth pseudo-time dimension. Various time-periodic problems such as helicopter rotors, wind turbines, turbomachinery and flapping-wings can be simulated using the Time Spectral method. The algorithm is first validated using pitching airfoil/wing test cases. The method is further extended to turbomachinery problems, and computational results verified by comparison with a time-accurate calculation. The technique can be very memory intensive for large problems, since the solution is computed (and hence stored) simultaneously at all time levels. Often, the blade counts of a turbomachine are rescaled such that a periodic fraction of the annulus can be solved. This approximation enables the solution to be obtained at a fraction of the cost of a full-scale time-accurate solution. For a viscous computation over a three-dimensional single-stage rescaled compressor, an order of magnitude savings is achieved. The second algorithm, the reduced-order Harmonic Balance method is applicable only to turbomachinery flows, and offers even larger computational savings than the Time Spectral method. It simulates the true geometry of the turbomachine using only one blade passage per blade row as the computational domain. In each blade row of the turbomachine, only the dominant frequencies are resolved, namely, combinations of neighbor's blade passing. An appropriate set of frequencies can be chosen by the analyst/designer based on a trade-off between accuracy and computational resources available. A cost comparison with a time-accurate computation for an Euler calculation on a two-dimensional multi-stage compressor obtained an order of magnitude savings, and a RANS calculation on a three-dimensional single-stage compressor achieved two orders of magnitude savings, with comparable accuracy.
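
    The core of a time-spectral scheme is a dense matrix that differentiates a periodic signal exactly from its samples, replacing the physical time derivative in the solver. A minimal sketch for an odd number of time instances follows; the csc-form entries are the standard periodic spectral-differentiation formulas, not code from this dissertation.

    ```python
    import numpy as np

    def time_spectral_D(n, period):
        """Dense spectral derivative matrix for n (odd) equally spaced
        samples of a period-`period` signal: (du/dt)_j ~ sum_k D[j,k] u_k."""
        assert n % 2 == 1, "the csc-form entries assume an odd sample count"
        D = np.zeros((n, n))
        for j in range(n):
            for k in range(n):
                if j != k:
                    D[j, k] = (0.5 * (-1.0) ** (j - k)
                               / np.sin((j - k) * np.pi / n))
        return D * (2.0 * np.pi / period)

    # sanity check: differentiate cos(t) sampled at 9 points over one period
    n, T = 9, 2.0 * np.pi
    t = np.arange(n) * T / n
    D = time_spectral_D(n, T)
    print(np.max(np.abs(D @ np.cos(t) + np.sin(t))))   # ~ machine precision
    ```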

  6. Interaction Between Domperidone and Ketoconazole: Toward Prediction of Consequent QTc Prolongation Using Purely In Vitro Information

    PubMed Central

    Mishra, H; Polak, S; Jamei, M; Rostami-Hodjegan, A

    2014-01-01

    We aimed to investigate the application of combined mechanistic pharmacokinetic (PK) and pharmacodynamic (PD) modeling and simulation in predicting the domperidone (DOM)-triggered pseudo-electrocardiogram modification in the presence of a CYP3A inhibitor, ketoconazole (KETO), using in vitro–in vivo extrapolation. In vitro metabolic and inhibitory data were incorporated into physiologically based pharmacokinetic (PBPK) models within Simcyp to simulate the time course of plasma DOM and KETO concentrations when administered alone or in combination (DOM+KETO). Simulated DOM concentrations in plasma were used to predict changes in gender-specific QTcF (Fridericia correction) intervals within the Cardiac Safety Simulator platform, taking into consideration DOM-, KETO-, and DOM+KETO-triggered inhibition of multiple ionic currents in the population. The combination of in vitro–in vivo extrapolation, PBPK, and systems pharmacology of electric currents in the heart was able to predict the direction and magnitude of PK and PD changes under coadministration of the two drugs, although some disparities were detected. PMID:25116274

  7. Full cycle trigonometric function on Intel Quartus II Verilog

    NASA Astrophysics Data System (ADS)

    Mustapha, Muhazam; Zulkarnain, Nur Antasha

    2018-02-01

    This paper discusses an improvement of previous research on hardware-based trigonometric calculations. The tangent function is also implemented to complete the set. The functions were simulated using Quartus II, and the results are compared with the previous work. The number of bits has also been extended for each trigonometric function. The design is based on RTL due to its resource-efficient nature. At an earlier stage, a technology-independent test bench simulation was conducted on ModelSim due to its convenience in capturing simulation data, so that accuracy information could be obtained. In the second stage, Intel/Altera Quartus II was used to simulate on a technology-dependent platform, particularly the one belonging to Intel/Altera itself. Real data on the number of logic elements used and the propagation delay were also obtained.
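
    The paper's RTL algorithm is not described in the abstract; a common hardware-friendly way to compute full-cycle sin/cos (and tan as their ratio) is CORDIC with quadrant folding, sketched below in Python as an algorithmic reference rather than as the authors' Verilog.

    ```python
    import math

    def cordic_sin_cos(angle, n_iter=24):
        """Rotation-mode CORDIC for sin/cos on [-pi/2, pi/2]; full-circle
        coverage folds the other quadrants onto this interval first."""
        # precomputed micro-rotation angles atan(2^-i) and gain compensation
        angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
        k = 1.0
        for i in range(n_iter):
            k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
        x, y, z = k, 0.0, angle          # start at (K, 0), residual angle z
        for i in range(n_iter):
            d = 1.0 if z >= 0 else -1.0
            x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
            z -= d * angles[i]
        return y, x                       # (sin, cos)

    # quadrant folding example: 2.5 rad lies in quadrant II
    a = 2.5
    s, c = cordic_sin_cos(math.pi - a)    # sin(a) = s, cos(a) = -c
    print(s, -c, math.sin(a), math.cos(a))
    ```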

  8. Computer simulations in the high school: students' cognitive stages, science process skills and academic achievement in microbiology

    NASA Astrophysics Data System (ADS)

    Huppert, J.; Michal Lomask, S.; Lazarowitz, R.

    2002-08-01

    Computer-assisted learning, including simulated experiments, has great potential to address the problem solving process which is a complex activity. It requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth grade biology students to use problem solving skills whilst simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher students' achievement was, except in the control group where students in the concrete and transition operational stages did not differ. Girls achieved equally with the boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science which require high cognitive skills.

  9. A DACE study on a three stage metal forming process made of Sandvik Nanoflex™

    NASA Astrophysics Data System (ADS)

    Post, J.; Klaseboer, G.; Stinstra, E.; Huétink, J.

    2004-06-01

    Sandvik Nanoflex™ combines good corrosion resistance with high strength. The steel has good deformability in austenitic conditions. This material belongs to the group of metastable austenites, so during deformation a strain-induced transformation into martensite takes place. After deformation, the transformation continues as a result of internal residual stresses. Depending on the heat treatment, this stress-assisted transformation is more or less autocatalytic. Both transformations are stress-state, temperature and crystal orientation dependent. This article presents a constitutive model for this steel, based on the macroscopic material behaviour measured by inductive measurements. Both the stress-assisted and the strain-induced transformation to martensite are incorporated in this model. Path-dependent work hardening is also taken into account, together with the inheritance of the dislocations from one phase to the other. The model is implemented in an internal Philips code called CRYSTAL for simulations. A multi-stage metal forming process is simulated. The process consists of different forming steps with intervals between them to simulate the waiting time between the different metal forming steps. During the engineering of a high-precision metal-formed product, questions often arise about the relation between the scatter on the initial parameters (e.g., the standard deviation of the strip thickness, yield stress, etc.) and the product accuracy. This becomes even more complex if: • the material is unstable, • the transformation rate depends on the stress state, which is related to friction, • the transformation rate depends on the temperature, which is related to deformation heat and the heat distribution during the entire process. A way to gain more understanding of these phenomena in relation to the process is to perform a process window study using DACE (Design and Analysis of Computer Experiments). This article gives an example of how to perform a DACE study on a three-stage metal forming process using a distributed computing technique. The method is shown, together with some results. The study focuses on the influence of the transformation rate, transformation plasticity and dilatation strain on the product accuracy.

  10. Simulation of pseudo-CT images based on deformable image registration of ultrasound images: A proof of concept for transabdominal ultrasound imaging of the prostate during radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meer, Skadi van der; Camps, Saskia M.

    Purpose: Imaging of patient anatomy during treatment is a necessity for position verification and for adaptive radiotherapy based on daily dose recalculation. Ultrasound (US) image guided radiotherapy systems are currently available to collect US images at the simulation stage (US_sim), coregistered with the simulation computed tomography (CT), and during all treatment fractions. The authors hypothesize that a deformation field derived from US-based deformable image registration can be used to create a daily pseudo-CT (CT_ps) image that is more representative of the patients' geometry during treatment than the CT acquired at simulation stage (CT_sim). Methods: The three prostate patients, considered to evaluate this hypothesis, had coregistered CT and US scans on various days. In particular, two patients had two US-CT datasets each and the third one had five US-CT datasets. Deformation fields were computed between pairs of US images of the same patient and then applied to the corresponding US_sim scan to yield a new deformed CT_ps scan. The original treatment plans were used to recalculate dose distributions in the simulation, deformed and ground truth CT (CT_gt) images to compare dice similarity coefficients, maximum absolute distance, and mean absolute distance on CT delineations and gamma index (γ) evaluations on both the Hounsfield units (HUs) and the dose. Results: In the majority, deformation did improve the results for all three evaluation methods. The change in gamma failure for dose (γ_Dose, 3%, 3 mm) ranged from an improvement of 11.2% in the prostate volume to a deterioration of 1.3% in the prostate and bladder. The change in gamma failure for the CT images (γ_CT, 50 HU, 3 mm) ranged from an improvement of 20.5% in the anus and rectum to a deterioration of 3.2% in the prostate. Conclusions: This new technique may generate CT_ps images that are more representative of the actual patient anatomy than the CT_sim scan.

  11. High-speed multishot pellet injector prototype for the Frascati Tokamak Upgrade

    NASA Astrophysics Data System (ADS)

    Frattolillo, A.; Migliori, S.; Scaramuzzi, F.; Angelone, G.; Baldarelli, M.; Capobianchi, M.; Cardoni, P.; Domma, C.; Mori, L.; Ronci, G.

    1998-07-01

    The Frascati Tokamak Upgrade (FTU) may require multiple high-speed pellet injections in order to achieve quasi-steady-state conditions. A research and development program was therefore pursued at ENEA Frascati, aimed at developing a multishot two-stage pellet injector (MPI), featuring eight "pipe gun" barrels and eight small two-stage pneumatic guns. According to FTU requirements, the final goal is to simultaneously produce up to eight D2 pellets, and then deliver them during a plasma pulse (1 s) with any time schedule, at speeds in the 1-2.5 km/s range. A prototype was constructed and tested to demonstrate the feasibility of the concept and to optimize pellet formation and firing sequences. This laboratory facility was automatically operated by means of a programmable logic controller (PLC), and had a full eight-shot capability. However, as a first approach, it was equipped with only four two-stage guns. In this article we describe in detail the guidelines of the MPI prototype design, which were strongly influenced by some external constraints. We also report on the results of the experimental campaign, during which the feasibility of such a two-stage MPI was demonstrated. Sequences of four intact D2 pellets in the 1.2-1.6 mm size range, fired at time intervals from a few tens up to a few hundreds of ms, were routinely delivered in a laboratory experiment at injection speeds above 2.5 km/s, with good reproducibility and satisfactory aiming dispersion. A preliminary effort to address the problem of propellant gas handling, based on an innovative approach, gave encouraging results, and work is in progress to carry out an experiment to definitively test the feasibility of this concept.

  12. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  13. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
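
    A minimal two-stage sketch of the idea shared by the FDA/GLLA/GOLD pipelines described above (and in the preceding record): smooth each trajectory, differentiate the smooth fit, then regress the estimated second derivative on the state to recover ODE parameters. The damped linear oscillator below is a stand-in for the paper's nonlinear coupled-oscillators model, and the spline smoother is only one possible stage-1 choice.

    ```python
    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # ---- Stage 1: smooth the noisy trajectory, differentiate the smooth fit
    rng = np.random.default_rng(2)
    eta, zeta = -0.9, -0.1                        # true oscillator parameters
    t = np.linspace(0.0, 20.0, 500)
    omega = np.sqrt(-eta - zeta ** 2 / 4.0)
    x_true = np.exp(zeta * t / 2.0) * np.cos(omega * t)
    x_obs = x_true + rng.normal(0.0, 0.02, t.size)

    spl = UnivariateSpline(t, x_obs, k=5, s=t.size * 0.02 ** 2)
    x, dx, d2x = spl(t), spl.derivative(1)(t), spl.derivative(2)(t)

    # ---- Stage 2: fit x'' = eta * x + zeta * x' by ordinary least squares
    A = np.column_stack([x, dx])
    eta_hat, zeta_hat = np.linalg.lstsq(A, d2x, rcond=None)[0]
    print(eta_hat, zeta_hat)                      # should be near (-0.9, -0.1)
    ```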

  14. Research on the measurement technology of effective arm length of swing arm profilometer

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Jing, Hongwei; Wei, Zhongwei; Li, Jie; Cao, Xuedong

    2014-09-01

    When the swing arm profilometer (SAP) measures a mirror, the effective arm length of the SAP, which has an obvious influence on the measured mirror surface shape, needs to be measured accurately. The measurement uncertainty of the effective arm length must reach 10 μm to meet the measurement requirements. In this paper, we present a laser-tracker-based technique to measure the effective arm length of the SAP. When the swing arm rotates around the shaft axis of the swing arm rotary stage, the probe and two laser tracker balls trace three circular arcs in space around that axis. The laser tracker tracks and measures the circular arcs of the two laser tracker balls, and the center coordinates of the plane of each circular arc can be calculated by data processing. The line that passes through the two center coordinates is the shaft axis of the rotary stage, and the perpendicular distance from the probe to this axis, calculated with the point-to-line distance formula, is the effective arm length. Matlab simulation shows that this measurement method can meet the required measurement accuracy.
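
    Once the two arc centers have been fitted, the arm length is a point-to-line distance. A minimal sketch (the center and probe coordinates are hypothetical, and the circle-fitting step that produces the centers is omitted):

    ```python
    import numpy as np

    def point_to_line_distance(p, a, b):
        """Distance from point p to the line through a and b (here, the
        rotary-stage axis reconstructed from the two arc centers)."""
        p, a, b = map(np.asarray, (p, a, b))
        u = b - a
        return np.linalg.norm(np.cross(p - a, u)) / np.linalg.norm(u)

    # hypothetical arc-center and probe coordinates (mm) for illustration
    c1 = [0.0, 0.0, 0.0]
    c2 = [0.0, 0.0, 120.0]
    probe = [449.93, 12.5, 60.0]
    print(point_to_line_distance(probe, c1, c2))   # effective arm length
    ```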

  15. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle rests on the fact that a transversal SAW filter excited by a short pulse generates a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, which corresponds to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than +/-0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
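
    The article reconstructs the two SAW responses from their samples and compares them; as a generic stand-in for that comparison, the sketch below estimates a sub-sample delay from the cross-correlation peak refined by parabolic interpolation. The signal model and sampling rate are illustrative assumptions.

    ```python
    import numpy as np

    def subsample_delay(r1, r2, fs):
        """Delay of r2 relative to r1 via the cross-correlation peak,
        refined by parabolic (three-point) interpolation."""
        xc = np.correlate(r2, r1, mode="full")
        k = int(np.argmax(xc))
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex
        return ((k - (len(r1) - 1)) + frac) / fs

    fs = 200e6                                   # 200 MHz sampling clock
    t = np.arange(0.0, 2e-6, 1.0 / fs)

    def pulse(delay):
        """Hypothetical band-limited response: Gaussian-windowed 10 MHz tone."""
        td = t - delay
        return np.sin(2 * np.pi * 10e6 * td) * np.exp(-((td - 3e-7) / 2e-7) ** 2)

    print(subsample_delay(pulse(0.0), pulse(7.3e-9), fs))   # ~ 7.3e-9 s
    ```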

  16. Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Lv, Sheng-Li; Zhang, Wei

    2018-03-01

    After shot peening, the 7050 aluminum alloy has good anti-fatigue and anti-stress-corrosion properties. In the shot peening process, pellets collide with the target material randomly and generate a residual stress distribution on the target surface, which is of great significance for improving material properties. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position and pellet collision time interval on the residual stress of shot peening was studied through simulations with the ANSYS/LS-DYNA software. The analysis results show that different velocities, different positions and different time intervals have a great influence on the residual stress after shot peening. Comparison with numerical simulation results based on the Kriging model verified the accuracy of the simulation results in this paper. This study provides a reference for the optimization of the shot peening process and makes an effective exploration toward precise shot peening numerical simulation.

  17. Neural methods based on modified reputation rules for detection and identification of intrusion attacks in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2010-04-01

    Determining methods to secure the process of data fusion against attacks by compromised nodes in wireless sensor networks (WSNs) and to quantify the uncertainty that may exist in the aggregation results is a critical issue in mitigating the effects of intrusion attacks. Published research has introduced the concept of the trustworthiness (reputation) of a single sensor node. Reputation is evaluated using an information-theoretic concept, the Kullback-Leibler (KL) distance. Reputation is added to the set of security features. In data aggregation, an opinion, a metric of the degree of belief, is generated to represent the uncertainty in the aggregation result. As aggregate information is disseminated along routes to the sink node(s), its corresponding opinion is propagated and regulated by Josang's belief model. By applying subjective logic on the opinion to manage trust propagation, the uncertainty inherent in aggregation results can be quantified for use in decision making. The concepts of reputation and opinion are modified to allow their application to a class of dynamic WSNs. Using reputation as a factor in determining interim aggregate information is equivalent to implementation of a reputation-based security filter at each processing stage of data fusion, thereby improving the intrusion detection and identification results based on unsupervised techniques. In particular, the reputation-based version of the probabilistic neural network (PNN) learns the signature of normal network traffic, with the random probability weights normally used in the PNN replaced by the trust-based quantified reputations of sensor data or subsequent aggregation results generated by the sequential implementation of a version of Josang's belief model. A two-stage intrusion detection and identification algorithm is implemented to overcome the problems of large sensor data loads and resource restrictions in WSNs. Performance of the two-stage algorithm is assessed in simulations of WSN scenarios with multiple sensors at edge nodes for known intrusion attacks. Simulation results show improved robustness of the two-stage design based on reputation-based NNs to intrusion anomalies from compromised nodes and external intrusion attacks.
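
    Reputation evaluation here rests on the KL distance between what a node reports and what its neighborhood agrees on. A minimal sketch follows; the exponential mapping from distance to a (0, 1] weight is a hypothetical choice, not the paper's exact rule.

    ```python
    import numpy as np

    def kl_divergence(p, q, eps=1e-12):
        """Kullback-Leibler distance between two discrete distributions."""
        p = np.asarray(p, float) + eps
        q = np.asarray(q, float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    def reputation(node_hist, consensus_hist, scale=1.0):
        """Map the KL distance between a node's report histogram and the
        neighborhood consensus to a (0, 1] reputation weight."""
        return float(np.exp(-scale * kl_divergence(node_hist, consensus_hist)))

    print(reputation([5, 90, 5], [10, 80, 10]))    # near consensus -> near 1
    print(reputation([80, 10, 10], [10, 80, 10]))  # divergent node -> near 0
    ```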

  18. Grouping methods for estimating the prevalences of rare traits from complex survey data that preserve confidentiality of respondents.

    PubMed

    Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I

    2018-03-26

    Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single or multiple traits by testing groups of size q, so that individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) how does one weight the test result of each group, given that the sample weights differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation; (2) how does one form groups that allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling, allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster-sampling aspects of complex designed samples. Finite-sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
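
    Stripping away the test error rates and complex-survey weighting that the paper handles, the core of 1-stage group testing is inverting the group-level positivity probability 1 - (1 - p)^q; a minimal simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
p_true, q, n_groups = 0.02, 10, 500   # trait prevalence, group size, number of groups

# Simulate pooled testing: a group tests positive if any member is positive.
members = rng.random((n_groups, q)) < p_true
group_positive = members.any(axis=1)

# Invert P(group +) = 1 - (1 - p)^q to estimate individual-level prevalence.
theta_hat = group_positive.mean()
p_hat = 1.0 - (1.0 - theta_hat) ** (1.0 / q)
print(f"true p = {p_true}, estimated p = {p_hat:.4f}")
```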

  19. Development, current applications and future roles of biorelevant two-stage in vitro testing in drug development.

    PubMed

    Fiolka, Tom; Dressman, Jennifer

    2018-03-01

    Various types of two-stage in vitro testing have been used in a number of experimental settings. In addition to its application in quality control and for regulatory purposes, two-stage in vitro testing has also been shown to be a valuable technique for evaluating the supersaturation and precipitation behavior of poorly soluble drugs during drug development. The so-called 'transfer model', an example of two-stage testing, has provided valuable information about the in vivo performance of poorly soluble, weakly basic drugs by simulating gastrointestinal drug transit from the stomach into the small intestine with a peristaltic pump. The evolution of the transfer model has resulted in various modifications of the experimental set-up. Concomitantly, various research groups have developed simplified approaches to two-stage testing that investigate the supersaturation and precipitation behavior of weakly basic drugs without the need for a transfer pump. Given the diversity among the two-stage test methods available today, a more harmonized approach is needed to optimize the use of two-stage testing at different stages of drug development. © 2018 Royal Pharmaceutical Society.
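
    A minimal sketch of the transfer-model idea: first-order gastric emptying into an intestinal compartment where dissolved drug precipitates once supersaturated. All rate constants, volumes, and the solubility below are illustrative assumptions, not values from any transfer-model study:

```python
import numpy as np
from scipy.integrate import solve_ivp

k_transfer = 0.05   # 1/min, assumed gastric -> intestinal transfer rate
k_precip   = 0.10   # 1/min, assumed first-order precipitation rate
V_int      = 500.0  # mL, assumed intestinal volume
C_sol      = 0.05   # mg/mL, assumed intestinal solubility of the weak base

def rhs(t, y):
    a_gastric, a_dissolved = y            # mg in stomach, mg dissolved in intestine
    transfer = k_transfer * a_gastric
    c = a_dissolved / V_int
    # Precipitation acts only on the supersaturated excess.
    precip = k_precip * max(c - C_sol, 0.0) * V_int
    return [-transfer, transfer - precip]

sol = solve_ivp(rhs, (0, 240), [100.0, 0.0], dense_output=True)  # 100 mg dose
t = np.linspace(0, 240, 5)
print(sol.sol(t)[1] / V_int)  # intestinal concentration (mg/mL) over 4 h
```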

  20. How do gait frequency and serum-replacement interval affect polyethylene wear in knee-wear simulator tests?

    PubMed

    Reinders, Jörn; Sonntag, Robert; Kretzer, Jan Philippe

    2014-11-01

    Wear of polyethylene (PE) is known to be a limiting factor in total joint replacements. However, a standardized wear test (e.g. per ISO standard) can only replicate the complex in vivo loading conditions in a simplified form. In this study, two different parameters were analyzed: (a) bovine serum, as a substitute for synovial fluid, is typically replaced every 500,000 cycles, whereas continuous regeneration takes place in vivo; how does the serum-replacement interval affect the wear rate of total knee replacements? (b) Patients with an artificial joint show reduced gait frequencies compared to standardized testing; what is the influence of a reduced frequency? Three knee wear tests were run: (a) a reference test (ISO), (b) testing with a shortened lubricant-replacement interval, and (c) testing with reduced frequency. The wear behavior was determined based on gravimetric measurements and wear particle analysis. The results showed that the reduced test frequency had only a small effect on wear behavior; testing at 1 Hz is therefore a valid method for wear testing. However, testing with a shortened replacement interval nearly doubled the wear rate. Wear particle analysis revealed only small differences in wear particle size between the tests, and wear particles were not released linearly within one replacement interval. The ISO standard should be revised to address the marked effect of the lubricant-replacement interval on wear rate.

  1. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    Existing instruments that evaluate students' perceptions of simulation-based training are available only in English versions and have not been tested for reliability or validity in a Chinese context. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, an initial item pool of 50 items related to simulation was drawn from the literature on core competencies. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study, of whom two hundred and twenty-five completed and returned questionnaires (response rate=90%). Six items were deleted from the initial item pool and one was added after the expert panel review. Exploratory factor analysis with varimax rotation revealed 37 items remaining in five factors, which accounted for 67% of the variance. The construct validity of the SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure, and the findings satisfy the criteria of convergent and discriminant validity. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The internal consistency for the five subscales ranged from .90 to .93. The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.
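
    The internal-consistency figure reported here is Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale); a minimal computation on a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items (e.g., 1-5 Likert)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(225, 1))               # a common factor drives all items
items = np.clip(np.round(3 + latent + rng.normal(scale=0.7, size=(225, 8))), 1, 5)
print(cronbach_alpha(items))  # high alpha, since the items share one factor
```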

  2. The Nature of Phonological Encoding During Spoken Word Retrieval.

    ERIC Educational Resources Information Center

    Sullivan, Michael P.; Riffel, Brian

    1999-01-01

    Examined whether phonological selection occurs sequentially or in parallel. College students named picture primes and targets, with varied response stimulus intervals between primes and targets. Results were consistent with Dell's (1988) two-stage sequential model of encoding, which shows an initial parallel activation within a lexical network…

  3. Simulation and Validation of Injection-Compression Filling Stage of Liquid Moulding with Fast Curing Resins

    NASA Astrophysics Data System (ADS)

    Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa

    2018-03-01

    Very short manufacturing cycle times are required if continuous carbon fibre/epoxy composite components are to be economically viable for high-volume automotive production. Here, a process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology for the process, using an RTM solver, has been developed. This paper compares the simulation predictions to experiments performed using industrial equipment. Issues encountered during manufacturing are included in the simulation, and the sensitivity of the process to them is explored.
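
    As a back-of-the-envelope counterpart to such RTM flow solvers, 1-D Darcy flow under constant injection pressure gives a flow front x_f(t) = sqrt(2*K*dP*t/(phi*mu)) and hence a closed-form fill time; all material values below are assumptions:

```python
K      = 1e-10   # m^2, assumed preform permeability
dP     = 20e5    # Pa, assumed injection pressure drop
phi    = 0.5     # assumed porosity of the fibre preform
mu     = 0.1     # Pa.s, assumed resin viscosity early in cure
length = 0.5     # m, flow length to fill

# Invert x_f(t) = sqrt(2*K*dP*t/(phi*mu)) for the time to reach x_f = length.
t_fill = phi * mu * length**2 / (2 * K * dP)
print(f"1-D constant-pressure fill time: {t_fill:.1f} s")  # ~31 s for these values
```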

  4. Assessing very high resolution UAV imagery for monitoring forest health during a simulated disease outbreak

    NASA Astrophysics Data System (ADS)

    Dash, Jonathan P.; Watt, Michael S.; Pearse, Grant D.; Heaphy, Marie; Dungey, Heidi S.

    2017-09-01

    Research into remote sensing tools for monitoring physiological stress caused by biotic and abiotic factors is critical for maintaining healthy and highly-productive plantation forests. Significant research has focussed on assessing forest health using remotely sensed data from satellites and manned aircraft. Unmanned aerial vehicles (UAVs) may provide new tools for improved forest health monitoring by providing data with very high temporal and spatial resolutions. These platforms also pose unique challenges and methods for health assessments must be validated before use. In this research, we simulated a disease outbreak in mature Pinus radiata D. Don trees using targeted application of herbicide. The objective was to acquire a time-series simulated disease expression dataset to develop methods for monitoring physiological stress from a UAV platform. Time-series multi-spectral imagery was acquired using a UAV flown over a trial at regular intervals. Traditional field-based health assessments of crown health (density) and needle health (discolouration) were carried out simultaneously by experienced forest health experts. Our results showed that multi-spectral imagery collected from a UAV is useful for identifying physiological stress in mature plantation trees even during the early stages of tree stress. We found that physiological stress could be detected earliest in data from the red edge and near infra-red bands. In contrast to previous findings, red edge data did not offer earlier detection of physiological stress than the near infra-red data. A non-parametric approach was used to model physiological stress based on spectral indices and was found to provide good classification accuracy (weighted kappa = 0.694). This model can be used to map physiological stress based on high-resolution multi-spectral data.
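
    The red-edge and near infra-red comparisons are typically made through normalized-difference indices; a sketch of NDVI and a red-edge analogue over multi-spectral reflectance rasters (the band data and the stress threshold are assumptions, not the paper's classifier):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)

def ndre(nir, red_edge):
    """Red-edge analogue of NDVI; declines under canopy stress."""
    return (nir - red_edge) / (nir + red_edge + 1e-9)

# Assumed reflectance rasters from the UAV multi-spectral camera.
rng = np.random.default_rng(0)
nir, red, red_edge = (rng.random((100, 100)) for _ in range(3))
stressed = ndvi(nir, red) < 0.4          # illustrative threshold
early    = ndre(nir, red_edge) < 0.3     # illustrative threshold
print(f"flagged by NDVI: {stressed.mean():.2f}, by red edge: {early.mean():.2f}")
```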

  5. Discovering the Complexity of Capable Faults in Northern Chile

    NASA Astrophysics Data System (ADS)

    Gonzalez, G.; del Río, I. A.; Rojas Orrego, C., Sr.; Astudillo, L. A., Sr.

    2017-12-01

    Great crustal earthquakes (Mw >7.0) in the upper plate of subduction zones are relatively uncommon and less well documented. We hypothesize that crustal earthquakes are poorly represented in the instrumental record because they have long recurrence intervals. In northern Chile, the extreme long-term aridity permits extraordinary preservation of landforms related to fault activity, making this region a primary target for understanding how upper plate faults work at subduction zones. To understand how these faults relate to crustal seismicity in the long term, we have conducted a detailed palaeoseismological study, integrating trench logging and UAV-based photogrammetry. Optically stimulated luminescence (OSL) age determinations were performed to date deposits linked to faulting. In this contribution we present the case study of two primary faults located in the Coastal Cordillera of northern Chile between Iquique (21ºS) and Antofagasta (24ºS). We estimate the maximum moment magnitude of earthquakes generated on these upper plate faults, their recurrence interval, and the fault-slip rate. We conclude that the studied upper plate faults show complex kinematics on geological timescales. Faults seem to change their kinematics from normal (extension) to reverse (compression), or from normal to transcurrent (compression), according to the stage of the subduction earthquake cycle: normal displacement is related to coseismic stages, and compression is linked to the interseismic period. As a result of this complex interaction, these faults are capable of generating Mw 7.0 earthquakes, with recurrence times on the order of thousands of years, during every stage of the subduction earthquake cycle.

  6. Documentation of the Surface-Water Routing (SWR1) Process for modeling surface-water flow with the U.S. Geological Survey Modular Ground-Water Model (MODFLOW-2005)

    USGS Publications Warehouse

    Hughes, Joseph D.; Langevin, Christian D.; Chartier, Kevin L.; White, Jeremy T.

    2012-01-01

    A flexible Surface-Water Routing (SWR1) Process that solves the continuity equation for one-dimensional and two-dimensional surface-water flow routing has been developed for the U.S. Geological Survey three-dimensional groundwater model, MODFLOW-2005. Simple level- and tilted-pool reservoir routing and a diffusive-wave approximation of the Saint-Venant equations have been implemented. Both methods can be implemented in the same model and the solution method can be simplified to represent constant-stage elements that are functionally equivalent to the standard MODFLOW River or Drain Package boundary conditions. A generic approach has been used to represent surface-water features (reaches) and allows implementation of a variety of geometric forms. One-dimensional geometric forms include rectangular, trapezoidal, and irregular cross section reaches to simulate one-dimensional surface-water features, such as canals and streams. Two-dimensional geometric forms include reaches defined using specified stage-volume-area-perimeter (SVAP) tables and reaches covering entire finite-difference grid cells to simulate two-dimensional surface-water features, such as wetlands and lakes. Specified SVAP tables can be used to represent reaches that are smaller than the finite-difference grid cell (for example, isolated lakes), or reaches that cannot be represented accurately using the defined top of the model. Specified lateral flows (which can represent point and distributed flows) and stage-dependent rainfall and evaporation can be applied to each reach. The SWR1 Process can be used with the MODFLOW Unsaturated Zone Flow (UZF1) Package to permit dynamic simulation of runoff from the land surface to specified reaches. Surface-water/groundwater interactions in the SWR1 Process are mathematically defined to be a function of the difference between simulated stages and groundwater levels, and the specific form of the reach conductance equation used in each reach. Conductance can be specified directly or calculated as a function of the simulated wetted perimeter and defined reach bed hydraulic properties, or as a weighted combination of both reach bed hydraulic properties and horizontal hydraulic conductivity. Each reach can be explicitly coupled to a single specific groundwater-model layer or coupled to multiple groundwater-model layers based on the reach geometry and groundwater-model layer elevations in the row and column containing the reach. Surface-water flow between reservoirs is simulated using control structures. Surface-water flow between reaches, simulated by the diffusive-wave approximation, can also be simulated using control structures. A variety of control structures have been included in the SWR1 Process and include (1) excess-volume structures, (2) uncontrolled-discharge structures, (3) pumps, (4) defined stage-discharge relations, (5) culverts, (6) fixed- or movable-crest weirs, and (7) fixed or operable gated spillways. Multiple control structures can be implemented in individual reaches and are treated as composite flow structures. Solution of the continuity equation at the reach-group scale (a single reach or a user-defined collection of individual reaches) is achieved using exact Newton methods with direct solution methods or exact and inexact Newton methods with Krylov sub-space methods. Newton methods have been used in the SWR1 Process because of their ability to solve nonlinear problems. 
Multiple SWR1 time steps can be simulated for each MODFLOW time step, and a simple adaptive time-step algorithm, based on user-specified rainfall, stage, flow, or convergence constraints, has been implemented to better resolve surface-water response. A simple linear- or sigmoid-depth scaling approach also has been implemented to account for increased bed roughness at small surface-water depths and to increase numerical stability. A line-search algorithm also has been included to improve the quality of the Newton-step upgrade vector, if possible. The SWR1 Process has been benchmarked against one- and two-dimensional numerical solutions from existing one- and two-dimensional numerical codes that solve the dynamic-wave approximation of the Saint-Venant equations. Two-dimensional solutions test the ability of the SWR1 Process to simulate the response of a surface-water system to (1) steady flow conditions for an inclined surface (solution of Manning's equation), and (2) transient inflow and rainfall for an inclined surface. The one-dimensional solution tests the ability of the SWR1 Process to simulate a looped network with multiple upstream inflows and several control structures. The SWR1 Process also has been compared to a level-pool reservoir solution. A synthetic test problem was developed to evaluate a number of different SWR1 solution options and simulate surface-water/groundwater interaction. The solution approach used in the SWR1 Process may not be applicable for all surface-water/groundwater problems. The SWR1 Process is best suited for modeling long-term changes (days to years) in surface-water and groundwater flow. Use of the SWR1 Process is not recommended for modeling the transient exchange of water between streams and aquifers when local and convective acceleration and other secondary effects (for example, wind and Coriolis forces) are substantial. Dam break evaluations and two-dimensional evaluations of spatially extensive domains are examples where acceleration terms and secondary effects would be significant, respectively.
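
    The level-pool reservoir option reduces, per time step, to the continuity equation A*dh/dt = I - O(h); with implicit Euler and a weir-type outflow this becomes a scalar Newton solve per step, as in this sketch (geometry and coefficients are illustrative, not SWR1 input):

```python
A_pool, Cw, Lw = 1.0e5, 1.7, 10.0       # pool area (m^2), weir coeff, crest length (m)

def outflow(h):                          # free-crest weir; flow only above h = 0
    return Cw * Lw * max(h, 0.0) ** 1.5

def step(h_old, inflow, dt):
    """Implicit-Euler level-pool step: A*(h - h_old)/dt = I - O(h), via Newton."""
    h = h_old
    for _ in range(50):
        f  = A_pool * (h - h_old) / dt - inflow + outflow(h)
        df = A_pool / dt + 1.5 * Cw * Lw * max(h, 0.0) ** 0.5   # f'(h)
        h_new = h - f / df
        if abs(h_new - h) < 1e-10:
            return h_new
        h = h_new
    return h

h, dt = 0.0, 3600.0                      # stage above crest (m), 1-hour steps
for q_in in [5.0, 20.0, 40.0, 25.0, 10.0, 2.0]:   # inflow hydrograph (m^3/s)
    h = step(h, q_in, dt)
    print(f"stage {h:.3f} m, outflow {outflow(h):.2f} m^3/s")
```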

  7. Optimization and development of a core-in-cup tablet for modulated release of theophylline in simulated gastrointestinal fluids.

    PubMed

    Danckwerts, M P

    2000-07-01

    A triple-layer core-in-cup tablet that can release theophylline in simulated gastrointestinal (GI) fluids at three distinct rates has been developed. The first layer is an immediate-release layer; the second layer is a sustained-release layer; and the last layer is a boost layer, which was designed to coincide with a higher nocturnal dose of theophylline. The study consisted of two stages. The first stage optimized the sustained-release layer of the tablet to release theophylline over a period of 12 hr. Results from this stage indicated that 30% w/w acacia gum was the best polymer and concentration to use when compressed to a hardness of 50 N/m2. The second stage of the study involved the investigation of the final triple-layer core-in-cup tablet to release theophylline at three different rates in simulated GI fluids. The triple-layer modulated core-in-cup tablet successfully released drug in simulated fluids at an initial rate of 40 mg/min, followed by a rate of 0.4085 mg/min, in simulated gastric fluid TS, 0.1860 mg/min in simulated intestinal fluid TS, and finally by a boosted rate of 0.6952 mg/min.

  8. Study of premixing phase of steam explosion with JASMINE code in ALPHA program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu

    Premixing phase of steam explosion has been studied in the ALPHA Program at the Japan Atomic Energy Research Institute (JAERI). An analytical model to simulate the premixing phase, JASMINE (JAERI Simulator for Multiphase Interaction and Explosion), has been developed based on a multi-dimensional multi-phase thermal hydraulics code, MISTRAL (by Fuji Research Institute Co.). The original code was extended to simulate the physics of the premixing phenomena. The first stage of code validation was performed by analyzing two mixing experiments with solid particles and water: the isothermal experiment by Gilbertson et al. (1992) and the hot particle experiment by Angelini et al. (1993) (MAGICO). The code predicted the experiments reasonably well. Effectiveness of the TVD scheme employed in the code was also demonstrated.

  9. Second-order quadrupolar line shapes under molecular dynamics: An additional transition in the extremely fast regime.

    PubMed

    Hung, Ivan; Wu, Gang; Gan, Zhehong

    NMR spectroscopy is a powerful tool for probing molecular dynamics. For the classic case of two-site exchange, NMR spectra go through the transition from exchange broadening through coalescence and then motional narrowing as the exchange rate increases, passing through the difference between the resonance frequencies of the two sites. For central-transition spectra of half-integer quadrupolar nuclei in solids, line shape change due to molecular dynamics occurs in two stages. The first stage occurs when the exchange rate is comparable to the second-order quadrupolar interaction. The second spectral transition comes at a faster exchange rate which approaches the Larmor frequency and generally reduces the isotropic quadrupolar shift. Such a two-stage transition phenomenon is unique to half-integer quadrupolar nuclei. A quantum mechanical formalism in full Liouville space is presented to explain the physical origin of the two-stage phenomenon and for use in spectral simulations. Variable-temperature ¹⁷O NMR of solid NaNO₃, in which the NO₃⁻ ion undergoes 3-fold jumps, confirms the two-stage transition process. The spectra of NaNO₃ acquired in the temperature range of 173-413 K agree well with simulations using the quantum mechanical formalism. The rate constants for the 3-fold NO₃⁻ ion jumps span eight orders of magnitude (10²-10¹⁰ s⁻¹), covering both transitions of the dynamic ¹⁷O line shape. Copyright © 2016 Elsevier Inc. All rights reserved.
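
    The first-stage behavior is the familiar broadening/coalescence/narrowing sequence, which the simple Bloch-McConnell two-site picture already reproduces; this sketch only illustrates that generic transition, not the paper's full Liouville-space second-order quadrupolar formalism:

```python
import numpy as np

def two_site_spectrum(delta, k_ex, r2=10.0, pts=2000, span=2.0):
    """Lineshape for symmetric two-site exchange with equal populations.
    delta: frequency separation of the sites (rad/s); k_ex: jump rate (1/s)."""
    w_sites = np.array([-delta / 2, +delta / 2])
    K = k_ex * np.array([[-1.0, 1.0], [1.0, -1.0]])     # exchange matrix
    L = 1j * np.diag(w_sites) - r2 * np.eye(2) + K      # evolution matrix
    p = np.array([0.5, 0.5])                            # equal populations
    w = np.linspace(-span * delta, span * delta, pts)
    # Spectrum = Re[ 1^T (i*w*I - L)^{-1} p ], one small solve per frequency point.
    spec = [np.real(np.ones(2) @ np.linalg.solve(1j * wi * np.eye(2) - L, p))
            for wi in w]
    return w, np.array(spec)

# Slow exchange (two lines), near coalescence, fast exchange (one narrow line):
for k in (10.0, 700.0, 1e5):
    w, s = two_site_spectrum(delta=1000.0, k_ex=k)
    print(f"k = {k:8.0f} 1/s  peak height = {s.max():.4f}")
```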

  10. June 2002 floods in the Red River of the North basin in northeastern North Dakota and northwestern Minnesota

    USGS Publications Warehouse

    Wiche, Gregg J.; Guttormson, K.G.; Robinson, S.M.; Mitton, G.B.; Bramer, B.J.

    2002-01-01

    Historical peak stages and peak discharges and the June 2002 peak stages, peak discharges, and recurrence intervals are shown in table 1.  The streamflow-gaging stations are listed in downstream order by station number, and station locations are shown in figure 1.  The June 2002 peak stages and peak discharges given in this preliminary report may be revised as site surveys are completed and additional field data are reviewed in the upcoming months.  The peak discharges are used to determine the probability, often expressed in recurrence intervals, that a given discharge will be exceeded in the future.  For example, a flood that has a 1-percent chance of exceedance in any given year would, on the long-term average, be expected to occur only about once a century; therefore, the flood would be termed a "100-year flood."  However, the chance of such a flood occurring in any given year is 1 percent.  Thus, a 100-year flood can occur in successive years at the same location.  In some instances, recurrence interval estimates can be based on periods of regulated flow or made with historic adjustments when historic data are available.
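
    The recurrence-interval arithmetic in the text follows from independence across years: the chance of at least one T-year flood in n years is 1 - (1 - 1/T)^n, e.g.:

```python
# Probability of at least one T-year flood in n years: 1 - (1 - 1/T)**n
T = 100
for n in (1, 10, 30, 100):
    print(f"{n:3d} years: {1 - (1 - 1/T)**n:.3f}")
# Over 100 years the chance is ~0.634 -- a "100-year flood" is likely but
# not certain within a century, and can recur in successive years.
```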

  11. Resource Allocation and Outpatient Appointment Scheduling Using Simulation Optimization

    PubMed Central

    Lin, Carrie Ka Yuk; Ling, Teresa Wai Ching; Yeung, Wing Kwan

    2017-01-01

    This paper studies the real-life problems of outpatient clinics having the multiple objectives of minimizing resource overtime, patient waiting time, and waiting area congestion. In the clinic, there are several patient classes, each of which follows different treatment procedure flow paths through a multiphase and multiserver queuing system with scarce staff and limited space. We incorporate the stochastic factors for the probabilities of the patients being diverted into different flow paths, patient punctuality, arrival times, procedure duration, and the number of accompanied visitors. We present a novel two-stage simulation-based heuristic algorithm to assess various tactical and operational decisions for optimizing the multiple objectives. In stage I, we search for a resource allocation plan, and in stage II, we determine a block appointment schedule by patient class and a service discipline for the daily operational level. We also explore the effects of the separate strategies and their integration to identify the best possible combination. The computational experiments are designed on the basis of data from a study of an ophthalmology clinic in a public hospital. Results show that our approach significantly mitigates the undesirable outcomes by integrating the strategies and increasing the resource flexibility at the bottleneck procedures without adding resources. PMID:29104748
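
    A stripped-down version of the two-stage heuristic: stage I fixes a staffing level under a default schedule, stage II picks the block appointment schedule for that level, each candidate scored by Monte Carlo simulation of a single-phase multiserver queue (the distributions and tiny search spaces are illustrative assumptions):

```python
import heapq, random

def simulate(n_servers, block_starts, n_rep=200, svc_mean=12.0):
    """Mean patient wait (min) from Monte Carlo replications of one clinic session."""
    total_wait = 0.0
    for _ in range(n_rep):
        free_at = [0.0] * n_servers                   # next-free times of the servers
        for t_appt in block_starts:
            arrive = t_appt + random.gauss(0, 5.0)             # patient (un)punctuality
            start = max(arrive, heapq.heappop(free_at))        # earliest free server
            total_wait += start - arrive
            heapq.heappush(free_at, start + random.expovariate(1.0 / svc_mean))
    return total_wait / (n_rep * len(block_starts))

random.seed(7)
default_blocks = [0, 15, 30, 45, 60, 75]
# Stage I: choose the staffing level under the default schedule.
s_best = min((2, 3), key=lambda s: simulate(s, default_blocks))
# Stage II: choose the block schedule for that staffing level.
candidates = (default_blocks, [0, 0, 30, 30, 60, 60], [0, 10, 20, 40, 60, 80])
b_best = min(candidates, key=lambda b: simulate(s_best, b))
print(s_best, b_best, round(simulate(s_best, b_best), 1))
```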

  13. Research in digital adaptive flight controllers

    NASA Technical Reports Server (NTRS)

    Kaufman, H.

    1976-01-01

    A design study of adaptive control logic suitable for implementation in modern airborne digital flight computers was conducted. Both explicit controllers, which directly utilize parameter identification, and implicit controllers, which do not require identification, were considered. Extensive analytical and simulation efforts resulted in the recommendation of two explicit digital adaptive flight controllers, both of which interface weighted-least-squares estimation procedures with control logic based either on optimal regulator theory or on single-stage performance indices.
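
    An explicit adaptive controller of this kind identifies plant parameters online and feeds them to the control law; a generic weighted (recursive) least-squares identification step, with the forgetting factor and toy plant as assumptions rather than the report's design:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive-least-squares step for y ~ x @ theta, forgetting factor lam."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # parameter update
    P = (P - np.outer(k, Px)) / lam      # covariance update
    return theta, P

rng = np.random.default_rng(3)
true_theta = np.array([0.8, -0.4])       # unknown plant parameters
theta, P = np.zeros(2), np.eye(2) * 100.0
for _ in range(300):
    x = rng.normal(size=2)               # regressor (e.g., past outputs and inputs)
    y = x @ true_theta + 0.01 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta)                             # converges near [0.8, -0.4]
```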

  14. Analytical study of effect of casing treatment on performance of a multistage compressor

    NASA Technical Reports Server (NTRS)

    Snyder, R. W.; Blade, R. J.

    1972-01-01

    The simulation was based on individual stage pressure and efficiency maps, which were modified to account for casing treatment effects on the individual stage characteristics. The effects of the modified stage maps on overall compressor performance were then observed. The results show that, to improve the performance of the compressor in its normal operating range, casing treatment of the rear stages is required.
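
    Stage stacking marches flow conditions through each stage's map in turn; a toy version with made-up map functions shows the mechanics, with casing treatment represented by swapping in altered rear-stage maps:

```python
GAMMA = 1.4  # ratio of specific heats for air

def stack_stages(T_in, P_in, phi, stage_maps):
    """March inlet conditions through per-stage (PR(phi), eta(phi)) map functions."""
    T, P = T_in, P_in
    for pr_map, eta_map in stage_maps:
        pr, eta = pr_map(phi), eta_map(phi)
        T += T * (pr ** ((GAMMA - 1) / GAMMA) - 1) / eta   # stage temperature rise
        P *= pr
    return T, P

# Illustrative constant-speed maps (flow coefficient phi); not from any real rig.
base    = (lambda phi: 1.30 - 0.2 * (phi - 0.5) ** 2, lambda phi: 0.88)
treated = (lambda phi: 1.28 - 0.1 * (phi - 0.5) ** 2, lambda phi: 0.86)
print(stack_stages(288.15, 101325.0, 0.55, [base] * 6))                  # untreated
print(stack_stages(288.15, 101325.0, 0.55, [base] * 4 + [treated] * 2))  # rear treated
```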

  15. Students' Development of Representational Competence Through the Sense of Touch

    NASA Astrophysics Data System (ADS)

    Magana, Alejandra J.; Balachandran, Sadhana

    2017-06-01

    Electromagnetism is an umbrella term encapsulating several different concepts, such as electric current, electric fields and forces, and magnetic fields and forces, among other topics. However, a number of past studies have highlighted students' poor conceptual understanding of electromagnetism concepts even after instruction. This study aims to identify novel forms of "hands-on" instruction that can result in representational competence and conceptual gain. Specifically, it aimed to identify whether the use of visuohaptic simulations can have an effect on student representations of electromagnetism-related concepts. The guiding question is: How do visuohaptic simulations influence undergraduate students' representations of electric forces? Participants included nine undergraduate students from science, technology, or engineering backgrounds who participated in a think-aloud procedure while interacting with a visuohaptic simulation. The think-aloud procedure was divided into three stages: a prediction stage, a minimally visual haptic stage, and a visually enhanced haptic stage. The results of this study suggest that students accurately characterized and represented the forces felt around particle, line, and ring charges in either the prediction stage, the minimally visual haptic stage, or the visually enhanced haptic stage. Also, some students accurately depicted the three-dimensional nature of the field for each configuration in the two stages that included a tactile mode, where the point charge was the most challenging one.

  16. Long Time to Diagnosis of Medulloblastoma in Children Is Not Associated with Decreased Survival or with Worse Neurological Outcome

    PubMed Central

    Brasme, Jean-Francois; Grill, Jacques; Doz, Francois; Lacour, Brigitte; Valteau-Couanet, Dominique; Gaillard, Stephan; Delalande, Olivier; Aghakhani, Nozar; Puget, Stéphanie; Chalumeau, Martin

    2012-01-01

    Background The long time to diagnosis of medulloblastoma, one of the most frequent brain tumors in children, is the source of painful remorse and sometimes lawsuits. We analyzed its consequences for tumor stage, survival, and sequelae. Patients and Methods This retrospective population-based cohort study included all cases of pediatric medulloblastoma from a region of France between 1990 and 2005. We collected the demographic, clinical, and tumor data and analyzed the relations between the interval from symptom onset until diagnosis, initial disease stage, survival, and neuropsychological and neurological outcome. Results The median interval from symptom onset until diagnosis for the 166 cases was 65 days (interquartile range 31–121, range 3–457). A long interval (defined as longer than the median) was associated with a lower frequency of metastasis in the univariate and multivariate analyses and with a larger tumor volume, desmoplastic histology, and longer survival in the univariate analysis, but not after adjustment for confounding factors. The time to diagnosis was significantly associated with IQ score among survivors. No significant relation was found between the time to diagnosis and neurological disability. In the 62 patients with metastases, a long prediagnosis interval was associated with a higher T stage, infiltration of the fourth ventricle floor, and incomplete surgical resection; it nonetheless did not influence survival significantly in this subgroup. Conclusions We found complex and often inverse relations between time to diagnosis of medulloblastoma in children and initial severity factors, survival, and neuropsychological and neurological outcome. This interval appears due more to the nature of the tumor and its progression than to parental or medical factors. These conclusions should be taken into account in the information provided to parents and in expert assessments produced for malpractice claims. PMID:22485143

  17. GENOA-PFA: Progressive Fracture in Composites Simulated Computationally

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    2000-01-01

    GENOA-PFA is a commercial version of the Composite Durability Structural Analysis (CODSTRAN) computer program that simulates the progression of damage ultimately leading to fracture in polymer-matrix-composite (PMC) material structures under various loading and environmental conditions. GENOA-PFA offers several capabilities not available in other programs developed for this purpose, making it preferable for use in analyzing the durability and damage tolerance of complex PMC structures in which the fiber reinforcements occur in two- and three-dimensional weaves and braids. GENOA-PFA implements a progressive-fracture methodology based on the idea that a structure fails when flaws that may initially be small (even microscopic) grow and/or coalesce to a critical dimension where the structure no longer has an adequate safety margin to avoid catastrophic global fracture. Damage is considered to progress through five stages: (1) initiation, (2) growth, (3) accumulation (coalescence of propagating flaws), (4) stable propagation (up to the critical dimension), and (5) unstable or very rapid propagation (beyond the critical dimension) to catastrophic failure. The computational simulation of progressive failure involves formal procedures for identifying the five different stages of damage and for relating the amount of damage at each stage to the overall behavior of the deteriorating structure. In GENOA-PFA, mathematical modeling of the composite physical behavior involves an integration of simulations at multiple, hierarchical scales ranging from the macroscopic (lamina, laminate, and structure) to the microscopic (fiber, matrix, and fiber/matrix interface), as shown in the figure. The code includes algorithms to simulate the progression of damage from various source defects, including (1) through-the-thickness cracks and (2) voids with edge, pocket, internal, or mixed-mode delaminations.

  18. Investigation on a thermal-coupled two-stage Stirling-type pulse tube cryocooler

    NASA Astrophysics Data System (ADS)

    Yang, Luwei

    2008-11-01

    Multi-stage Stirling-type pulse tube cryocoolers operating at high frequency (30-60 Hz) have been one important research direction in recent years. A two-stage Stirling-type pulse tube cryocooler with thermally coupled stages was designed and built two years ago, and some results have been published. To study the effect of the first-stage precooling temperature, the related performance characteristics are experimentally investigated. At high input power, when the precooling temperature is lower than 110 K, its effect on the second-stage temperature is quite small. The precooling temperature also has an evident effect on the pulse tube temperature distribution; this is the first time the authors have observed this phenomenon. The mean working pressure was also investigated, and a lowest temperature of 12.8 K was attained with 500 W input power and 1.22 MPa average pressure, the lowest reported temperature for high-frequency two-stage pulse tube cryocoolers. Simulations reproduce the typical features observed in the experiments.

  19. Comparison of two adaptive temperature-based replica exchange methods applied to a sharp phase transition of protein unfolding-folding.

    PubMed

    Lee, Michael S; Olson, Mark A

    2011-06-28

    Temperature-based replica exchange (T-ReX) enhances sampling of molecular dynamics simulations by autonomously heating and cooling simulation clients via a Metropolis exchange criterion. A pathological case for T-ReX can occur when a change in state (e.g., folding to unfolding of a protein) has a large energetic difference over a short temperature interval leading to insufficient exchanges amongst replica clients near the transition temperature. One solution is to allow the temperature set to dynamically adapt in the temperature space, thereby enriching the population of clients near the transition temperature. In this work, we evaluated two approaches for adapting the temperature set: a method that equalizes exchange rates over all neighbor temperature pairs and a method that attempts to induce clients to visit all temperatures (dubbed "current maximization") by positioning many clients at or near the transition temperature. As a test case, we simulated the 57-residue SH3 domain of alpha-spectrin. Exchange rate equalization yielded the same unfolding-folding transition temperature as fixed-temperature ReX with much smoother convergence of this value. Surprisingly, the current maximization method yielded a significantly lower transition temperature, in close agreement with experimental observation, likely due to more extensive sampling of the transition state.
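
    The Metropolis exchange criterion for neighbouring temperatures is standard; the temperature-update rule for rate equalization below is just one simple assumed scheme, not the paper's exact algorithm:

```python
import math, random

kB = 0.0019872  # kcal/(mol K)

def try_swap(E_i, E_j, T_i, T_j):
    """Metropolis criterion for exchanging configurations at temperatures T_i, T_j."""
    delta = (1 / (kB * T_i) - 1 / (kB * T_j)) * (E_i - E_j)
    return delta >= 0 or random.random() < math.exp(delta)

def equalize(temps, acc_rates, gain=10.0):
    """Nudge interior temperatures toward equal neighbour acceptance rates:
    acc_rates[k] is the observed rate for the pair (temps[k], temps[k+1])."""
    mean = sum(acc_rates) / len(acc_rates)
    new = list(temps)
    for k in range(1, len(temps) - 1):
        # Widen the gap below temp k if its pair exchanges too often, and
        # shrink the gap above it if that pair exchanges too often.
        new[k] += gain * (acc_rates[k - 1] - mean) - gain * (acc_rates[k] - mean)
    return new
```

    Near a sharp transition, rate equalization packs temperatures densely around the transition temperature; the "current maximization" variant instead drives clients to traverse the whole temperature ladder.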

  20. Relaxation estimation of RMSD in molecular dynamics immunosimulations.

    PubMed

    Schreiner, Wolfgang; Karch, Rudolf; Knapp, Bernhard; Ilieva, Nevena

    2012-01-01

    Molecular dynamics simulations have to be sufficiently long to draw reliable conclusions. However, no method exists to prove that a simulation has converged. We suggest the method of "lagged RMSD-analysis" as a tool to judge if an MD simulation has not yet run long enough. The analysis is based on RMSD values between pairs of configurations separated by variable time intervals Δt. Unless RMSD(Δt) has reached a stationary shape, the simulation has not yet converged.
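
    On aligned per-frame coordinates, the lagged analysis is simply the mean RMSD as a function of frame separation; a minimal numpy version:

```python
import numpy as np

def lagged_rmsd(traj, lags):
    """traj: (n_frames, n_atoms, 3) coordinates already aligned to a reference.
    Returns the mean RMSD over all frame pairs separated by each lag."""
    out = []
    for lag in lags:
        diff = traj[lag:] - traj[:-lag]                       # pairs (t, t + lag)
        rmsd = np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))  # per-pair RMSD
        out.append(rmsd.mean())
    return np.array(out)

# This toy random-walk trajectory keeps drifting, so RMSD(lag) keeps growing --
# the stationary plateau that signals convergence never appears.
rng = np.random.default_rng(4)
traj = np.cumsum(rng.normal(size=(1000, 50, 3)), axis=0) * 0.01
print(lagged_rmsd(traj, lags=[1, 10, 100, 500]))
```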
