Sample records for mixed model procedure

  1. Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models

    ERIC Educational Resources Information Center

    Liu, Qian

    2011-01-01

    For this dissertation, four item purification procedures were implemented in the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…

  2. An Investigation of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee

    2009-01-01

    The purpose of this study was to investigate procedures for assessing model fit of IRT models for mixed format data. In this study, various IRT model combinations were fitted to data containing both dichotomous and polytomous item responses, and the suitability of the chosen model mixtures was evaluated based on a number of model fit procedures.…

  3. An Efficient Alternative Mixed Randomized Response Procedure

    ERIC Educational Resources Information Center

    Singh, Housila P.; Tarray, Tanveer A.

    2015-01-01

    In this article, we have suggested a new modified mixed randomized response (RR) model and studied its properties. It is shown that the proposed mixed RR model is always more efficient than Kim and Warde's mixed RR model. The proposed mixed RR model has also been extended to stratified sampling. Numerical illustrations and graphical…

  4. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  5. On Fitting Generalized Linear Mixed-effects Models for Binary Responses using Different Statistical Packages

    PubMed Central

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W.; Xia, Yinglin; Tu, Xin M.

    2011-01-01

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. PMID:21671252
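    As a hedged illustration of the kind of comparison the report describes (the specific packages and procedures it evaluates are not reproduced here), the following Python sketch fits the same simulated correlated binary data with two approaches available in statsmodels: a population-averaged GEE and a variational-Bayes mixed GLMM. All variable names and sizes are illustrative.

    ```python
    # Minimal sketch (not the paper's code): fit correlated binary responses two ways
    # with statsmodels, a population-averaged GEE and a variational-Bayes mixed GLMM.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    rng = np.random.default_rng(0)
    n_subj, n_obs = 100, 5                      # illustrative sizes
    subj = np.repeat(np.arange(n_subj), n_obs)
    x = rng.normal(size=n_subj * n_obs)
    b = rng.normal(scale=1.0, size=n_subj)      # subject-level random intercepts
    eta = -0.5 + 1.0 * x + b[subj]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
    df = pd.DataFrame({"y": y, "x": x, "subj": subj})

    # Marginal (population-averaged) estimates via GEE with exchangeable correlation.
    gee = smf.gee("y ~ x", groups="subj", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(gee.params)

    # Subject-specific (conditional) estimates via a Bayesian binomial mixed GLMM.
    glmm = BinomialBayesMixedGLM.from_formula(
        "y ~ x", {"subj": "0 + C(subj)"}, df).fit_vb()
    print(glmm.summary())
    ```

    As the report emphasizes, the two sets of coefficients answer different questions (marginal versus subject-specific effects), which is one reason different procedures can disagree on the same data.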

  6. Fully-coupled analysis of jet mixing problems. Three-dimensional PNS model, SCIP3D

    NASA Technical Reports Server (NTRS)

    Wolf, D. E.; Sinha, N.; Dash, S. M.

    1988-01-01

    Numerical procedures formulated for the analysis of 3D jet mixing problems, as incorporated in the computer model, SCIP3D, are described. The overall methodology closely parallels that developed in the earlier 2D axisymmetric jet mixing model, SCIPVIS. SCIP3D integrates the 3D parabolized Navier-Stokes (PNS) jet mixing equations, cast in mapped Cartesian or cylindrical coordinates, employing the explicit MacCormack algorithm. A pressure-split variant of this algorithm is employed in subsonic regions with a sublayer approximation utilized for treating the streamwise pressure component. SCIP3D contains both the kε and kW turbulence models, and employs a two-component mixture approach to treat jet exhausts of arbitrary composition. Specialized grid procedures are used to adjust the grid growth in accordance with the growth of the jet, including a hybrid Cartesian/cylindrical grid procedure for rectangular jets which moves the hybrid coordinate origin towards the flow origin as the jet transitions from a rectangular to circular shape. Numerous calculations are presented for rectangular mixing problems, as well as for a variety of basic unit problems exhibiting the overall capabilities of SCIP3D.

  7. On fitting generalized linear mixed-effects models for binary responses using different statistical packages.

    PubMed

    Zhang, Hui; Lu, Naiji; Feng, Changyong; Thurston, Sally W; Xia, Yinglin; Zhu, Liang; Tu, Xin M

    2011-09-10

    The generalized linear mixed-effects model (GLMM) is a popular paradigm to extend models for cross-sectional data to a longitudinal setting. When applied to modeling binary responses, different software packages and even different procedures within a package may give quite different results. In this report, we describe the statistical approaches that underlie these different procedures and discuss their strengths and weaknesses when applied to fit correlated binary responses. We then illustrate these considerations by applying these procedures implemented in some popular software packages to simulated and real study data. Our simulation results indicate a lack of reliability for most of the procedures considered, which carries significant implications for applying such popular software packages in practice. Copyright © 2011 John Wiley & Sons, Ltd.

  8. Statistical methodology for the analysis of dye-switch microarray experiments

    PubMed Central

    Mary-Huard, Tristan; Aubert, Julie; Mansouri-Attia, Nadera; Sandra, Olivier; Daudin, Jean-Jacques

    2008-01-01

    Background In individually dye-balanced microarray designs, each biological sample is hybridized on two different slides, once with Cy3 and once with Cy5. While this strategy ensures an automatic correction of the gene-specific labelling bias, it also induces dependencies between log-ratio measurements that must be taken into account in the statistical analysis. Results We present two original statistical procedures for the statistical analysis of individually balanced designs. These procedures are compared with the usual ML and REML mixed model procedures proposed in most statistical toolboxes, on both simulated and real data. Conclusion The UP procedure we propose as an alternative to usual mixed model procedures is more efficient and significantly faster to compute. This result provides some useful guidelines for the analysis of complex designs. PMID:18271965

  9. The PX-EM algorithm for fast stable fitting of Henderson's mixed model

    PubMed Central

    Foulley, Jean-Louis; Van Dyk, David A

    2000-01-01

    This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case. PMID:14736399

  10. Making a mixed-model line more efficient and flexible by introducing a bypass line

    NASA Astrophysics Data System (ADS)

    Matsuura, Sho; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    This paper provides a design procedure for the bypass subline in a mixed-model assembly line. The bypass subline is installed to reduce the effect of the large difference in operation times among products assembled together in a mixed-model line. The importance of the bypass subline has been increasing in association with the rising necessity for efficiency and flexibility in modern manufacturing. The main topics of this paper are as follows: 1) the conditions in which the bypass subline effectively functions, and 2) how the load should be distributed between the main line and the bypass subline, depending on production conditions such as degree of difference in operation times among products and the mixing ratio of products. To address these issues, we analyzed the lower and the upper bounds of the line length. Based on the results, a design procedure and a numerical example are demonstrated.

  11. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    PubMed

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
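    The article itself works through SPSS's MIXED procedure; purely as an analogous sketch (not the article's syntax), the same kind of random intercept-and-slope growth model can be fit in Python with statsmodels. The file and column names below are hypothetical.

    ```python
    # Analogous sketch in Python (the article uses SPSS's MIXED procedure):
    # a random intercept-and-slope growth model for long-format longitudinal data.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format file: one row per subject per measurement occasion.
    df = pd.read_csv("longitudinal_long.csv")      # columns: id, time, outcome, group

    model = smf.mixedlm("outcome ~ time * group",  # fixed effects: growth plus group difference
                        data=df,
                        groups=df["id"],           # subjects define the clusters
                        re_formula="~time")        # random intercept and random slope for time
    result = model.fit(reml=True)
    print(result.summary())
    ```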

  12. Case mix-adjusted cost of colectomy at low-, middle-, and high-volume academic centers.

    PubMed

    Chang, Alex L; Kim, Young; Ertel, Audrey E; Hoehn, Richard S; Wima, Koffi; Abbott, Daniel E; Shah, Shimul A

    2017-05-01

    Efforts to regionalize surgery based on thresholds in procedure volume may have consequences on the cost of health care delivery. This study aims to delineate the relationship between hospital volume, case mix, and variability in the cost of operative intervention using colectomy as the model. All patients undergoing colectomy (n = 90,583) at 183 academic hospitals from 2009-2012 in The University HealthSystems Consortium Database were studied. Patient and procedure details were used to generate a case mix-adjusted predictive model of total direct costs. Observed-to-expected costs for each center were then compared across centers grouped by overall procedure volume. Patient and procedure characteristics were significantly different between volume tertiles. Observed costs at high-volume centers were less than at middle- and low-volume centers. According to our predictive model, high-volume centers cared for a less expensive case mix than middle- and low-volume centers ($12,786 vs $13,236 and $14,497, P < .01). Our predictive model accounted for 44% of the variation in costs. Overall efficiency (standardized observed-to-expected costs) was greatest at high-volume centers compared to middle- and low-volume tertiles (z score -0.16 vs 0.02 and -0.07, P < .01). Hospital costs and cost efficiency after an elective colectomy vary significantly between centers and may be attributed partially to patient differences at those centers. These data demonstrate that a significant proportion of the cost variation is due to a distinct case mix at low-volume centers, which may lead to perceived poor performance at these centers. Copyright © 2016 Elsevier Inc. All rights reserved.
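    A hedged sketch of the observed-to-expected logic described above (a simplification, not the study's predictive model): fit a case-mix model for cost, then compare each center's observed and expected mean costs. The file and column names are hypothetical.

    ```python
    # Simplified sketch of observed-to-expected (O/E) cost comparison: model expected
    # cost from patient/procedure covariates, then compute each center's O/E ratio.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("colectomy_costs.csv")  # hypothetical: cost, age, asa_class, emergent, center

    # Case-mix model on log cost (far simpler than the paper's model).
    fit = smf.ols("np.log(cost) ~ age + C(asa_class) + emergent", data=df).fit()
    df["expected_cost"] = np.exp(fit.fittedvalues)

    oe = (df.groupby("center")
            .apply(lambda g: g["cost"].mean() / g["expected_cost"].mean())
            .rename("O_to_E"))
    print(oe.sort_values())  # O/E < 1 suggests lower-than-expected cost for that case mix
    ```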

  13. A Comparison of Item Fit Statistics for Mixed IRT Models

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Dunbar, Stephen B.

    2010-01-01

    In this study we examined procedures for assessing model-data fit of item response theory (IRT) models for mixed format data. The model fit indices used in this study include PARSCALE's G², Orlando and Thissen's S-X² and S-G², and Stone's χ²* and G²*. To investigate the…

  14. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    PubMed

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  15. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    PubMed Central

    Shek, Daniel T. L.; Ma, Cecilia M. S.

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented. PMID:21218263

  16. A Generalized Deduction of the Ideal-Solution Model

    ERIC Educational Resources Information Center

    Leo, Teresa J.; Perez-del-Notario, Pedro; Raso, Miguel A.

    2006-01-01

    A new general procedure for deriving the Gibbs energy of mixing is developed through general thermodynamic considerations, and the ideal-solution model is obtained as a particular case of the general one. The deduction of the Gibbs energy of mixing for the ideal-solution model is rational and considered suitable for advanced students who…
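    For reference, the standard ideal-solution result the entry refers to can be evaluated directly. The sketch below uses the textbook formulas ΔG_mix = RT Σ x_i ln x_i and ΔS_mix = -R Σ x_i ln x_i for a binary mixture; the composition chosen is arbitrary.

    ```python
    # Textbook ideal-solution mixing functions for a binary mixture.
    import numpy as np

    R = 8.314                 # gas constant, J/(mol K)
    T = 298.15                # temperature, K
    x = np.array([0.3, 0.7])  # mole fractions (must sum to 1)

    dG_mix = R * T * np.sum(x * np.log(x))  # J/mol, negative: ideal mixing is spontaneous
    dS_mix = -R * np.sum(x * np.log(x))     # J/(mol K)
    print(f"dG_mix = {dG_mix:.1f} J/mol, dS_mix = {dS_mix:.2f} J/(mol K)")
    ```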

  17. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the method's low sensitivity to parameter variability makes it an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
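    A minimal sketch of the SCS-CN step that CN4GA builds on, using the standard curve-number runoff equation in millimetre units (textbook SCS-CN, not the authors' code); the closing comment indicates where the Green-Ampt calibration described above would enter.

    ```python
    # Standard SCS-CN runoff equation (mm): Q = (P - Ia)^2 / (P - Ia + S) for P > Ia,
    # with S = 25400/CN - 254 and Ia = 0.2 * S.
    def scs_cn_runoff(P_mm, CN, ia_ratio=0.2):
        S = 25400.0 / CN - 254.0   # potential maximum retention (mm)
        Ia = ia_ratio * S          # initial abstraction (mm)
        if P_mm <= Ia:
            return 0.0
        return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

    # Example: a 60 mm storm on a catchment with CN = 75.
    Q = scs_cn_runoff(60.0, 75)
    print(f"Total net rainfall (runoff depth): {Q:.1f} mm")
    # CN4GA, as described above, would then tune the Green-Ampt hydraulic conductivity
    # so that the GA infiltration model reproduces this event total over the storm.
    ```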

  18. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite dimensional (fixed effects and variance components) and infinite dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed by iteratively alternating parametric and nonparametric procedures. However, if one can make the assumption that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application on real data are performed. © 2015, The International Biometric Society.

  19. Testing of a Shrouded, Short Mixing Stack Gas Eductor Model Using High Temperature Primary Flow.

    DTIC Science & Technology

    1982-10-01

    problem, but of less significance than the heated surfaces of shipboard structure. Various types of electronic equipment and sensors carried by a combatant...here was to validate current procedures by comparison with previous data; it was not considered essential to reinstall these sensors or duplicate... [remainder is scanned-table residue: Table XIX, Mixing Stack Temperature Data, Model B]

  20. What Do You Think You Are Measuring? A Mixed-Methods Procedure for Assessing the Content Validity of Test Items and Theory-Based Scaling

    PubMed Central

    Koller, Ingrid; Levenson, Michael R.; Glück, Judith

    2017-01-01

    The valid measurement of latent constructs is crucial for psychological research. Here, we present a mixed-methods procedure for improving the precision of construct definitions, determining the content validity of items, evaluating the representativeness of items for the target construct, generating test items, and analyzing items on a theoretical basis. To illustrate the mixed-methods content-scaling-structure (CSS) procedure, we analyze the Adult Self-Transcendence Inventory, a self-report measure of wisdom (ASTI, Levenson et al., 2005). A content-validity analysis of the ASTI items was used as the basis of psychometric analyses using multidimensional item response models (N = 1215). We found that the new procedure produced important suggestions concerning five subdimensions of the ASTI that were not identifiable using exploratory methods. The study shows that the application of the suggested procedure leads to a deeper understanding of latent constructs. It also demonstrates the advantages of theory-based item analysis. PMID:28270777

  1. 40 CFR Appendix E to Part 63 - Monitoring Procedure for Nonthoroughly Mixed Open Biological Treatment Systems at Kraft Pulp...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... II. Definitions Biological treatment unit = wastewater treatment unit designed and operated to... last zone in the series and ending with the first zone. B. Data Collection Requirements This method is based upon modeling the nonthoroughly mixed open biological treatment unit as a series of well-mixed...

  2. Optimizing ACS NSQIP modeling for evaluation of surgical quality and risk: patient risk adjustment, procedure mix adjustment, shrinkage adjustment, and surgical focus.

    PubMed

    Cohen, Mark E; Ko, Clifford Y; Bilimoria, Karl Y; Zhou, Lynn; Huffman, Kristopher; Wang, Xue; Liu, Yaoming; Kraemer, Kari; Meng, Xiangju; Merkow, Ryan; Chow, Warren; Matel, Brian; Richards, Karen; Hart, Amy J; Dimick, Justin B; Hall, Bruce L

    2013-08-01

    The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
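    A deliberately simplified illustration of the shrinkage idea mentioned above, not the ACS NSQIP hierarchical model itself: each hospital's observed-to-expected ratio is pulled toward the null value in proportion to a crude reliability weight. All numbers, including the between-hospital variance scale tau, are assumptions.

    ```python
    # Toy shrinkage adjustment: small-volume hospitals' O/E ratios shrink toward 1.
    import numpy as np

    observed = np.array([4, 1, 30])         # events at three hypothetical hospitals
    expected = np.array([2.5, 2.0, 28.0])   # case-mix-adjusted expected events
    volume   = np.array([40, 35, 900])      # hospital caseloads

    oe = observed / expected
    tau = 0.05                               # assumed between-hospital variance scale
    within_var = 1.0 / expected              # rough sampling variance of an O/E ratio
    weight = tau / (tau + within_var)        # reliability: higher for larger hospitals
    shrunk = weight * oe + (1.0 - weight)    # shrink toward the null value O/E = 1
    for v, raw, s in zip(volume, oe, shrunk):
        print(f"volume={v:4d}  raw O/E={raw:.2f}  shrunk O/E={s:.2f}")
    ```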

  3. Flood analysis in mixed-urban areas reflecting interactions with the complete water cycle through coupled hydrologic-hydraulic modelling.

    PubMed

    Sto Domingo, N D; Refsgaard, A; Mark, O; Paludan, B

    2010-01-01

    The potential devastating effects of urban flooding have given high importance to thorough understanding and management of water movement within catchments, and computer modelling tools have found widespread use for this purpose. The state-of-the-art in urban flood modelling is the use of a coupled 1D pipe and 2D overland flow model to simultaneously represent pipe and surface flows. This method has been found to be accurate for highly paved areas, but inappropriate when land hydrology is important. The objectives of this study are to introduce a new urban flood modelling procedure that is able to reflect system interactions with hydrology, verify that the new procedure operates well, and underline the importance of considering the complete water cycle in urban flood analysis. A physically-based and distributed hydrological model was linked to a drainage network model for urban flood analysis, and the essential components and concepts used were described in this study. The procedure was then applied to a catchment previously modelled with the traditional 1D-2D procedure to determine if the new method performs similarly well. Then, results from applying the new method in a mixed-urban area were analyzed to determine how important hydrologic contributions are to flooding in the area.

  4. Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI

    PubMed Central

    Farooq, Hamza; Xu, Junqian; Nam, Jung Who; Keefe, Daniel F.; Yacoub, Essa; Georgiou, Tryphon; Lenglet, Christophe

    2016-01-01

    Diffusion MRI (dMRI) reveals microstructural features of the brain white matter by quantifying the anisotropic diffusion of water molecules within axonal bundles. Yet, identifying features such as axonal orientation dispersion, density, diameter, etc., in complex white matter fiber configurations (e.g. crossings) has proved challenging. Besides optimized data acquisition and advanced biophysical models, computational procedures to fit such models to the data are critical. However, these procedures have been largely overlooked by the dMRI microstructure community and new, more versatile, approaches are needed to solve complex biophysical model fitting problems. Existing methods are limited to models assuming single fiber orientation, relevant to limited brain areas like the corpus callosum, or multiple orientations but without the ability to extract detailed microstructural features. Here, we introduce a new and versatile optimization technique (MIX), which enables microstructure imaging of crossing white matter fibers. We provide a MATLAB implementation of MIX, and demonstrate its applicability to general microstructure models in fiber crossings using synthetic as well as ex-vivo and in-vivo brain data. PMID:27982056

  5. Costing complexities in mixed apheresis.

    PubMed

    Trenchard, P M

    1993-07-01

    For mixed apheresis procedures {plasma (PMA) and platelets (PLTs) as products}, six cost-accounting methods are described for apportioning the unit procedure cost ($156.02; representative example) to the two products. The methods are derived from clinical/scientific apheresis principles, but provide a wide range of unit PLT costs ($14.10, 19.71, 34.37, 42.06, 43.82 and 52.00) which relate inversely to the corresponding unit PMA costs ($73.84, 63.01, 34.36, 19.37, 15.94 and 0.00). Two of the methods appear particularly appropriate, depending upon whether the procedure is driven by PLTs predominantly or by PMA+PLTs equally. The paper encourages apheresis physicians and scientists to debate the relative attributes of the methods, develop refinements of the same, and determine through dialogue that mixed apheresis costing models properly account for the clinical science of the service provided.

  6. The computation of standard solar models

    NASA Technical Reports Server (NTRS)

    Ulrich, Roger K.; Cox, Arthur N.

    1991-01-01

    Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.

  7. The combined use of Green-Ampt model and Curve Number method as an empirical tool for loss estimation

    NASA Astrophysics Data System (ADS)

    Petroselli, A.; Grimaldi, S.; Romano, N.

    2012-12-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model widely used to estimate losses and direct runoff from a given rainfall event, but its use is not appropriate at sub-daily time resolution. To overcome this drawback, a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), was recently developed including the Green-Ampt (GA) infiltration model and aiming to distribute in time the information provided by the SCS-CN method. The main concept of the proposed mixed procedure is to use the initial abstraction and the total volume given by the SCS-CN method to calibrate the Green-Ampt soil hydraulic conductivity parameter. The procedure is here applied to a real case study and a sensitivity analysis concerning the remaining parameters is presented; results show that the CN4GA approach is an ideal candidate for rainfall excess analysis at sub-daily time resolution, in particular for ungauged basins lacking discharge observations.

  8. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils, i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) was analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the known initial mixing percentages. Given the experimental nature of the work and dry mixing of materials, geochemically conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K) derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated values that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials, assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size effects are consistent across sources and tracer properties, which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.
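    A hedged sketch of the constrained unmixing step such experiments test: source proportions are estimated by minimizing the mismatch between the mixture's tracer concentrations and a proportion-weighted average of the source concentrations, with proportions nonnegative and summing to one. The tracer values below are hypothetical, not the study's data.

    ```python
    # Constrained least-squares unmixing of tracer concentrations.
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical concentrations: 3 sources x 5 tracers, plus one mixture sample.
    sources = np.array([[12.0, 3.1, 450.0, 0.80, 22.0],
                        [ 8.5, 4.4, 610.0, 1.20, 18.0],
                        [15.2, 2.0, 380.0, 0.50, 30.0]])
    mixture = np.array([11.6, 3.2, 480.0, 0.85, 23.0])

    def objective(p):
        predicted = p @ sources                                 # mixture implied by proportions p
        return np.sum(((mixture - predicted) / mixture) ** 2)   # relative squared error

    n = sources.shape[0]
    res = minimize(objective, x0=np.full(n, 1.0 / n),
                   bounds=[(0.0, 1.0)] * n,
                   constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
                   method="SLSQP")
    print("estimated source proportions:", np.round(res.x, 3))
    ```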

  9. A GUIDE TO AERATION/CIRCULATION TECHNIQUES FOR ...

    EPA Pesticide Factsheets

    The application of aeration/circulation techniques to lakes is reviewed from a theoretical and practical viewpoint. The effect of destratification on algal production is related to the mixed depth with the use of a mathematical model. Procedures are given to determine the air required to mix lakes of different sizes and shapes. It was found that approximately 30 scfm of air per 1,000,000 sq ft of lake surface area can be used. Hypolimnetic aeration systems that have been used are described in detail. Procedures for design are given.
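    Applying the rule of thumb quoted above (about 30 scfm of air per 1,000,000 sq ft of lake surface) to a hypothetical lake:

    ```python
    # Rule-of-thumb air requirement for destratification (hypothetical 120-acre lake).
    lake_area_acres = 120.0
    lake_area_sqft = lake_area_acres * 43_560        # 1 acre = 43,560 sq ft
    air_scfm = 30.0 * lake_area_sqft / 1_000_000
    print(f"Approximate air requirement: {air_scfm:.0f} scfm")   # roughly 157 scfm
    ```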

  10. Twice random, once mixed: applying mixed models to simultaneously analyze random effects of language and participants.

    PubMed

    Janssen, Dirk P

    2012-03-01

    Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed models analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the DJMIXED add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
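    The article's DJMIXED add-on targets SPSS; purely as an analogous sketch (and one whose details may vary with the statsmodels version), crossed participant and item random effects can be expressed in Python as variance components within a single all-encompassing group. The data file and column names are hypothetical.

    ```python
    # Analogous Python sketch (not the article's SPSS setup): crossed random effects
    # for participants and items via variance components in one overall group.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("rt_data.csv")   # hypothetical columns: rt, condition, participant, item
    df["all"] = 1                     # a single group containing every observation

    vc = {"participant": "0 + C(participant)",   # random intercept per participant
          "item": "0 + C(item)"}                 # random intercept per item (crossed)
    model = smf.mixedlm("rt ~ condition", data=df, groups="all",
                        vc_formula=vc, re_formula="0")   # "0": no extra group-level intercept
    print(model.fit(reml=True).summary())
    ```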

  11. Evaluation and improvement of micro-surfacing mix design method and modelling of asphalt emulsion mastic in terms of filler-emulsion interaction

    NASA Astrophysics Data System (ADS)

    Robati, Masoud

    This doctorate program focuses on evaluating and improving the rutting resistance of micro-surfacing mixtures. There are many problems related to the rutting resistance of micro-surfacing mixtures that still require further research. The main objective of this Ph.D. program is to experimentally and analytically study and improve the rutting resistance of micro-surfacing mixtures. During this Ph.D. program, the major aspects related to the rutting resistance of micro-surfacing mixtures are investigated and presented as follows: 1) evaluation of a modification of current micro-surfacing mix design procedures: On the basis of this effort, a new mix design procedure is proposed for type III micro-surfacing mixtures as rut-fill materials on the road surface. Unlike the current mix design guidelines and specification, the new mix design is capable of selecting the optimum mix proportions for micro-surfacing mixtures; 2) evaluation of test methods and selection of aggregate grading for type III application of micro-surfacing: Within the terms of this study, a new specification for selection of aggregate grading for type III application of micro-surfacing is proposed; 3) evaluation of repeatability and reproducibility of micro-surfacing mixture design tests: In this study, limits for repeatability and reproducibility of micro-surfacing mix design tests are presented; 4) a new conceptual model for the filler stiffening effect on asphalt mastic of micro-surfacing: A new model is proposed, which is able to establish limits for minimum and maximum filler concentrations in the micro-surfacing mixture based only on the filler's important physical and chemical properties; 5) incorporation of reclaimed asphalt pavement and post-fabrication asphalt shingles in micro-surfacing mixture: The effectiveness of the newly developed mix design procedure for micro-surfacing mixtures is further validated using recycled materials. The results present the limits for the use of RAP and RAS amounts in micro-surfacing mixtures; 6) new colored micro-surfacing formulations with improved durability and performance: A significant improvement of around 45% in rutting resistance of colored and conventional micro-surfacing mixtures is achieved through employing low penetration grade bitumen polymer modified asphalt emulsion stabilized using nanoparticles.

  12. Functional Mixed Effects Model for Small Area Estimation.

    PubMed

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability to handle high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  13. Modeling of dough mixing profile under thermal and nonthermal constraints for evaluation of breadmaking quality of Hard Spring Wheat flour

    USDA-ARS?s Scientific Manuscript database

    This research was initiated to investigate the association between flour breadmaking traits and mixing characteristics and empirical dough rheological property under thermal stress. Flour samples from 30 hard spring wheat were analyzed by a mixolab standard procedure at optimum water absorptions. Mi...

  14. Proposing a Comprehensive Model for Identifying Teaching Candidates

    ERIC Educational Resources Information Center

    Bowles, Terry; Hattie, John; Dinham, Stephen; Scull, Janet; Clinton, Janet

    2014-01-01

    Teacher education in universities continues to diversify in the twenty-first century. Just as course offerings, course delivery, staffing and the teaching/research mix vary extensively from university to university, so does the procedure for pre-service teacher selection. Various factors bear on selection procedures and practices; however, few…

  15. Comparative Robustness of Recent Methods for Analyzing Multivariate Repeated Measures Designs

    ERIC Educational Resources Information Center

    Seco, Guillermo Vallejo; Gras, Jaime Arnau; Garcia, Manuel Ato

    2007-01-01

    This study evaluated the robustness of two recent methods for analyzing multivariate repeated measures when the assumptions of covariance homogeneity and multivariate normality are violated. Specifically, the authors' work compares the performance of the modified Brown-Forsythe (MBF) procedure and the mixed-model procedure adjusted by the…

  16. Fully-coupled analysis of jet mixing problems. Part 1. Shock-capturing model, SCIPVIS

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Wolf, D. E.

    1984-01-01

    A computational model, SCIPVIS, is described which predicts the multiple cell shock structure in imperfectly expanded, turbulent, axisymmetric jets. The model spatially integrates the parabolized Navier-Stokes jet mixing equations using a shock-capturing approach in supersonic flow regions and a pressure-split approximation in subsonic flow regions. The regions are coupled using a viscous-characteristic procedure. Turbulence processes are represented via the solution of compressibility-corrected two-equation turbulence models. The formation of Mach discs in the jet and the interactive analysis of the wake-like mixing process occurring behind Mach discs are handled in a rigorous manner. Calculations are presented exhibiting the fundamental interactive processes occurring in supersonic jets and the model is assessed via comparisons with detailed laboratory data for a variety of under- and overexpanded jets.

  17. Microalgal biohydrogen production considering light energy and mixing time as the two key features for scale-up.

    PubMed

    Oncel, S; Sabankay, M

    2012-10-01

    This study focuses on a scale-up procedure considering two vital parameters, light energy and mixing, for microalgae cultivation, taking Chlamydomonas reinhardtii as the model microorganism. Applying a two-stage hydrogen production protocol to 1 L flat-type and 2.5 L tank-type photobioreactors, hydrogen production was investigated with constant light energy and mixing time. The conditions that provide the shortest transfer time to anaerobic culture (light energy: 2.96 kJ s(-1)m(-3); mixing time: 1 min) and the highest hydrogen production rate (light energy: 1.22 kJ s(-1)m(-3); mixing time: 2.5 min) are applied to the 5 L photobioreactor. The final hydrogen production for the 5 L system after 192 h was measured as 195 ± 10 mL, which is comparable with the other systems and provides a good validation of the scale-up procedure. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Comparison of Two Procedures for Analyzing Small Sets of Repeated Measures Data

    ERIC Educational Resources Information Center

    Vallejo, Guillermo; Livacic-Rojas, Pablo

    2005-01-01

    This article compares two methods for analyzing small sets of repeated measures data under normal and non-normal heteroscedastic conditions: a mixed model approach with the Kenward-Roger correction and a multivariate extension of the modified Brown-Forsythe (BF) test. These procedures differ in their assumptions about the covariance structure of…

  19. 40 CFR 246.201-3 - Recommended procedures: Glass, can, and mixed paper separation.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., and mixed paper separation. 246.201-3 Section 246.201-3 Protection of Environment ENVIRONMENTAL... and Recommended Procedures § 246.201-3 Recommended procedures: Glass, can, and mixed paper separation. In areas where markets are available, it is recommended that glass, cans, and mixed paper be...

  20. 40 CFR 246.201-3 - Recommended procedures: Glass, can, and mixed paper separation.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., and mixed paper separation. 246.201-3 Section 246.201-3 Protection of Environment ENVIRONMENTAL... and Recommended Procedures § 246.201-3 Recommended procedures: Glass, can, and mixed paper separation. In areas where markets are available, it is recommended that glass, cans, and mixed paper be...

  1. A TWO-STATE MIXED HIDDEN MARKOV MODEL FOR RISKY TEENAGE DRIVING BEHAVIOR

    PubMed Central

    Jackson, John C.; Albert, Paul S.; Zhang, Zhiwei

    2016-01-01

    This paper proposes a joint model for longitudinal binary and count outcomes. We apply the model to a unique longitudinal study of teen driving where risky driving behavior and the occurrence of crashes or near crashes are measured prospectively over the first 18 months of licensure. Of scientific interest is relating the two processes and predicting crash and near crash outcomes. We propose a two-state mixed hidden Markov model whereby the hidden state characterizes the mean for the joint longitudinal crash/near crash outcomes and elevated g-force events which are a proxy for risky driving. Heterogeneity is introduced in both the conditional model for the count outcomes and the hidden process using a shared random effect. An estimation procedure is presented using the forward–backward algorithm along with adaptive Gaussian quadrature to perform numerical integration. The estimation procedure readily yields hidden state probabilities as well as providing for a broad class of predictors. PMID:27766124
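    A minimal sketch of the forward recursion at the core of such a model, here for a plain two-state hidden Markov model with Poisson count emissions (a stand-in that omits the paper's random effects and joint binary outcome); all parameter values are illustrative.

    ```python
    # Scaled forward algorithm for a two-state Poisson hidden Markov model.
    import numpy as np
    from scipy.stats import poisson

    def forward_loglik(counts, pi, A, lam):
        """Log-likelihood of a count sequence under a 2-state Poisson HMM.

        pi  : initial state probabilities, shape (2,)
        A   : state transition matrix, shape (2, 2), rows sum to 1
        lam : Poisson mean of the count outcome in each hidden state, shape (2,)
        """
        alpha = pi * poisson.pmf(counts[0], lam)   # forward probabilities at t = 0
        loglik = 0.0
        for y in counts[1:]:
            c = alpha.sum()                        # scale to avoid numerical underflow
            loglik += np.log(c)
            alpha = (alpha / c) @ A * poisson.pmf(y, lam)
        return loglik + np.log(alpha.sum())

    # "Low-risk" state emits few events per period; "high-risk" state emits more.
    counts = np.array([0, 1, 0, 3, 4, 2, 0, 0, 1, 5])
    pi = np.array([0.8, 0.2])
    A = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
    lam = np.array([0.4, 3.0])
    print("log-likelihood:", forward_loglik(counts, pi, A, lam))
    ```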

  2. 40 CFR 246.201-3 - Recommended procedures: Glass, can, and mixed paper separation.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... mixed paper separation. 246.201-3 Section 246.201-3 Protection of Environment ENVIRONMENTAL PROTECTION... Recommended Procedures § 246.201-3 Recommended procedures: Glass, can, and mixed paper separation. In areas where markets are available, it is recommended that glass, cans, and mixed paper be separated at the...

  3. Solubility of gases and liquids in glassy polymers.

    PubMed

    De Angelis, Maria Grazia; Sarti, Giulio C

    2011-01-01

    This review discusses a macroscopic thermodynamic procedure to calculate the solubility of gases, vapors, and liquids in glassy polymers that is based on the general procedure provided by the nonequilibrium thermodynamics for glassy polymers (NET-GP) method. Several examples are presented using various nonequilibrium (NE) models including lattice fluid (NELF), statistical associating fluid theory (NE-SAFT), and perturbed hard sphere chain (NE-PHSC). Particular applications illustrate the calculation of infinite-dilution solubility coefficients in different glassy polymers and the prediction of solubility isotherms for different gases and vapors in pure polymers as well as in polymer blends. The determination of model parameters is discussed, and the predictive abilities of the models are illustrated. Attention is also given to the solubility of gas mixtures and solubility isotherms in nanocomposite mixed matrices. The fractional free volume determined from solubility data can be used to correlate solute diffusivities in mixed matrices.

  4. 40 CFR 246.201-3 - Recommended procedures: Glass, can, and mixed paper separation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Recommended procedures: Glass, can... and Recommended Procedures § 246.201-3 Recommended procedures: Glass, can, and mixed paper separation. In areas where markets are available, it is recommended that glass, cans, and mixed paper be...

  5. 40 CFR 246.201-3 - Recommended procedures: Glass, can, and mixed paper separation.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Recommended procedures: Glass, can... and Recommended Procedures § 246.201-3 Recommended procedures: Glass, can, and mixed paper separation. In areas where markets are available, it is recommended that glass, cans, and mixed paper be...

  6. Mixing Interviews and Rasch Modeling: Demonstrating a Procedure Used to Develop an Instrument That Measures Trust

    ERIC Educational Resources Information Center

    David, Shannon L.; Hitchcock, John H.; Ragan, Brian; Brooks, Gordon; Starkey, Chad

    2018-01-01

    Developing psychometrically sound instruments can be difficult, especially if little is known about the constructs of interest. When constructs of interest are unclear, a mixed methods approach can be useful. Qualitative inquiry can be used to explore a construct's meaning in a way that informs item writing and allows the strengths of one analysis…

  7. Relevance of the c-statistic when evaluating risk-adjustment models in surgery.

    PubMed

    Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y

    2012-05-01

    The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions, as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
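    A hedged sketch of the two checks contrasted above, computed on simulated data: the c-statistic (ROC AUC) and a decile-based comparison of predicted versus observed events. The covariates and coefficients are invented for illustration.

    ```python
    # c-statistic (ROC AUC) plus a decile-based calibration table on simulated data.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 4))                     # hypothetical risk factors
    p_true = 1.0 / (1.0 + np.exp(-(-3.0 + X @ np.array([0.8, 0.5, 0.3, 0.1]))))
    y = rng.binomial(1, p_true)                        # simulated mortality outcomes

    pred = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    print("c-statistic:", round(roc_auc_score(y, pred), 3))

    # Calibration: compare predicted and observed event counts within risk deciles.
    cal = pd.DataFrame({"y": y, "pred": pred})
    cal["decile"] = pd.qcut(cal["pred"], 10, labels=False, duplicates="drop")
    print(cal.groupby("decile").agg(predicted=("pred", "sum"), observed=("y", "sum")))
    # Close agreement across deciles indicates good calibration even when the
    # c-statistic is modest in a homogeneous cohort.
    ```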

  8. An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.

    2013-01-01

    Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included PARSCALE's G²,…

  9. STOL Traffic environment and operational procedures

    NASA Technical Reports Server (NTRS)

    Schlundt, R. W.; Dewolf, R. W.; Ausrotas, R. A.; Curry, R. E.; Demaio, D.; Keene, D. W.; Speyer, J. L.; Weinreich, M.; Zeldin, S.

    1972-01-01

    The expected traffic environment for an intercity STOL transportation system is examined, and operational procedures are discussed in order to identify problem areas which impact STOL avionics requirements. Factors considered include: traffic densities, STOL/CTOL/VTOL traffic mix, the expected ATC environment, aircraft noise models, community noise models and community noise impact, flight paths for noise abatement, wind considerations affecting landing, approach and landing considerations, STOLport site selection, runway capacity, and STOL operations at jetports, suburban airports, and separate STOLports.

  10. Real longitudinal data analysis for real people: building a good enough mixed model.

    PubMed

    Cheng, Jing; Edwards, Lloyd J; Maldonado-Molina, Mildred M; Komro, Kelli A; Muller, Keith E

    2010-02-20

    Mixed effects models have become very popular, especially for the analysis of longitudinal data. One challenge is how to build a good enough mixed effects model. In this paper, we suggest a systematic strategy for addressing this challenge and introduce easily implemented practical advice to build mixed effects models. A general discussion of the scientific strategies motivates the recommended five-step procedure for model fitting. The need to model both the mean structure (the fixed effects) and the covariance structure (the random effects and residual error) creates the fundamental flexibility and complexity. Some very practical recommendations help to conquer the complexity. Centering, scaling, and full-rank coding of all the predictor variables radically improve the chances of convergence, computing speed, and numerical accuracy. Applying computational and assumption diagnostics from univariate linear models to mixed model data greatly helps to detect and solve the related computational problems. The approach helps to fit more general covariance models, a crucial step in selecting a credible covariance model needed for defensible inference. A detailed demonstration of the recommended strategy is based on data from a published study of a randomized trial of a multicomponent intervention to prevent young adolescents' alcohol use. The discussion highlights a need for additional covariance and inference tools for mixed models. The discussion also highlights the need for improving how scientists and statisticians teach and review the process of finding a good enough mixed model. (c) 2009 John Wiley & Sons, Ltd.
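    A minimal sketch of the preprocessing step recommended above, centering and scaling the predictors (with full-rank factor coding) before fitting the mixed model; the data file and column names are hypothetical and the model is far simpler than the trial analysis the paper describes.

    ```python
    # Center and scale continuous predictors, use reference-coded factors, then fit.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("alcohol_trial.csv")   # hypothetical: score, age, wave, school, arm

    for col in ["age", "wave"]:             # centering/scaling aids convergence and accuracy
        df[col + "_z"] = (df[col] - df[col].mean()) / df[col].std()

    # C(arm) applies full-rank treatment (reference) coding to the intervention factor.
    model = smf.mixedlm("score ~ wave_z * C(arm) + age_z",
                        data=df, groups=df["school"], re_formula="~wave_z")
    print(model.fit(reml=True).summary())
    ```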

  11. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

    This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.

  12. Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies.

    PubMed

    Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre

    2018-03-15

    Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because durations are usually nonnegative and skew-distributed. Therefore, we recommend use of a statistical methodology specific to duration data. We propose a methodology based on Cox mixed models and written under the R language. This semiparametric model is indeed flexible enough to fit duration data. To compare log-linear and Cox mixed models in terms of goodness-of-fit on real data sets, we also provide a procedure based on simulations and quantile-quantile plots. We present two examples from a data set of speech and gesture interactions, which illustrate the limitations of linear and log-linear mixed models, as compared to Cox models. The linear models are not validated on our data, whereas Cox models are. Moreover, in the second example, the Cox model exhibits a significant effect that the linear model does not. We provide methods to select the best-fitting models for repeated duration data and to compare statistical methodologies. In this study, we show that Cox models are best suited to the analysis of our data set.

  13. A rational approach to the use of Prandtl's mixing length model in free turbulent shear flow calculations

    NASA Technical Reports Server (NTRS)

    Rudy, D. H.; Bushnell, D. M.

    1973-01-01

    Prandtl's basic mixing length model was used to compute 22 test cases of free turbulent shear flows. The calculations employed appropriate algebraic length scale equations and single values of the mixing-length constant for planar and axisymmetric flows, respectively. Good agreement with data was obtained except for flows, such as supersonic free shear layers, where large sustained sensitivity changes occur. The inability to predict the more gradual mixing in these flows is tentatively ascribed to the presence of a significant turbulence-induced transverse static pressure gradient which is neglected in conventional solution procedures. Some type of equation for length scale development was found to be necessary for successful computation of highly nonsimilar flow regions such as jet or wake development from thick wall flows.
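
    For reference, the standard form of the Prandtl mixing-length model used in such calculations (this is the textbook expression with generic symbols, not the report's notation) is

      \nu_t \;=\; \ell^{2}\left|\frac{\partial \bar{u}}{\partial y}\right|,
      \qquad \ell \;=\; C\,\delta(x),

    where \delta(x) is the local shear-layer width supplied by an algebraic length-scale relation and C is the mixing-length constant, one value for planar and another for axisymmetric flows.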

  14. Mixed-order phase transition in a minimal, diffusion-based spin model.

    PubMed

    Fronczak, Agata; Fronczak, Piotr

    2016-07-01

    In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.

  15. Entropy of mixing calculations for compound forming liquid alloys in the hard sphere system

    NASA Astrophysics Data System (ADS)

    Singh, P.; Khanna, K. N.

    1984-06-01

    It is shown that the semi-empirical model proposed in a previous paper for the evaluation of the entropy of mixing of simple liquid metal alloys leads to accurate results for compound-forming liquid alloys. The procedure is similar to that described for a regular solution. Numerical applications are made to NaGa, KPb and KTl alloys.
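
    As context (not a restatement of the paper's semi-empirical expression), the ideal configurational term on which any such entropy-of-mixing treatment builds is

      \Delta S_{\mathrm{ideal}} \;=\; -R\left[\,c\ln c + (1-c)\ln(1-c)\,\right],

    where c is the concentration of one component; the hard-sphere packing and compound-forming contributions of the model are corrections added to this term.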

  16. Investigation of gaseous propellant combustion and associated injector/chamber design guidelines

    NASA Technical Reports Server (NTRS)

    Calhoon, D. F.; Ito, J. I.; Kors, D. L.

    1973-01-01

    Injector design criteria are provided for gaseous hydrogen-gaseous oxygen propellants. Design equations and procedures are presented which allow an injector-chamber designer to make a priori estimates of the performance, compatibility and stability characteristics of prototype injectors. The effects of chamber length, element geometry, thrust per element, mixture ratio, impingement angle, and element spacing were evaluated for four element concepts and their derivatives. The data from this series of tests were reduced to a single-valued mixing function that describes the mixing potential of the various elements. Performance, heat transfer and stability data were generated for various mixture ratios, propellant temperatures, chamber pressures, contraction ratios, and chamber lengths. Application of the models resulted in design procedures whereby the performance and chamber heat flux can be calculated directly, and the injector stability estimated in conjunction with existing models.

  17. Onset of turbulence in accelerated high-Reynolds-number flow

    NASA Astrophysics Data System (ADS)

    Zhou, Ye; Robey, Harry F.; Buckingham, Alfred C.

    2003-05-01

    A new criterion, flow drive time, is identified here as a necessary condition for transition to turbulence in accelerated, unsteady flows. Compressible, high-Reynolds-number flows initiated, for example, in shock tubes, supersonic wind tunnels with practical limitations on dimensions or reservoir capacity, and high energy density pulsed laser target vaporization experimental facilities may not provide flow duration adequate for turbulence development. In addition, for critical periods of the overall flow development, the driving background flow is often unsteady in the experiments as well as in the physical flow situations they are designed to mimic. In these situations transition to fully developed turbulence may not be realized despite achievement of flow Reynolds numbers associated with or exceeding stationary flow transitional criteria. Basically our transitional criterion and prediction procedure extends to accelerated, unsteady background flow situations the remarkably universal mixing transition criterion proposed by Dimotakis [P. E. Dimotakis, J. Fluid Mech. 409, 69 (2000)] for stationary flows. This provides a basis for the requisite space and time scaling. The emphasis here is placed on variable density flow instabilities initiated by constant acceleration Rayleigh-Taylor instability (RTI) or impulsive (shock) acceleration Richtmyer-Meshkov instability (RMI) or combinations of both. The significant influences of compressibility on these developing transitional flows are discussed with their implications on the procedural model development. A fresh perspective for predictive modeling and design of experiments for the instability growth and turbulent mixing transitional interval is provided using an analogy between the well-established buoyancy-drag model with applications of a hierarchy of single point turbulent transport closure models. Experimental comparisons with the procedural results are presented where use is made of three distinctly different types of acceleration driven instability experiments: (1) classical, relatively low speed, constant acceleration RTI experiments; (2) shock tube, shockwave driven RMI flow mixing experiments; (3) laser target vaporization RTI and RMI mixing experiments driven at very high energy density. These last named experiments are of special interest as they provide scaleable flow conditions simulating those of astrophysical magnitude such as shock-driven hydrodynamic mixing in supernova evolution research.
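
    For reference, the stationary-flow mixing transition of Dimotakis cited above is usually quoted as an outer-scale Reynolds number threshold of roughly one to two times 10^4,

      Re \;=\; \frac{\delta\,\Delta U}{\nu} \;\gtrsim\; 10^{4},

    with \delta the local outer scale of the shear layer and \Delta U the velocity difference; the contribution of this paper is the additional requirement that the accelerated flow be driven long enough for that state to be reached (the threshold value is the commonly cited one, not a number restated from the paper).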

  18. Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders

    NASA Astrophysics Data System (ADS)

    Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei

    2018-03-01

    A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, which can be further used for process optimization simulations.

  19. A Hybrid Numerical Method for Turbulent Mixing Layers. Degree awarded by Case Western Reserve Univ.

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.

    2001-01-01

    A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The method is intended for configurations in which a dominant structural feature provides an unsteady mechanism to drive the turbulent development in the mixing layer. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS-LES method on stretched, non-Cartesian grids. Closure for the RANS equations was obtained using the Cebeci-Smith algebraic turbulence model in conjunction with the wall-function approach of Ota and Goldberg. The wall-function approach enabled a continuous computational grid from the RANS regions to the LES region. The LES equations were closed using the Smagorinsky subgrid scale model. The hybrid RANS-LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Vortex shedding from the base region of a splitter plate separating the upstream flows was observed to eventually transition to turbulence. The location of the transition, however, was much further downstream than indicated by experiments. Actual LES calculations, performed in three spatial directions, also indicated vortex shedding, but the transition to turbulence was found to occur much closer to the beginning of the mixing section, which is in agreement with experimental observations. These calculations demonstrated that LES simulations must be performed in three dimensions. Comparisons of time-averaged axial velocities and turbulence intensities indicated reasonable agreement with experimental data.
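
    The Smagorinsky subgrid-scale closure mentioned above has the standard form (generic notation; the filter width and constant used in the thesis are not restated here)

      \nu_{\mathrm{sgs}} \;=\; (C_s\,\Delta)^{2}\,|\bar{S}|,
      \qquad |\bar{S}| \;=\; \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},

    where \Delta is the filter width, C_s the Smagorinsky constant, and \bar{S}_{ij} the resolved strain-rate tensor.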

  20. The use of "mixing" procedure of mixed methods in health services research.

    PubMed

    Zhang, Wanqing; Creswell, John

    2013-08-01

    Mixed methods research has emerged alongside qualitative and quantitative approaches as an important tool for health services researchers. Despite growing interest among health services researchers in using mixed methods designs, little has been done to identify the procedural aspects of doing so. Our aim was to describe how mixed methods researchers mix the qualitative and quantitative aspects of their studies in health services research. We searched PubMed for articles using mixed methods in health services research published between January 1, 2006 and December 30, 2010. We identified and reviewed 30 published health services research articles on studies in which mixed methods had been used. We selected 3 articles as illustrations to help health services researchers conceptualize the type of mixing procedures that they were using. Three main "mixing" procedures have been applied within these studies: (1) the researchers analyzed the 2 types of data at the same time but separately and integrated the results during interpretation; (2) the researchers connected the qualitative and quantitative portions in phases in such a way that 1 approach was built upon the findings of the other approach; and (3) the researchers mixed the 2 data types by embedding the analysis of 1 data type within the other. "Mixing" in mixed methods is more than just the combination of 2 independent components of the quantitative and qualitative data. The use of "mixing" procedures in health services research involves the integration, connection, and embedding of these 2 data components.

  1. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    PubMed

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with the estimation process rendered results from the BLQ model questionable. Importantly, accounting for heterogeneous variance enhanced inferential precision as the breadth of the confidence interval for the mean breakpoint decreased by approximately 44%. In summary, the article illustrates the use of linear and nonlinear mixed models for dose-response relationships accounting for heterogeneous residual variances, discusses important diagnostics and their implications for inference, and provides practical recommendations for computational troubleshooting.
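
    A simplified R analogue of the broken-line linear (BLL) ascending fit and the grid search over starting values described above (the study itself used SAS PROC GLIMMIX/NLMIXED with random block effects and heterogeneous variances, which this fixed-effects sketch on invented data omits):

      # Hypothetical dose-response data: G:F versus SID Trp:Lys ratio (%)
      set.seed(3)
      trp_lys <- rep(seq(14.5, 18.0, by = 0.5), each = 6)
      gf <- 0.60 + 0.02 * pmin(trp_lys - 16.5, 0) + rnorm(length(trp_lys), 0, 0.01)
      dat <- data.frame(trp_lys, gf)

      # Broken-line linear ascending model: linear rise up to breakpoint xb, then plateau
      bll <- gf ~ plateau + slope * pmin(trp_lys - xb, 0)

      # Grid search over starting values to help the nonlinear fit converge
      starts <- expand.grid(plateau = 0.6, slope = c(0.01, 0.02, 0.05),
                            xb = seq(15, 18, by = 0.5))
      fits <- lapply(seq_len(nrow(starts)), function(i)
        try(nls(bll, data = dat, start = as.list(starts[i, ])), silent = TRUE))
      ok   <- fits[!vapply(fits, inherits, logical(1), "try-error")]
      best <- ok[[which.min(vapply(ok, BIC, numeric(1)))]]
      coef(best)   # includes the estimated breakpoint xb
      BIC(best)    # criterion used to compare competing dose-response forms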

  2. Soil mixing of stratified contaminated sands.

    PubMed

    Al-Tabba, A; Ayotamuno, M J; Martin, R J

    2000-02-01

    Validation of soil mixing for the treatment of contaminated ground is needed in a wide range of site conditions to widen the application of the technology and to understand the mechanisms involved. Since very limited work has been carried out in heterogeneous ground conditions, this paper investigates the effectiveness of soil mixing in stratified sands using laboratory-scale augers. This enabled a low cost investigation of factors such as grout type and form, auger design, installation procedure, mixing mode, curing period, thickness of soil layers and natural moisture content on the unconfined compressive strength, leachability and leachate pH of the soil-grout mixes. The results showed that the auger design plays a very important part in the mixing process in heterogeneous sands. The variability of the properties measured in the stratified soils and the measurable variations caused by the various factors considered, highlighted the importance of duplicating appropriate in situ conditions, the usefulness of laboratory-scale modelling of in situ conditions and the importance of modelling soil and contaminant heterogeneities at the treatability study stage.

  3. Biology Notes

    ERIC Educational Resources Information Center

    School Science Review, 1977

    1977-01-01

    Includes procedures for demonstrating anaerobic respiration in peas, isolating virgin Drosophila females, solving mortality problems in young gerbils, measuring dissolved oxygen, constructing models for transpiration and DNA molecules, freezing chick embryos, mixing nutrient media, illustrating Darwinian ecological principles, and detecting…

  4. Detecting Aberrant Response Patterns in the Rasch Model. Rapport 87-3.

    ERIC Educational Resources Information Center

    Kogut, Jan

    In this paper, the detection of response patterns aberrant from the Rasch model is considered. For this purpose, a new person fit index, recently developed by I. W. Molenaar (1987) and an iterative estimation procedure are used in a simulation study of Rasch model data mixed with aberrant data. Three kinds of aberrant response behavior are…

  5. Alternative Models for Small Samples in Psychological Research: Applying Linear Mixed Effects Models and Generalized Estimating Equations to Repeated Measures Data

    ERIC Educational Resources Information Center

    Muth, Chelsea; Bales, Karen L.; Hinde, Katie; Maninger, Nicole; Mendoza, Sally P.; Ferrer, Emilio

    2016-01-01

    Unavoidable sample size issues beset psychological research that involves scarce populations or costly laboratory procedures. When incorporating longitudinal designs these samples are further reduced by traditional modeling techniques, which perform listwise deletion for any instance of missing data. Moreover, these techniques are limited in their…
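
    As a minimal illustration of the two alternatives named above (the study's own data and models are not reproduced), the sketch below fits a linear mixed effects model with lme4 and a GEE with geepack to a small simulated repeated-measures data set; all names and values are invented.

      library(lme4)
      library(geepack)

      set.seed(9)
      dat <- data.frame(id = factor(rep(1:12, each = 4)),
                        time = rep(0:3, times = 12))
      dat$y <- 10 + 1.2 * dat$time + rep(rnorm(12, 0, 2), each = 4) +
        rnorm(nrow(dat), 0, 1)

      # Subject-specific (conditional) model with a random intercept per subject
      fit_lmm <- lmer(y ~ time + (1 | id), data = dat)

      # Population-averaged (marginal) model with an exchangeable working correlation
      fit_gee <- geeglm(y ~ time, id = id, data = dat,
                        family = gaussian, corstr = "exchangeable")

      summary(fit_lmm)
      summary(fit_gee)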

  6. A Comparison of Declarative and Hybrid Declarative-Procedural Models for Rover Operations

    NASA Technical Reports Server (NTRS)

    Knight, Russell; Rabideau, Gregg; Lenda, Matthew; Maldague, Pierre

    2012-01-01

    The MAPGEN [2] (Mixed-initiative Activity Plan GENerator) planning system is a great example of a hybrid procedural/declarative system where the advantages of each are leveraged to produce an effective planner/scheduler for Mars Exploration Rover tactical planning. We explore the adaptation of the same domain to an entirely declarative planning system (ASPEN [4] Activity Scheduling and Planning ENvironment), and demonstrate that, with some translation, much of the procedural knowledge encoding is amenable to a declarative knowledge encoding.

  7. New generation mix-designs : laboratory-field testing and modifications to Texas HMA mix-design procedures.

    DOT National Transportation Integrated Search

    2012-10-01

    Recent changes to the Texas hot mix asphalt (HMA) mix-design procedures such as adaption of the higher-stiffer PG asphalt-binder grades and the Hamburg test have ensured that the mixes that are routinely used on the Texas highways are not prone to ru...

  8. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    NASA Astrophysics Data System (ADS)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of optimal estimator theory with machine-learning algorithms. The concept of an optimal estimator makes it possible to identify the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN is used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.

  9. Estimating proportions in petrographic mixing equations by least-squares approximation.

    PubMed

    Bryan, W B; Finger, L W; Chayes, F

    1969-02-28

    Petrogenetic hypotheses involving fractional crystallization, assimilation, or mixing of magmas may be expressed and tested as problems in least-squares approximation. The calculation uses all of the data and yields a unique solution for each model, thus avoiding the ambiguity inherent in graphical or trial-and-error procedures. The compositional change in the 1960 lavas of Kilauea Volcano, Hawaii, is used to illustrate the method of calculation.
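
    A minimal R sketch of this kind of calculation, expressing one composition as a least-squares mixture of candidate end members (the oxide values below are invented for illustration and are not the Kilauea data):

      # Hypothetical whole-rock and mineral compositions (wt% oxides)
      parent  <- c(SiO2 = 50.1, MgO = 7.5,  CaO = 11.2, Al2O3 = 13.8)
      olivine <- c(SiO2 = 40.0, MgO = 45.0, CaO = 0.3,  Al2O3 = 0.1)
      plag    <- c(SiO2 = 52.0, MgO = 0.2,  CaO = 12.5, Al2O3 = 30.0)
      cpx     <- c(SiO2 = 51.0, MgO = 16.0, CaO = 20.0, Al2O3 = 3.0)

      A <- cbind(olivine, plag, cpx)   # columns = mixing end members
      b <- parent                      # composition to be reproduced

      # Unconstrained least-squares proportions (no intercept)
      fit <- lm(b ~ A - 1)
      coef(fit)    # estimated proportion of each end member
      resid(fit)   # per-oxide misfit of the mixing model

    In practice the proportions are often further constrained (for example, to be non-negative, as with the nnls package), and the residual sum of squares is used to compare competing petrogenetic models.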

  10. Non-linear Growth Models in Mplus and SAS

    PubMed Central

    Grimm, Kevin J.; Ram, Nilam

    2013-01-01

    Non-linear growth curves or growth curves that follow a specified non-linear function in time enable researchers to model complex developmental patterns with parameters that are easily interpretable. In this paper we describe how a variety of sigmoid curves can be fit using the Mplus structural modeling program and the non-linear mixed-effects modeling procedure NLMIXED in SAS. Using longitudinal achievement data collected as part of a study examining the effects of preschool instruction on academic gain we illustrate the procedures for fitting growth models of logistic, Gompertz, and Richards functions. Brief notes regarding the practical benefits, limitations, and choices faced in the fitting and estimation of such models are included. PMID:23882134
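
    For a flavor of the model form, here is a sketch fitting a single logistic growth curve with R's self-starting SSlogis function (the paper fits the mixed-effects versions in Mplus and SAS PROC NLMIXED; the data here are simulated):

      set.seed(4)
      time  <- 0:10
      score <- SSlogis(time, Asym = 100, xmid = 4, scal = 1.5) + rnorm(11, 0, 3)

      fit <- nls(score ~ SSlogis(time, Asym, xmid, scal))
      summary(fit)   # Asym = upper asymptote, xmid = inflection point, scal = rate scale

    A mixed-effects version, with person-specific random effects on these parameters, could be fit with nlme::nlme or lme4::nlmer rather than plain nls.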

  11. Proceedings of the tenth annual DOE low-level waste management conference: Session 2: Site performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1988-12-01

    This document contains twelve papers on various aspects of low-level radioactive waste management. Topics of this volume include: performance assessment methodology; remedial action alternatives; site selection and site characterization procedures; intruder scenarios; sensitivity analysis procedures; mathematical models for mixed waste environmental transport; and risk assessment methodology. Individual papers were processed separately for the database. (TEM)

  12. Estimating prefledging survival: Allowing for brood mixing and dependence among brood mates

    USGS Publications Warehouse

    Flint, Paul L.; Pollock, Kenneth H.; Thomas, Dana; Sedinger, James S.

    1995-01-01

    Estimates of juvenile survival from hatch to fledging provide important information on waterfowl productivity. We develop a model for estimating survival of young waterfowl from hatch to fledging. Our model enables interchange of individuals among broods and relaxes the assumption that individuals within broods have independent survival probabilities. The model requires repeated observations of individually identifiable adults and their offspring that are not individually identifiable. A modified Kaplan-Meier procedure (Pollock et al. 1989a,b) and a modified Mayfield procedure (Mayfield 1961, 1975; Johnson 1979) can be used under this general modeling framework, and survival rates and corresponding variances of the point estimators can be determined.

  13. Using a tracer technique to identify the extent of non-ideal flows in the continuous mixing of non-Newtonian fluids

    NASA Astrophysics Data System (ADS)

    Patel, D.; Ein-Mozaffari, F.; Mehrvar, M.

    2013-05-01

    The identification of non-ideal flows in the continuous-flow mixing of non-Newtonian fluids is a challenging task for various chemical industries: plastic manufacturing, water and wastewater treatment, and pulp and paper manufacturing. Non-ideal flows such as channelling, recirculation, and dead zones significantly affect the performance of continuous-flow mixing systems. Therefore, the main objective of this paper was to develop an identification protocol to measure non-ideal flows in the continuous-flow mixing system. The extent of non-ideal flows was quantified using a dynamic model that incorporated channelling, recirculation, and dead volume in the mixing vessel. To estimate the dynamic model parameters, the system was excited using a frequency-modulated random binary input by injecting a saline solution (as a tracer) into the fresh feed stream prior to being pumped into the mixing vessel. The injection of the tracer was controlled by a computer-controlled on-off solenoid valve. Using the tracer technique, the extent of channelling and the effective mixed volume were successfully determined and used as mixing quality criteria. Such identification procedures can be applied in various areas of chemical engineering in order to improve the mixing quality.

  14. Hierarchical Bayes approach for subgroup analysis.

    PubMed

    Hsu, Yu-Yi; Zalkikar, Jyoti; Tiwari, Ram C

    2017-01-01

    In clinical data analysis, both treatment effect estimation and consistency assessment are important for a better understanding of the drug efficacy for the benefit of subjects in individual subgroups. The linear mixed-effects model has been used for subgroup analysis to describe treatment differences among subgroups with great flexibility. The hierarchical Bayes approach has been applied to linear mixed-effects model to derive the posterior distributions of overall and subgroup treatment effects. In this article, we discuss the prior selection for variance components in hierarchical Bayes, estimation and decision making of the overall treatment effect, as well as consistency assessment of the treatment effects across the subgroups based on the posterior predictive p-value. Decision procedures are suggested using either the posterior probability or the Bayes factor. These decision procedures and their properties are illustrated using a simulated example with normally distributed response and repeated measurements.

  15. Empirical Behavioral Models to Support Alternative Tools for the Analysis of Mixed-Priority Pedestrian-Vehicle Interaction in a Highway Capacity Context

    PubMed Central

    Rouphail, Nagui M.

    2011-01-01

    This paper presents behavioral-based models for describing pedestrian gap acceptance at unsignalized crosswalks in a mixed-priority environment, where some drivers yield and some pedestrians cross in gaps. Logistic regression models are developed to predict the probability of pedestrian crossings as a function of vehicle dynamics, pedestrian assertiveness, and other factors. In combination with prior work on probabilistic yielding models, the results can be incorporated in a simulation environment, where they can more fully describe the interaction of these two modes. The approach is intended to supplement HCM analytical procedure for locations where significant interaction occurs between drivers and pedestrians, including modern roundabouts. PMID:21643488
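
    A hypothetical sketch of the gap-acceptance logistic regression described above (predictors, units, and coefficients are invented, not the paper's calibrated values):

      set.seed(5)
      n <- 300
      gaps <- data.frame(
        gap_s     = runif(n, 1, 12),    # available gap, seconds
        veh_speed = runif(n, 15, 45),   # approaching vehicle speed, mph
        assertive = rbinom(n, 1, 0.4)   # assertive-pedestrian indicator
      )
      p <- plogis(-4 + 0.8 * gaps$gap_s - 0.05 * gaps$veh_speed + 0.7 * gaps$assertive)
      gaps$crossed <- rbinom(n, 1, p)

      fit <- glm(crossed ~ gap_s + veh_speed + assertive,
                 family = binomial(link = "logit"), data = gaps)
      summary(fit)

      # Predicted probability of crossing for one hypothetical gap
      predict(fit, newdata = data.frame(gap_s = 6, veh_speed = 25, assertive = 1),
              type = "response")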

  16. Effect of various pretreatment methods on anaerobic mixed microflora to enhance biohydrogen production utilizing dairy wastewater as substrate.

    PubMed

    Venkata Mohan, S; Lalit Babu, V; Sarma, P N

    2008-01-01

    Influence of different pretreatment methods applied on anaerobic mixed inoculum was evaluated for selectively enriching the hydrogen (H(2)) producing mixed culture using dairy wastewater as substrate. The experimental data showed the feasibility of molecular biohydrogen generation utilizing dairy wastewater as primary carbon source through metabolic participation. However, the efficiency of H(2) evolution and substrate removal efficiency were found to be dependent on the type of pretreatment procedure adopted on the parent inoculum. Among the studied pretreatment methods, chemical pretreatment (2-bromoethane sulphonic acid sodium salt (0.2 g/l); 24 h) procedure enabled higher H(2) yield along with concurrent substrate removal efficiency. On the contrary, heat-shock pretreatment (100 degrees C; 1 h) procedure resulted in relatively low H(2) yield. Compared to control experiments all the adopted pretreatment methods documented higher H(2) generation efficiency. In the case of combination experiments, integration of pH (pH 3; adjusted with ortho-phosphoric acid; 24 h) and chemical pretreatment evidenced higher H(2) production. Data envelopment analysis (DEA), a frontier analysis technique model was successfully applied to enumerate the relative efficiency of different pretreatment methods studied by considered pretreatment procedures as input and cumulative H(2) production rate and substrate degradation rate as corresponding two outputs.

  17. Separation of pedogenic and lithogenic components of magnetic susceptibility in the Chinese loess/palaeosol sequence as determined by the CBD procedure and a mixing analysis

    NASA Astrophysics Data System (ADS)

    Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.

    2000-08-01

    Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.

  18. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    PubMed

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
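
    As a simplified illustration of one of the curve shapes compared above, the sketch below fits Wood's curve, y(t) = a t^b exp(-kt), to simulated monthly fat-to-protein ratio records with plain nls (the paper fits the mixed-model versions in SAS PROC NLMIXED with random animal effects, which this single-curve example omits):

      set.seed(6)
      month <- rep(1:10, times = 30)                 # test month in lactation
      fpr   <- 1.4 * month^(-0.15) * exp(0.01 * month) +
        rnorm(length(month), 0, 0.05)
      dat <- data.frame(month, fpr)

      fit <- nls(fpr ~ a * month^b * exp(-k * month),
                 data = dat, start = list(a = 1.5, b = -0.1, k = -0.01))
      coef(fit)   # a, b, k determine the level, early decline, and late rise
      BIC(fit)    # same type of criterion used in the paper to compare curves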

  19. Procedural Portfolio Planning in Plastic Surgery, Part 2: Collaboration Between Surgeons and Hospital Administrators to Develop a Funds Flow Model for Procedures Performed at an Academic Medical Center.

    PubMed

    Hultman, Charles Scott

    2016-06-01

    Although plastic surgeons make important contributions to the clinical, educational, and research missions of academic medical centers (AMCs), determining the financial value of a plastic surgery service can be difficult, due to complex cost accounting systems. We analyzed the financial impact of plastic surgery on an AMC, by examining the contribution margins and operating income of surgical procedures. We collaborated with hospital administrators to implement 3 types of strategic changes: (1) growth of areas with high contribution margin, (2) curtailment of high-risk procedures with negative contribution margin, (3) improved efficiency of mission-critical services with high resource consumption. Outcome measures included: facility charges, hospital collections, contribution margin, operating margin, and operating room times. We also studied the top 50 Current Procedural Terminology codes (total case number × charge/case), ranking procedures for profitability, as determined by operating margin. During the 2-year study period, we had no turnover in faculty; did not pursue any formal marketing; did not change our surgical fees, billing system, or payer mix; and maintained our commitment to indigent care. After rebalancing our case mix, through procedural portfolio planning, average hospital operating income/procedure increased from $-79 to $+816. Volume and diversity of cases increased, with no change in payer mix. Although charges/case decreased, both contribution margin and operating margin increased, due to improved throughput and decreased operating room times. The 5 most profitable procedures for the hospital were hernia repair, mandibular osteotomy, hand skin graft, free fibula flap, and head and neck flap, whereas the 5 least profitable were latissimus breast reconstruction, craniosynostosis repair, free-flap breast reconstruction, trunk skin graft, and cutaneous free flap. Total operating income for the hospital, from plastic surgery procedures, increased from $-115,103 to $+1,277,040, of which $350,000 (25%) was returned to the practice plan as enterprise funds to support program development. Through focused strategic initiatives, plastic surgeons and hospital administrators can work together to unlock the latent value of a plastic surgery service to an AMC. Specific financial benefits to the hospital include increased contribution margin and operating income, the latter of which can be reinvested in the plastic surgery service through a gain-sharing model.

  20. Calculation of three-dimensional compressible laminar and turbulent boundary layers. An implicit finite-difference procedure for solving the three-dimensional compressible laminar, transitional, and turbulent boundary-layer equations

    NASA Technical Reports Server (NTRS)

    Harris, J. E.

    1975-01-01

    An implicit finite-difference procedure is presented for solving the compressible three-dimensional boundary-layer equations. The method is second-order accurate, unconditionally stable (conditional stability for reverse cross flow), and efficient from the viewpoint of computer storage and processing time. The Reynolds stress terms are modeled by (1) a single-layer mixing length model and (2) a two-layer eddy viscosity model. These models, although simple in concept, accurately predicted the equilibrium turbulent flow for the conditions considered. Numerical results are compared with experimental wall and profile data for a cone at an angle of attack larger than the cone semiapex angle. These comparisons clearly indicate that the numerical procedure and turbulence models accurately predict the experimental data with as few as 21 nodal points in the plane normal to the wall boundary.

  1. The Behavioral Economics of Choice and Interval Timing

    PubMed Central

    Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.

    2009-01-01

    We propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with the highest payoff is emitted. The model accounts for a wide range of data from procedures such as simple bisection, metacognition in animals, economic effects in free-operant psychophysical procedures and paradoxical choice in double-bisection procedures. Although it assumes logarithmic time representation, it can also account for data from the time-left procedure usually cited in support of linear time representation. It encounters some difficulties in complex free-operant choice procedures, such as concurrent mixed fixed-interval schedules as well as some of the data on double bisection, that may involve additional processes. Overall, BEM provides a theoretical framework for understanding how reinforcement and interval timing work together to determine choice between temporally differentiated reinforcers. PMID:19618985

  2. Extending the Service Life of Pavements

    NASA Astrophysics Data System (ADS)

    Gschwendt, Ivan

    2018-03-01

    The cost of road construction and expenditures on the maintenance of pavements, i.e., their whole-life cost, is substantial. The paper describes a procedure for a pavement management system with degradation models and estimates the length of time to the rehabilitation of an asphalt pavement. Using the theory of pavement mechanics, we calculated the stresses and strains in the layers of two pavement models. High-modulus asphalt concrete, an asphalt mix with a high binder content, and an asphalt mix with binder modifications are new road-building materials. Prolonging the time to rehabilitation of pavements is possible with such materials.

  3. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  4. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
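
    A generic two-stage sketch in R, in the spirit of the FDA-based route described above (this is not the FODEmixed implementation): stage 1 smooths each individual's series and extracts derivative estimates, and stage 2 fits a mixed-effects model linking the estimated derivatives to the states. The data, smoother, and model form are illustrative choices.

      library(lme4)

      set.seed(7)
      ids <- 1:25; times <- seq(0, 10, by = 0.25)
      sim_one <- function(id) {
        phase <- runif(1, 0, pi)
        data.frame(id = id, t = times,
                   x = sin(times + phase) + rnorm(length(times), 0, 0.1))
      }
      dat <- do.call(rbind, lapply(ids, sim_one))

      # Stage 1: smoothing-spline estimates of x, dx/dt, and d2x/dt2 per person
      stage1 <- do.call(rbind, lapply(split(dat, dat$id), function(d) {
        sm <- smooth.spline(d$t, d$x)
        data.frame(id  = d$id,
                   x   = predict(sm, d$t)$y,
                   dx  = predict(sm, d$t, deriv = 1)$y,
                   d2x = predict(sm, d$t, deriv = 2)$y)
      }))

      # Stage 2: linear-oscillator-style ODE, d2x = eta * x + zeta * dx,
      # with a person-specific (random) deviation in the frequency parameter
      fit <- lmer(d2x ~ 0 + x + dx + (0 + x | id), data = stage1)
      summary(fit)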

  5. Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects

    PubMed Central

    Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.

    2015-01-01

    The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations which can distort the estimated exposure-response curve particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system holding certain pivotal quantities such as the information matrix to be constant. In this paper, we present an approximate formula for the deleted estimates and Cook’s distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation based procedure suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post model fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
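
    As a pointer to how such diagnostics look in practice, a sketch using the influence.ME package for lme4 fits is given below (this is a different, readily available implementation, not the approximate formula derived in the paper; data are simulated and names invented):

      library(lme4)
      library(influence.ME)

      set.seed(8)
      dat <- data.frame(id = factor(rep(1:30, each = 8)),
                        exposure = rexp(240, 1))
      dat$y <- rbinom(240, 1, plogis(-1 + 0.6 * dat$exposure +
                                     rep(rnorm(30, 0, 0.5), each = 8)))

      fit <- glmer(y ~ exposure + (1 | id), data = dat, family = binomial)

      # Delete one cluster (subject) at a time and recompute the estimates
      infl <- influence(fit, group = "id")
      cooks.distance(infl)   # overall influence of each subject
      dfbetas(infl)          # standardized change in each fixed-effect estimate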

  6. Interpretable inference on the mixed effect model with the Box-Cox transformation.

    PubMed

    Maruo, K; Yamaguchi, Y; Noma, H; Gosho, M

    2017-07-10

    We derived results for inference on parameters of the marginal model of the mixed effect model with the Box-Cox transformation based on the asymptotic theory approach. We also provided a robust variance estimator of the maximum likelihood estimator of the parameters of this model in consideration of the model misspecifications. Using these results, we developed an inference procedure for the difference of the model median between treatment groups at the specified occasion in the context of mixed effects models for repeated measures analysis for randomized clinical trials, which provided interpretable estimates of the treatment effect. From simulation studies, it was shown that our proposed method controlled type I error of the statistical test for the model median difference in almost all the situations and had moderate or high performance for power compared with the existing methods. We illustrated our method with cluster of differentiation 4 (CD4) data in an AIDS clinical trial, where the interpretability of the analysis results based on our proposed method is demonstrated. Copyright © 2017 John Wiley & Sons, Ltd.

  7. Toward Contactless Biology: Acoustophoretic DNA Transfection

    NASA Astrophysics Data System (ADS)

    Vasileiou, Thomas; Foresti, Daniele; Bayram, Adem; Poulikakos, Dimos; Ferrari, Aldo

    2016-02-01

    Acoustophoresis revolutionized the field of container-less manipulation of liquids and solids by enabling mixing procedures which avoid contamination and loss of reagents due to the contact with the support. While its applications to chemistry and engineering are straightforward, additional developments are needed to obtain reliable biological protocols in a contactless environment. Here, we provide a first, fundamental step towards biological reactions in air by demonstrating the acoustophoretic DNA transfection of mammalian cells. We developed an original acoustophoretic design capable of levitating, moving and mixing biological suspensions of living mammalians cells and of DNA plasmids. The precise and sequential delivery of the mixed solutions into tissue culture plates is actuated by a novel mechanism based on the controlled actuation of the acoustophoretic force. The viability of the contactless procedure is tested using a cellular model sensitive to small perturbation of neuronal differentiation pathways. Additionally, the efficiency of the transfection procedure is compared to standard, container-based methods for both single and double DNA transfection and for different cell types including adherent growing HeLa cancer cells, and low adhesion neuron-like PC12 cells. In all, this work provides a proof of principle which paves the way to the development of high-throughput acoustophoretic biological reactors.

  8. Toward Contactless Biology: Acoustophoretic DNA Transfection.

    PubMed

    Vasileiou, Thomas; Foresti, Daniele; Bayram, Adem; Poulikakos, Dimos; Ferrari, Aldo

    2016-02-01

    Acoustophoresis revolutionized the field of container-less manipulation of liquids and solids by enabling mixing procedures which avoid contamination and loss of reagents due to the contact with the support. While its applications to chemistry and engineering are straightforward, additional developments are needed to obtain reliable biological protocols in a contactless environment. Here, we provide a first, fundamental step towards biological reactions in air by demonstrating the acoustophoretic DNA transfection of mammalian cells. We developed an original acoustophoretic design capable of levitating, moving and mixing biological suspensions of living mammalians cells and of DNA plasmids. The precise and sequential delivery of the mixed solutions into tissue culture plates is actuated by a novel mechanism based on the controlled actuation of the acoustophoretic force. The viability of the contactless procedure is tested using a cellular model sensitive to small perturbation of neuronal differentiation pathways. Additionally, the efficiency of the transfection procedure is compared to standard, container-based methods for both single and double DNA transfection and for different cell types including adherent growing HeLa cancer cells, and low adhesion neuron-like PC12 cells. In all, this work provides a proof of principle which paves the way to the development of high-throughput acoustophoretic biological reactors.

  9. Toward Contactless Biology: Acoustophoretic DNA Transfection

    PubMed Central

    Vasileiou, Thomas; Foresti, Daniele; Bayram, Adem; Poulikakos, Dimos; Ferrari, Aldo

    2016-01-01

    Acoustophoresis revolutionized the field of container-less manipulation of liquids and solids by enabling mixing procedures which avoid contamination and loss of reagents due to the contact with the support. While its applications to chemistry and engineering are straightforward, additional developments are needed to obtain reliable biological protocols in a contactless environment. Here, we provide a first, fundamental step towards biological reactions in air by demonstrating the acoustophoretic DNA transfection of mammalian cells. We developed an original acoustophoretic design capable of levitating, moving and mixing biological suspensions of living mammalians cells and of DNA plasmids. The precise and sequential delivery of the mixed solutions into tissue culture plates is actuated by a novel mechanism based on the controlled actuation of the acoustophoretic force. The viability of the contactless procedure is tested using a cellular model sensitive to small perturbation of neuronal differentiation pathways. Additionally, the efficiency of the transfection procedure is compared to standard, container-based methods for both single and double DNA transfection and for different cell types including adherent growing HeLa cancer cells, and low adhesion neuron-like PC12 cells. In all, this work provides a proof of principle which paves the way to the development of high-throughput acoustophoretic biological reactors. PMID:26828312

  10. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
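
    A brief R sketch of the two implementations compared in the study, run on a small invented set of 2x2 study counts (TP, FN, FP, TN per study); the simulation scenarios and results above are not reproduced here.

      library(mada)    # reitsma(): the general (approximate-normal) bivariate model
      library(lme4)    # glmer(): the generalized bivariate model, exact binomial likelihood

      dat <- data.frame(TP = c(45, 30, 60, 28, 50, 32, 70, 41),
                        FN = c(5, 8, 6, 4, 10, 5, 7, 9),
                        FP = c(10, 12, 9, 15, 8, 14, 11, 6),
                        TN = c(90, 70, 85, 60, 95, 66, 80, 72))

      fit_general <- reitsma(dat)
      summary(fit_general)    # pooled (logit) sensitivity and false positive rate

      # Long format for the generalized model: one binomial record per study and outcome
      long <- with(dat, data.frame(
        study   = factor(rep(seq_len(nrow(dat)), 2)),
        outcome = rep(c("sens", "spec"), each = nrow(dat)),
        events  = c(TP, TN),
        n       = c(TP + FN, FP + TN)))

      fit_generalized <- glmer(cbind(events, n - events) ~ 0 + outcome + (0 + outcome | study),
                               data = long, family = binomial)
      summary(fit_generalized)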

  11. Development of a Hybrid RANS/LES Method for Turbulent Mixing Layers

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli

    2001-01-01

    Significant research has been underway for several years in NASA Glenn Research Center's nozzle branch to develop advanced computational methods for simulating turbulent flows in exhaust nozzles. The primary efforts of this research have concentrated on improving our ability to calculate the turbulent mixing layers that dominate flows both in the exhaust systems of modern-day aircraft and in those of hypersonic vehicles under development. As part of these efforts, a hybrid numerical method was recently developed to simulate such turbulent mixing layers. The method developed here is intended for configurations in which a dominant structural feature provides an unsteady mechanism to drive the turbulent development in the mixing layer. Interest in Large Eddy Simulation (LES) methods have increased in recent years, but applying an LES method to calculate the wide range of turbulent scales from small eddies in the wall-bounded regions to large eddies in the mixing region is not yet possible with current computers. As a result, the hybrid method developed here uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section and uses a LES procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS-LES method on stretched, non-Cartesian grids. With this technique, closure for the RANS equations is obtained by using the Cebeci-Smith algebraic turbulence model in conjunction with the wall-function approach of Ota and Goldberg. The LES equations are closed using the Smagorinsky subgrid scale model. Although the function of the Cebeci-Smith model to replace all of the turbulent stresses is quite different from that of the Smagorinsky subgrid model, which only replaces the small subgrid turbulent stresses, both are eddy viscosity models and both are derived at least in part from mixing-length theory. The similar formulation of these two models enables the RANS and LES equations to be solved with a single solution scheme and computational grid. The hybrid RANS-LES method has been applied to a benchmark compressible mixing layer experiment in which two isolated supersonic streams, separated by a splitter plate, provide the flows to a constant-area mixing section. Although the configuration is largely two dimensional in nature, three-dimensional calculations were found to be necessary to enable disturbances to develop in three spatial directions and to transition to turbulence. The flow in the initial part of the mixing section consists of a periodic vortex shedding downstream of the splitter plate trailing edge. This organized vortex shedding then rapidly transitions to a turbulent structure, which is very similar to the flow development observed in the experiments. Although the qualitative nature of the large-scale turbulent development in the entire mixing section is captured well by the LES part of the current hybrid method, further efforts are planned to directly calculate a greater portion of the turbulence spectrum and to limit the subgrid scale modeling to only the very small scales. This will be accomplished by the use of higher accuracy solution schemes and more powerful computers, measured both in speed and memory capabilities.

  12. Image guided percutaneous spine procedures using an optical see-through head mounted display: proof of concept and rationale.

    PubMed

    Deib, Gerard; Johnson, Alex; Unberath, Mathias; Yu, Kevin; Andress, Sebastian; Qian, Long; Osgood, Gregory; Navab, Nassir; Hui, Ferdinand; Gailloud, Philippe

    2018-05-30

    Optical see-through head mounted displays (OST-HMDs) offer a mixed reality (MixR) experience with unhindered procedural site visualization during procedures using high resolution radiographic imaging. This technical note describes our preliminary experience with percutaneous spine procedures utilizing OST-HMD as an alternative to traditional angiography suite monitors. MixR visualization was achieved using the Microsoft HoloLens system. Various spine procedures (vertebroplasty, kyphoplasty, and percutaneous discectomy) were performed on a lumbar spine phantom with commercially available devices. The HMD created a real time MixR environment by superimposing virtual posteroanterior and lateral views onto the interventionalist's field of view. The procedures were filmed from the operator's perspective. Videos were reviewed to assess whether key anatomic landmarks and materials were reliably visualized. Dosimetry and procedural times were recorded. The operator completed a questionnaire following each procedure, detailing benefits, limitations, and visualization mode preferences. Percutaneous vertebroplasty, kyphoplasty, and discectomy procedures were successfully performed using OST-HMD image guidance on a lumbar spine phantom. Dosimetry and procedural time compared favorably with typical procedural times. Conventional and MixR visualization modes were equally effective in providing image guidance, with key anatomic landmarks and materials reliably visualized. This preliminary study demonstrates the feasibility of utilizing OST-HMDs for image guidance in interventional spine procedures. This novel visualization approach may serve as a valuable adjunct tool during minimally invasive percutaneous spine treatment. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Mixed models and reduction method for dynamic analysis of anisotropic shells

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Peters, J. M.

    1985-01-01

    A time-domain computational procedure is presented for predicting the dynamic response of laminated anisotropic shells. The two key elements of the procedure are: (1) use of mixed finite element models having independent interpolation (shape) functions for stress resultants and generalized displacements for the spatial discretization of the shell, with the stress resultants allowed to be discontinuous at interelement boundaries; and (2) use of a dynamic reduction method, with the global approximation vectors consisting of the static solution and an orthogonal set of Lanczos vectors. The dynamic reduction is accomplished by means of successive application of the finite element method and the classical Rayleigh-Ritz technique. The finite element method is first used to generate the global approximation vectors. Then the Rayleigh-Ritz technique is used to generate a reduced system of ordinary differential equations in the amplitudes of these modes. The temporal integration of the reduced differential equations is performed by using an explicit half-station central difference scheme (Leap-frog method). The effectiveness of the proposed procedure is demonstrated by means of a numerical example and its advantages over reduction methods used with the displacement formulation are discussed.
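    As a hedged illustration of the reduction step only (the notation below is mine, and the mixed formulation's independent stress-resultant unknowns are omitted), the response is sought in the span of the static solution and the Lanczos vectors, and the reduced amplitudes are advanced with the explicit central-difference (leap-frog) scheme:

      % Reduced-order dynamics: the columns of \Gamma are the static solution and the Lanczos vectors
      \mathbf{u}(t) \approx \boldsymbol{\Gamma}\,\mathbf{q}(t), \qquad
      \tilde{\mathbf{M}}\,\ddot{\mathbf{q}} + \tilde{\mathbf{K}}\,\mathbf{q} = \tilde{\mathbf{f}}(t), \qquad
      \tilde{\mathbf{M}} = \boldsymbol{\Gamma}^{T}\mathbf{M}\,\boldsymbol{\Gamma}, \quad
      \tilde{\mathbf{K}} = \boldsymbol{\Gamma}^{T}\mathbf{K}\,\boldsymbol{\Gamma}, \quad
      \tilde{\mathbf{f}} = \boldsymbol{\Gamma}^{T}\mathbf{f}(t)

      % Explicit central-difference (leap-frog) update of the reduced amplitudes
      \mathbf{q}^{\,n+1} = 2\,\mathbf{q}^{\,n} - \mathbf{q}^{\,n-1}
        + \Delta t^{2}\,\tilde{\mathbf{M}}^{-1}\left(\tilde{\mathbf{f}}^{\,n} - \tilde{\mathbf{K}}\,\mathbf{q}^{\,n}\right)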

  14. Mixed-reality simulation for neurosurgical procedures.

    PubMed

    Bova, Frank J; Rajon, Didier A; Friedman, William A; Murad, Gregory J; Hoh, Daniel J; Jacob, R Patrick; Lampotang, Samsun; Lizdas, David E; Lombard, Gwen; Lister, J Richard

    2013-10-01

    Surgical education is moving rapidly to the use of simulation for technical training of residents and maintenance or upgrading of surgical skills in clinical practice. To optimize the learning exercise, it is essential that both visual and haptic cues are presented to best present a real-world experience. Many systems attempt to achieve this goal through a total virtual interface. To demonstrate that the most critical aspect in optimizing a simulation experience is to provide the visual and haptic cues, allowing the training to fully mimic the real-world environment. Our approach has been to create a mixed-reality system consisting of a physical and a virtual component. A physical model of the head or spine is created with a 3-dimensional printer using deidentified patient data. The model is linked to a virtual radiographic system or an image guidance platform. A variety of surgical challenges can be presented in which the trainee must use the same anatomic and radiographic references required during actual surgical procedures. Using the aforementioned techniques, we have created simulators for ventriculostomy, percutaneous stereotactic lesion procedure for trigeminal neuralgia, and spinal instrumentation. The design and implementation of these platforms are presented. The system has provided the residents an opportunity to understand and appreciate the complex 3-dimensional anatomy of the 3 neurosurgical procedures simulated. The systems have also provided an opportunity to break procedures down into critical segments, allowing the user to concentrate on specific areas of deficiency.

  15. Spectral Upscaling for Graph Laplacian Problems with Application to Reservoir Simulation

    DOE PAGES

    Barker, Andrew T.; Lee, Chak S.; Vassilevski, Panayot S.

    2017-10-26

    Here, we consider coarsening procedures for graph Laplacian problems written in a mixed saddle-point form. In that form, in addition to the original (vertex) degrees of freedom (dofs), we also have edge degrees of freedom. We extend previously developed aggregation-based coarsening procedures applied to both sets of dofs to now allow more than one coarse vertex dof per aggregate. Those dofs are selected as certain eigenvectors of local graph Laplacians associated with each aggregate. Additionally, we coarsen the edge dofs by using traces of the discrete gradients of the already constructed coarse vertex dofs. These traces are defined on the interface edges that connect any two adjacent aggregates. The overall procedure is a modification of the spectral upscaling procedure developed previously for the mixed finite element discretization of diffusion-type PDEs, which has the important property of maintaining inf-sup stability on coarse levels and having provable approximation properties. We consider applications to partitioning a general graph and to a finite volume discretization interpreted as a graph Laplacian, developing consistent and accurate coarse-scale models of a fine-scale problem.

  16. Rapid and Efficient Filtration-Based Procedure for Separation and Safe Analysis of CBRN Mixed Samples

    PubMed Central

    Bentahir, Mostafa; Laduron, Frederic; Irenge, Leonid; Ambroise, Jérôme; Gala, Jean-Luc

    2014-01-01

    Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed sample) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method first optimized on separate C- and B-agents, and then assessed on a model CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), Bacillus atrophaeus and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as VX and soman (GD) nerve agent surrogates, respectively. Nanoseparation centrifugal devices with various pore size cut-offs (30 kD up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1 and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOFMS) was used for quantifying EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kD cut-off retain more than 99.99% of the biological agents (MS2, AcNPV, Bacillus atrophaeus and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kD filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% of PEG-400, EMPA and PMPA recovery in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile and extremely efficient. The separation method developed in this work therefore constitutes a useful model for further evaluating and comparing alternative separation procedures for safe handling and preparation of CB mixed samples. PMID:24505375

  17. Simultaneous diagnosis of radial profiles and mix in NIF ignition-scale implosions via X-ray spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ciricosta, O.; Scott, H.; Durey, P.

    In a National Ignition Facility implosion, hydrodynamic instabilities may cause the cold material from the imploding shell to be injected into the hot-spot (hot-spot mix), enhancing the radiative and conductive losses, which in turn may lead to a quenching of the ignition process. The bound-bound features of the spectrum emitted by high-Z ablator dopants that get mixed into the hot-spot have been previously used to infer the total amount of mixed mass; however, the typical errorbars are larger than the maximum tolerable mix. We present in this paper an improved 2D model for mix spectroscopy which can be used to retrieve information on both the amount of mixed mass and the full imploded plasma profile. By performing radiation transfer and simultaneously fitting all of the features exhibited by the spectra, we are able to constrain self-consistently the effect of the opacity of the external layers of the target on the emission, thus improving the accuracy of the inferred mixed mass. The model's predictive capabilities are first validated by fitting simulated spectra arising from fully characterized hydrodynamic simulations, and then, the model is applied to previously published experimental results, providing values of mix mass in agreement with previous estimates. Finally, we show that the new self consistent procedure leads to better constrained estimates of mix and also provides insight into the sensitivity of the hot-spot spectroscopy to the spatial properties of the imploded capsule, such as the in-flight aspect ratio of the cold fuel surrounding the hotspot.

  18. Simultaneous diagnosis of radial profiles and mix in NIF ignition-scale implosions via X-ray spectroscopy

    DOE PAGES

    Ciricosta, O.; Scott, H.; Durey, P.; ...

    2017-11-06

    In a National Ignition Facility implosion, hydrodynamic instabilities may cause the cold material from the imploding shell to be injected into the hot-spot (hot-spot mix), enhancing the radiative and conductive losses, which in turn may lead to a quenching of the ignition process. The bound-bound features of the spectrum emitted by high-Z ablator dopants that get mixed into the hot-spot have been previously used to infer the total amount of mixed mass; however, the typical errorbars are larger than the maximum tolerable mix. We present in this paper an improved 2D model for mix spectroscopy which can be used to retrieve information on both the amount of mixed mass and the full imploded plasma profile. By performing radiation transfer and simultaneously fitting all of the features exhibited by the spectra, we are able to constrain self-consistently the effect of the opacity of the external layers of the target on the emission, thus improving the accuracy of the inferred mixed mass. The model's predictive capabilities are first validated by fitting simulated spectra arising from fully characterized hydrodynamic simulations, and then, the model is applied to previously published experimental results, providing values of mix mass in agreement with previous estimates. Finally, we show that the new self consistent procedure leads to better constrained estimates of mix and also provides insight into the sensitivity of the hot-spot spectroscopy to the spatial properties of the imploded capsule, such as the in-flight aspect ratio of the cold fuel surrounding the hotspot.

  19. Simultaneous diagnosis of radial profiles and mix in NIF ignition-scale implosions via X-ray spectroscopy

    NASA Astrophysics Data System (ADS)

    Ciricosta, O.; Scott, H.; Durey, P.; Hammel, B. A.; Epstein, R.; Preston, T. R.; Regan, S. P.; Vinko, S. M.; Woolsey, N. C.; Wark, J. S.

    2017-11-01

    In a National Ignition Facility implosion, hydrodynamic instabilities may cause the cold material from the imploding shell to be injected into the hot-spot (hot-spot mix), enhancing the radiative and conductive losses, which in turn may lead to a quenching of the ignition process. The bound-bound features of the spectrum emitted by high-Z ablator dopants that get mixed into the hot-spot have been previously used to infer the total amount of mixed mass; however, the typical errorbars are larger than the maximum tolerable mix. We present here an improved 2D model for mix spectroscopy which can be used to retrieve information on both the amount of mixed mass and the full imploded plasma profile. By performing radiation transfer and simultaneously fitting all of the features exhibited by the spectra, we are able to constrain self-consistently the effect of the opacity of the external layers of the target on the emission, thus improving the accuracy of the inferred mixed mass. The model's predictive capabilities are first validated by fitting simulated spectra arising from fully characterized hydrodynamic simulations, and then, the model is applied to previously published experimental results, providing values of mix mass in agreement with previous estimates. We show that the new self consistent procedure leads to better constrained estimates of mix and also provides insight into the sensitivity of the hot-spot spectroscopy to the spatial properties of the imploded capsule, such as the in-flight aspect ratio of the cold fuel surrounding the hotspot.

  20. High Performance, Robust Control of Flexible Space Structures: MSFC Center Director's Discretionary Fund

    NASA Technical Reports Server (NTRS)

    Whorton, M. S.

    1998-01-01

    Many spacecraft systems have ambitious objectives that place stringent requirements on control systems. Achievable performance is often limited because of difficulty of obtaining accurate models for flexible space structures. To achieve sufficiently high performance to accomplish mission objectives may require the ability to refine the control design model based on closed-loop test data and tune the controller based on the refined model. A control system design procedure is developed based on mixed H2/H(infinity) optimization to synthesize a set of controllers explicitly trading between nominal performance and robust stability. A homotopy algorithm is presented which generates a trajectory of gains that may be implemented to determine maximum achievable performance for a given model error bound. Examples show that a better balance between robustness and performance is obtained using the mixed H2/H(infinity) design method than either H2 or mu-synthesis control design. A second contribution is a new procedure for closed-loop system identification which refines parameters of a control design model in a canonical realization. Examples demonstrate convergence of the parameter estimation and improved performance realized by using the refined model for controller redesign. These developments result in an effective mechanism for achieving high-performance control of flexible space structures.
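    Schematically, and in notation that is mine rather than the report's, the mixed design can be read as trading a nominal-performance H2 cost against a robust-stability H-infinity constraint, with the homotopy sweeping the trade-off parameter:

      \min_{K\ \mathrm{stabilizing}}\ \big\| T_{z_{2}w_{2}}(K) \big\|_{2}^{2}
      \quad \text{subject to} \quad
      \big\| T_{z_{\infty}w_{\infty}}(K) \big\|_{\infty} < \gamma

    where T denotes the indicated closed-loop transfer functions; decreasing gamma from the unconstrained H2 optimum toward the robustness bound generates the trajectory of gains described in the abstract.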

  1. Mixed-Mode Decohesion Elements for Analyses of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.

    2001-01-01

    A new 8-node decohesion element with mixed mode capability is proposed and demonstrated. The element is used at the interface between solid finite elements to model the initiation and propagation of delamination. A single displacement-based damage parameter is used in a strain softening law to track the damage state of the interface. The method can be used in conjunction with conventional material degradation procedures to account for in-plane and intra-laminar damage modes. The accuracy of the predictions is evaluated in single mode delamination tests, in the mixed-mode bending test, and in a structural configuration consisting of the debonding of a stiffener flange from its skin.

  2. Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery.

    PubMed

    Halic, Tansel; Kockara, Sinan; Bayrak, Coskun; Rowe, Richard

    2010-10-07

    Until quite recently, spinal disorder problems in the U.S. have been treated by fusing cervical vertebrae rather than by replacing the cervical disc with an artificial disc. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to the deficiencies in available diagnostic tools and the insufficient number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in the development of an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. A mixed reality surgical simulator that targets the rasping procedure in artificial cervical disc replacement surgery with a VICON motion tracking system was developed. There were several challenges in the development of the MR surgical simulator. The first was the assembly of different hardware components for surgical simulation development, which involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces. This challenge was handled with finite element modeling. The third challenge was minimization of error in mapping movements of an actor in the real model to a virtual model, in a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated. The main problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under the various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by five different physicians, who found that the MR simulator is effective enough to teach the anatomical details of cervical discs and to grasp the basics of the ACDR surgery and rasping procedure.

  3. Mixed reality simulation of rasping procedure in artificial cervical disc replacement (ACDR) surgery

    PubMed Central

    2010-01-01

    Background Until quite recently, spinal disorder problems in the U.S. have been treated by fusing cervical vertebrae rather than by replacing the cervical disc with an artificial disc. Cervical disc replacement is a recently approved procedure in the U.S. It is one of the most challenging surgical procedures in the medical field due to the deficiencies in available diagnostic tools and the insufficient number of surgical practices. For physicians and surgical instrument developers, it is critical to understand how to successfully deploy the new artificial disc replacement systems. Without proper understanding and practice of the deployment procedure, it is possible to injure the vertebral body. Mixed reality (MR) and virtual reality (VR) surgical simulators are becoming an indispensable part of physicians' training, since they offer a risk-free training environment. In this study, the MR simulation framework and the intricacies involved in the development of an MR simulator for the rasping procedure in artificial cervical disc replacement (ACDR) surgery are investigated. The major components that make up the MR surgical simulator with a motion tracking system are addressed. Findings A mixed reality surgical simulator that targets the rasping procedure in artificial cervical disc replacement surgery with a VICON motion tracking system was developed. There were several challenges in the development of the MR surgical simulator. The first was the assembly of different hardware components for surgical simulation development, which involves knowledge and application of interdisciplinary fields such as signal processing, computer vision and graphics, along with the design and placement of sensors. The second challenge was the creation of a physically correct model of the rasping procedure in order to attain critical forces. This challenge was handled with finite element modeling. The third challenge was minimization of error in mapping movements of an actor in the real model to a virtual model, in a process called registration. This issue was overcome by a two-way (virtual object to real domain and real domain to virtual object) semi-automatic registration method. Conclusions The applicability of the VICON MR setting for the ACDR surgical simulator is demonstrated. The main problems encountered in MR surgical simulator development are addressed. First, an effective environment for MR surgical development is constructed. Second, the strain and stress intensities and critical forces are simulated under the various rasp instrument loadings, with impacts applied on the intervertebral surfaces of the anterior vertebrae throughout the rasping procedure. Third, two approaches are introduced to solve the registration problem in the MR setting. Results show that our system creates an effective environment for surgical simulation development and solves tedious and time-consuming registration problems caused by misalignments. Further, the MR ACDR surgery simulator was tested by five different physicians, who found that the MR simulator is effective enough to teach the anatomical details of cervical discs and to grasp the basics of the ACDR surgery and rasping procedure. PMID:20946594

  4. Auditory phase and frequency discrimination: a comparison of nine procedures.

    PubMed

    Creelman, C D; Macmillan, N A

    1979-02-01

    Two auditory discrimination tasks were thoroughly investigated: discrimination of frequency differences from a sinusoidal signal of 200 Hz and discrimination of differences in relative phase of mixed sinusoids of 200 Hz and 400 Hz. For each task psychometric functions were constructed for three observers, using nine different psychophysical measurement procedures. These procedures included yes-no, two-interval forced-choice, and various fixed- and variable-standard designs that investigators have used in recent years. The data showed wide ranges of apparent sensitivity. For frequency discrimination, models derived from signal detection theory for each psychophysical procedure seem to account for the performance differences. For phase discrimination the models do not account for the data. We conclude that for some discriminative continua the assumptions of signal detection theory are appropriate, and underlying sensitivity may be derived from raw data by appropriate transformations. For other continua the models of signal detection theory are probably inappropriate; we speculate that phase might be discriminable only on the basis of comparison or change and suggest some tests of our hypothesis.

  5. Do Mixed-Flora Preoperative Urine Cultures Matter?

    PubMed

    Polin, Michael R; Kawasaki, Amie; Amundsen, Cindy L; Weidner, Alison C; Siddiqui, Nazema Y

    2017-06-01

    To determine whether mixed-flora preoperative urine cultures, as compared with no-growth preoperative urine cultures, are associated with a higher prevalence of postoperative urinary tract infections (UTIs). This was a retrospective cohort study. Women who underwent urogynecologic surgery were included if their preoperative clean-catch urine culture result was mixed flora or no growth. Women were excluded if they received postoperative antibiotics for reasons other than treatment of a UTI. Women were divided into two cohorts based on preoperative urine culture results (mixed flora or no growth); the prevalence of postoperative UTI was compared between cohorts. Baseline characteristics were compared using χ² or Student t tests. A logistic regression analysis then was performed. We included 282 women who were predominantly postmenopausal, white, and overweight. There were many concomitant procedures; 46% underwent a midurethral sling procedure and 68% underwent pelvic organ prolapse surgery. Preoperative urine cultures resulted as mixed flora in 192 (68%) and no growth in 90 (32%) patients. Overall, 14% were treated for a UTI postoperatively. There was no difference in the proportion of patients treated for a postoperative UTI between the two cohorts (25 mixed flora vs 13 no growth, P = 0.77). These results remained when controlling for potentially confounding variables in a logistic regression model (adjusted odds ratio 0.92, 95% confidence interval 0.43-1.96). In women with mixed-flora compared with no-growth preoperative urine cultures, there were no differences in the prevalence of postoperative UTI. The clinical practice of interpreting mixed-flora cultures as negative is appropriate.
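    A minimal sketch of the kind of adjusted comparison described above is given below; the file name and covariate names are hypothetical, since the abstract does not list the exact variables entered in the regression.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Hypothetical layout: one row per patient, binary outcome post_op_uti,
      # exposure mixed_flora (1 = mixed flora, 0 = no growth), plus candidate confounders.
      df = pd.read_csv("urogyn_cohort.csv")
      fit = smf.logit("post_op_uti ~ mixed_flora + age + bmi + sling + prolapse_repair", data=df).fit()

      ci = fit.conf_int()
      adjusted = pd.DataFrame({"OR": np.exp(fit.params),
                               "CI_low": np.exp(ci[0]),
                               "CI_high": np.exp(ci[1])})
      print(adjusted.loc["mixed_flora"])  # adjusted odds ratio for the exposure of interest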

  6. Probe-specific mixed-model approach to detect copy number differences using multiplex ligation-dependent probe amplification (MLPA)

    PubMed Central

    González, Juan R; Carrasco, Josep L; Armengol, Lluís; Villatoro, Sergi; Jover, Lluís; Yasui, Yutaka; Estivill, Xavier

    2008-01-01

    Background The MLPA method is a potentially useful semi-quantitative method to detect copy number alterations in targeted regions. In this paper, we propose a method for the normalization procedure based on a non-linear mixed model, as well as a new approach for determining the statistical significance of altered probes based on a linear mixed model. This method establishes a threshold by using different tolerance intervals that accommodate the specific random error variability observed in each test sample. Results Through simulation studies we have shown that our proposed method outperforms two existing methods that are based on simple threshold rules or iterative regression. We have illustrated the method using a controlled MLPA assay in which the targeted regions are variable in copy number in individuals suffering from different disorders such as Prader-Willi syndrome, DiGeorge syndrome or autism, and in which the proposed method showed the best performance. Conclusion Using the proposed mixed model, we are able to determine thresholds to decide whether a region is altered. These thresholds are specific to each individual, incorporating experimental variability, and result in improved sensitivity and specificity, as the examples with real data have revealed. PMID:18522760

  7. Mixed model approaches for diallel analysis based on a bio-model.

    PubMed

    Zhu, J; Weir, B S

    1996-12-01

    A MINQUE(1) procedure, which is the minimum norm quadratic unbiased estimation (MINQUE) method with 1 for all the prior values, is suggested for estimating variance and covariance components in a bio-model for diallel crosses. Unbiasedness and efficiency of estimation were compared for MINQUE(1), restricted maximum likelihood (REML) and MINQUE(θ), which uses the parameter values as the prior values. MINQUE(1) is almost as efficient as MINQUE(θ) for unbiased estimation of genetic variance and covariance components. The bio-model is efficient and robust for estimating variance and covariance components for maternal and paternal effects as well as for nuclear effects. A procedure of adjusted unbiased prediction (AUP) is proposed for predicting random genetic effects in the bio-model. The jack-knife procedure is suggested for estimating the sampling variances of the estimated variance and covariance components and of the predicted genetic effects. Worked examples are given for estimation of variance and covariance components and for prediction of genetic merits.
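    The jack-knife step can be sketched as below; the resampling unit is not specified in the abstract, so data blocks (for example, crosses of the diallel) are assumed here for illustration, and the variance-component estimator itself is treated as a black box.

      import numpy as np

      def jackknife_estimate(blocks, estimator):
          # Delete-one jack-knife mean and standard error for any scalar estimator
          # (e.g. a MINQUE(1) variance component computed from a list of data blocks).
          n = len(blocks)
          theta_full = estimator(blocks)
          theta_loo = np.array([estimator(blocks[:i] + blocks[i + 1:]) for i in range(n)])
          pseudo = n * theta_full - (n - 1) * theta_loo
          return pseudo.mean(), np.sqrt(pseudo.var(ddof=1) / n)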

  8. Empirical-statistical downscaling of reanalysis data to high-resolution air temperature and specific humidity above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-06-01

    Recently initiated observation networks in the Cordillera Blanca (Peru) provide temporally high-resolution, yet short-term, atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly National Centers for Environmental Prediction (NCEP)/National Center for Atmospheric Research (NCAR) reanalysis data to air temperature and specific humidity, measured at the tropical glacier Artesonraju (northern Cordillera Blanca). The ESD modeling procedure includes combined empirical orthogonal function and multiple regression analyses and a double cross-validation scheme for model evaluation. Apart from the selection of predictor fields, the modeling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice using both single-field and mixed-field predictors. Statistical transfer functions are derived individually for different months and times of day. The forecast skill largely depends on month and time of day, ranging from 0 to 0.8. The mixed-field predictors perform better than the single-field predictors. The ESD model shows added value, at all time scales, against simpler reference models (e.g., the direct use of reanalysis grid point values). The ESD model forecast 1960-2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation but is sensitive to the chosen predictor type.
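    A compact sketch of one such transfer function follows: principal components (EOFs) of the reanalysis predictor field feed a multiple regression against the station series, with skill assessed by cross-validation. A single validation loop is shown for brevity, whereas the study uses a double cross-validation scheme, and all names and settings here are illustrative.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import KFold
      from sklearn.metrics import r2_score

      def esd_skill(reanalysis_fields, station_series, n_eofs=5, n_folds=5):
          # reanalysis_fields: (time, gridpoints) predictor matrix for one month/hour class
          # station_series: (time,) observed temperature or humidity at the glacier station
          skills = []
          for train, test in KFold(n_splits=n_folds, shuffle=False).split(reanalysis_fields):
              pca = PCA(n_components=n_eofs).fit(reanalysis_fields[train])
              reg = LinearRegression().fit(pca.transform(reanalysis_fields[train]), station_series[train])
              pred = reg.predict(pca.transform(reanalysis_fields[test]))
              skills.append(r2_score(station_series[test], pred))
          return float(np.mean(skills))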

  9. The ultrasound-enhanced bioscouring performance of four polygalacturonase enzymes obtained from rhizopus oryzae

    USDA-ARS?s Scientific Manuscript database

    An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...

  10. Neutral kaon mixing beyond the Standard Model with n_f = 2 + 1 chiral fermions. Part 2: non-perturbative renormalisation of the ΔF = 2 four-quark operators

    NASA Astrophysics Data System (ADS)

    Boyle, Peter A.; Garron, Nicolas; Hudspith, Renwick J.; Lehner, Christoph; Lytle, Andrew T.

    2017-10-01

    We compute the renormalisation factors (Z-matrices) of the ΔF = 2 four-quark operators needed for Beyond the Standard Model (BSM) kaon mixing. We work with n_f = 2+1 flavours of Domain-Wall fermions whose chiral-flavour properties are essential to maintain a continuum-like mixing pattern. We introduce new RI-SMOM renormalisation schemes, which we argue are better behaved compared to the commonly used corresponding RI-MOM one. We find that, once converted to \overline{MS}, the Z-factors computed through these RI-SMOM schemes are in good agreement but differ significantly from the ones computed through the RI-MOM scheme. The RI-SMOM Z-factors presented here have been used to compute the BSM neutral kaon mixing matrix elements in the companion paper [1]. We argue that the renormalisation procedure is responsible for the discrepancies observed by different collaborations, and we investigate and elucidate the origin of these differences throughout this work.

  11. Generalized linear mixed models with varying coefficients for longitudinal data.

    PubMed

    Zhang, Daowen

    2004-03-01

    The routinely assumed parametric functional form in the linear predictor of a generalized linear mixed model for longitudinal data may be too restrictive to represent true underlying covariate effects. We relax this assumption by representing these covariate effects by smooth but otherwise arbitrary functions of time, with random effects used to model the correlation induced by among-subject and within-subject variation. Due to the usually intractable integration involved in evaluating the quasi-likelihood function, the double penalized quasi-likelihood (DPQL) approach of Lin and Zhang (1999, Journal of the Royal Statistical Society, Series B61, 381-400) is used to estimate the varying coefficients and the variance components simultaneously by representing a nonparametric function by a linear combination of fixed effects and random effects. A scaled chi-squared test based on the mixed model representation of the proposed model is developed to test whether an underlying varying coefficient is a polynomial of certain degree. We evaluate the performance of the procedures through simulation studies and illustrate their application with Indonesian children infectious disease data.

  12. A Modular Set of Mixed Reality Simulators for Blind and Guided Procedures

    DTIC Science & Technology

    2016-08-01

    Award number: W81XWH-14-1-0113. Title: A Modular Set of Mixed Reality Simulators for "Blind" and Guided Procedures. Reporting period: 2015 – 07/31/2016. The report describes, among other items, an editor developed to facilitate creation by non-technical educators of ITs for the set of modular simulators, and (c) a curriculum for self-study and self...

  13. Modeling and analysis of the space shuttle nose-gear tire with semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Kim, Kyun O.; Noor, Ahmed K.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The Space Shuttle Orbiter nose-gear tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell. Numerical results of the Space Shuttle Orbiter nose-gear tire model are compared with experimental measurements of the tire subjected to inflation loading.

  14. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; OBrien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations will be necessary for a study of many load combinations. The results were compared with mixed mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the expression derived. This superposition technique was only applicable if the structure exhibited a linear load/deflection behavior. Consequently, a second technique was derived which was applicable in the case of nonlinear load/deformation behavior. The technique involved calculating six unknown parameters from a set of six simultaneous linear equations with data from six nonlinear analyses to determine the energy release rates. This procedure was not time efficient and, hence, less appealing. A third procedure was developed to calculate mixed mode energy release rates as a function of delamination length. This procedure required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was then extended in three separate linear models of the local area in the vicinity of the delamination, subjected to unit loads, to obtain the distribution of G with delamination length. This set of sub-problems could be solved with linear analyses only. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  15. Minimization of required model runs in the Random Mixing approach to inverse groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco

    2017-04-01

    Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This process is repeated until a threshold in the objective function is met or insufficient changes are produced in successive iterations.
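    The interpolation idea can be sketched as follows: run the forward model at a few equally spaced mixing angles on the unit circle, interpolate the simulated values at the observation points periodically over the angle, and then search a fine grid of angles at negligible cost. The construction that keeps observed hydraulic conductivities honoured (the homogeneous-conditions fields) and the iteration over successive field pairs are omitted, and all names below are illustrative.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def best_mixture(field_a, field_b, forward_model, observed, n_runs=8, n_search=3600):
          # field_a, field_b: two random fields on the same grid; forward_model maps a field
          # to simulated heads/concentrations at the observation locations.
          thetas = np.linspace(0.0, 2.0 * np.pi, n_runs, endpoint=False)
          sims = np.array([forward_model(np.cos(t) * field_a + np.sin(t) * field_b) for t in thetas])
          # Periodic interpolation of each observation-point response over the mixing angle
          spline = CubicSpline(np.append(thetas, 2.0 * np.pi),
                               np.vstack([sims, sims[:1]]), axis=0, bc_type="periodic")
          fine = np.linspace(0.0, 2.0 * np.pi, n_search)
          objective = ((spline(fine) - observed) ** 2).sum(axis=1)
          t_best = fine[np.argmin(objective)]
          return np.cos(t_best) * field_a + np.sin(t_best) * field_b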

  16. Additive mixed effect model for recurrent gap time data.

    PubMed

    Ding, Jieli; Sun, Liuquan

    2017-04-01

    Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data; the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinical study on chronic granulomatous disease is provided.

  17. Factors that influence length of stay for in-patient gynaecology surgery: is the Case Mix Group (CMG) or type of procedure more important?

    PubMed

    Carey, Mark S; Victory, Rahi; Stitt, Larry; Tsang, Nicole

    2006-02-01

    To compare the association between the Case Mix Group (CMG) code and length of stay (LOS) with the association between the type of procedure and LOS in patients admitted for gynaecology surgery. We examined the records of women admitted for surgery in CMG 579 (major uterine/adnexal procedure, no malignancy) or 577 (major surgery ovary/adnexa with malignancy) between April 1997 and March 1999. Factors thought to influence LOS included age, weight, American Society of Anesthesiologists (ASA) score, physician, day of the week on which surgery was performed, and procedure type. Procedures were divided into six categories, four for CMG 579 and two for CMG 577. Data were abstracted from the hospital information costing system (T2 system) and by retrospective chart review. Multivariable analysis was performed using linear regression with backwards elimination. There were 606 patients in CMG 579 and 101 patients in CMG 577, and the corresponding median LOS was four days (range 1-19) for CMG 579 and nine days (range 3-30) for CMG 577. Combined analysis of both CMGs 577 and 579 revealed the following factors as highly significant determinants of LOS: procedure, age, physician, and ASA score. Although confounded by procedure type, the CMG did not significantly account for differences in LOS in the model if procedure was considered. Pairwise comparisons of procedure categories were all found to be statistically significant, even when controlled for other important variables. The type of procedure better accounts for differences in LOS by describing six statistically distinct procedure groups rather than the traditional two CMGs. It is reasonable therefore to consider changing the current CMG codes for gynaecology to a classification based on the type of procedure.

  18. Study of the Fine-Scale Structure of Cumulus Clouds.

    NASA Astrophysics Data System (ADS)

    Rodi, Alfred R.

    Small cumulus clouds are studied using data from an instrumented aircraft. Two aspects of the role of turbulence and mixing in these clouds are examined: (1) the effect of mixing on the droplet size distribution, and (2) the effect of turbulence on the spread of ice crystal plumes artificially generated with cloud seeding agents. The data were collected in the course of the Bureau of Reclamation's High Plains Cooperative Experiment (HIPLEX) in Montana in the summers of 1978-80 by the University of Wyoming King Air aircraft. The shape of the cloud droplet spectrum as measured by the Particle Measuring Systems (PMS) Forward Scattering Spectrometer Probe (FSSP) is found to be very sensitive to entrainment of dry environmental air into the cloud. The narrowest cloud droplet spectra, the highest droplet concentrations, and the largest sized droplets are found in the cloud parcels which are least affected by entrainment. The most dilute regions of cloud exhibit the broadest spectra, which are frequently bimodal. A procedure for measuring cloud inhomogeneity from FSSP data is developed. The data show that the clouds are extremely inhomogeneous in structure. Current models of inhomogeneous mixing are shown to be inadequate in explaining droplet spectrum effects. However, the inhomogeneous models characterize the data far better than classical models of droplet spectrum evolution. High resolution measurements of ice crystals from the PMS two-dimensional imaging probe are used to characterize the spread of the ice crystal plume in seeded clouds. Plume spread is found to be a very complicated process which is in some cases dominated by organized motions in the cloud. As a result, classical diffusion theory is often inadequate to predict plume growth. The turbulent diffusion that occurs is shown to be best modeled using the relative diffusion concept of Richardson. Procedures for adapting aircraft data to the relative diffusion model are developed, including techniques for converting the aircraft Eulerian data into estimates of Lagrangian correlations. Predictions of the model are compared with observations of plume growth. A detailed analysis of errors in the air motion sensing system on the aircraft is presented. A procedure is developed to estimate the errors due to aircraft gyroscope sensitivity to horizontal accelerations.

  19. Study on processing immiscible materials in zero gravity

    NASA Technical Reports Server (NTRS)

    Reger, J. L.; Mendelson, R. A.

    1975-01-01

    An experimental investigation was conducted to evaluate mixing immiscible metal combinations under several process conditions. Under one gravity, these included thermal processing, thermal plus electromagnetic mixing, and thermal plus acoustic mixing. The same process methods were applied during free fall at the MSFC drop tower facility. The design of the drop tower apparatus providing the electromagnetic and acoustic mixing equipment is included, and a thermal model was prepared to design the specimen and cooling procedure. The material systems studied were Ca-La, Cd-Ga and Al-Bi; evaluation of the processed samples included morphology and electronic property measurements. The morphology was characterized using optical and scanning electron microscopy and microprobe analyses. Electronic property characterization of the superconducting transition temperatures was made using an impedance change-tuned coil method.

  20. Data-driven RANS for simulations of large wind farms

    NASA Astrophysics Data System (ADS)

    Iungo, G. V.; Viola, F.; Ciri, U.; Rotea, M. A.; Leonardi, S.

    2015-06-01

    In the wind energy industry there is a growing need for real-time predictions of wind turbine wake flows in order to optimize power plant control and inhibit detrimental wake interactions. To this aim, a data-driven RANS approach is proposed in order to achieve very low computational costs and adequate accuracy through the data assimilation procedure. The RANS simulations are implemented with a classical Boussinesq hypothesis and a mixing length turbulence closure model, which is calibrated through the available data. High-fidelity LES simulations of a utility-scale wind turbine operating with different tip speed ratios are used as database. It is shown that the mixing length model for the RANS simulations can be calibrated accurately through the Reynolds stress of the axial and radial velocity components, and the gradient of the axial velocity in the radial direction. It is found that the mixing length is roughly invariant in the very near wake, then it increases linearly with the downstream distance in the diffusive region. The variation rate of the mixing length in the downstream direction is proposed as a criterion to detect the transition between near wake and transition region of a wind turbine wake. Finally, RANS simulations were performed with the calibrated mixing length model, and a good agreement with the LES simulations is observed.
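    The calibration relation implied above can be sketched by equating the modelled and LES-resolved shear stress; the array names and the small regularization constant are mine, not details of the cited implementation.

      import numpy as np

      def mixing_length_from_les(reynolds_stress_xr, dudr, eps=1e-12):
          # Prandtl mixing-length relation: -<u'_x u'_r> = l_m^2 |dU/dr| dU/dr
          lm_sq = -reynolds_stress_xr / (np.abs(dudr) * dudr + eps)
          return np.sqrt(np.clip(lm_sq, 0.0, None))

      def eddy_viscosity(l_m, dudr):
          # Mixing-length eddy viscosity used to close the calibrated RANS equations
          return l_m ** 2 * np.abs(dudr)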

  1. Differentiation of mixed biological traces in sexual assaults using DNA fragment analysis

    PubMed Central

    Apostolov, Аleksandar

    2014-01-01

    During the investigation of sexual abuse, it is not rare that mixed genetic material from two or more persons is detected. In such cases, successful profiling can be achieved using DNA fragment analysis, resulting in individual genetic profiles of offenders and their victims. This has led to an increase in the percentage of identified perpetrators of sexual offenses. The classic and modified genetic models used allowed us to refine and implement appropriate extraction, polymerase chain reaction and electrophoretic procedures with individual assessment and approach to conducting research. Testing mixed biological traces using DNA fragment analysis appears to be the only opportunity for identifying perpetrators in gang rapes. PMID:26019514

  2. Study of thermodynamic properties of liquid binary alloys by a pseudopotential method

    NASA Astrophysics Data System (ADS)

    Vora, Aditya M.

    2010-11-01

    On the basis of the Percus-Yevick hard-sphere model as a reference system and the Gibbs-Bogoliubov inequality, a thermodynamic perturbation method is applied with the use of the well-known model potential. By applying a variational method, the hard-core diameters are found which correspond to a minimum free energy. With this procedure, the thermodynamic properties such as the internal energy, entropy, Helmholtz free energy, entropy of mixing, and heat of mixing are computed for liquid NaK binary systems. The influence of the local-field correction functions of Hartree, Taylor, Ichimaru-Utsumi, Farid-Heine-Engel-Robertson, and Sarkar-Sen-Haldar-Roy is also investigated. The computed excess entropy is in agreement with available experimental data in the case of liquid alloys, whereas the agreement for the heat of mixing is poor. This may be due to the sensitivity of the latter to the potential parameters and dielectric function.
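    Schematically, and in notation that is mine, the variational step minimizes the Gibbs-Bogoliubov upper bound on the free energy with respect to the hard-sphere reference diameters:

      F \;\le\; F_{\mathrm{HS}}(\sigma) + \big\langle H - H_{\mathrm{HS}} \big\rangle_{\mathrm{HS}},
      \qquad
      \sigma^{*} = \arg\min_{\sigma} \Big[ F_{\mathrm{HS}}(\sigma) + \big\langle H - H_{\mathrm{HS}} \big\rangle_{\mathrm{HS}} \Big]

    with the entropy and heat of mixing then obtained from the minimized free energies of the alloy and of the pure components.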

  3. Improved estimation of sediment source contributions by concentration-dependent Bayesian isotopic mixing model

    NASA Astrophysics Data System (ADS)

    Ram Upadhayay, Hari; Bodé, Samuel; Griepentrog, Marco; Bajracharya, Roshan Man; Blake, Will; Cornelis, Wim; Boeckx, Pascal

    2017-04-01

    The implementation of compound-specific stable isotope (CSSI) analyses of biotracers (e.g. fatty acids, FAs) as constraints on sediment-source contributions has become increasingly relevant to understand the origin of sediments in catchments. CSSI fingerprinting of sediment uses the CSSI signature of a biotracer as input to an isotopic mixing model (IMM) to apportion source soil contributions. So far, source studies have relied on the linear mixing assumptions of the CSSI signatures of sources to the sediment without accounting for potential effects of source biotracer concentration. Here we evaluated the effect of FA concentration in sources on the accuracy of source contribution estimates in artificial soil mixtures of three well-separated land use sources. Soil samples from the land use sources were mixed to create three groups of artificial mixtures with known source contributions. Sources and artificial mixtures were analysed for δ13C of FAs using gas chromatography-combustion-isotope ratio mass spectrometry. The source contributions to the mixtures were estimated using MixSIAR, a Bayesian isotopic mixing model, both with and without concentration dependence. The concentration-dependent MixSIAR provided the closest estimates to the known artificial mixture source contributions (mean absolute error, MAE = 10.9%, and standard error, SE = 1.4%). In contrast, the concentration-independent MixSIAR with post-mixing correction of tracer proportions based on the aggregated FA concentration of the sources biased the source contributions (MAE = 22.0%, SE = 3.4%). This study highlights the importance of accounting for the potential effect of source FA concentration on isotopic mixing in sediments, which adds realism to the mixing model and allows more accurate estimates of the contributions of sources to the mixture. The potential influence of FA concentration on the CSSI signature of sediments is an important underlying factor that determines whether the isotopic signature of a given source is observable even after equilibrium. Inclusion of the FA concentrations of the sources in the IMM formulation should therefore be standard procedure for accurate estimation of source contributions. The post-model correction approach that dominates CSSI fingerprinting causes bias, especially if the FA concentrations of the sources differ substantially.
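    For reference, the concentration-weighted mixing relation that a concentration-dependent model of this kind encodes can be written as follows; the notation is mine and follows the standard concentration-dependence formulation rather than the authors' exact implementation.

      \delta^{13}\mathrm{C}_{\mathrm{mix},j}
        = \frac{\sum_{k} p_{k}\, C_{k,j}\, \delta^{13}\mathrm{C}_{k,j}}
               {\sum_{k} p_{k}\, C_{k,j}},
      \qquad \sum_{k} p_{k} = 1

    where p_k is the proportional contribution of source k, C_{k,j} is the concentration of fatty acid j in source k, and δ13C_{k,j} is its isotopic signature; setting all C_{k,j} equal recovers concentration-independent linear mixing.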

  4. DNS and LES of a Shear-Free Mixing Layer

    NASA Technical Reports Server (NTRS)

    Knaepen, B.; Debliquy, O.; Carati, D.

    2003-01-01

    The purpose of this work is twofold. First, given the computational resources available today, it is possible to reach, using DNS, higher Reynolds numbers than in Briggs et al.. In the present study, the microscale Reynolds numbers reached in the low- and high-energy homogeneous regions are, respectively, 32 and 69. The results reported earlier can thus be complemented and their robustness in the presence of increased turbulence studied. The second aim of this work is to perform a detailed and documented LES of the shear-free mixing layer. In that respect, the creation of a DNS database at higher Reynolds number is necessary in order to make meaningful LES assessments. From the point of view of LES, the shear-free mixing-layer is interesting since it allows one to test how traditional LES models perform in the presence of an inhomogeneity without having to deal with difficult numerical issues. Indeed, as argued in Briggs et al., it is possible to use a spectral code to study the shear-free mixing layer and one can thus focus on the accuracy of the modelling while avoiding contamination of the results by commutation errors etc. This paper is organized as follows. First we detail the initialization procedure used in the simulation. Since the flow is not statistically stationary, this initialization procedure has a fairly strong influence on the evolution. Although we will focus here on the shear-free mixing layer, the method proposed in the present work can easily be used for other flows with one inhomogeneous direction. The next section of the article is devoted to the description of the DNS. All the relevant parameters are listed and comparison with the Veeravalli & Warhaft experiment is performed. The section on the LES of the shear-free mixing layer follows. A detailed comparison between the filtered DNS data and the LES predictions is presented. It is shown that simple eddy viscosity models perform very well for the present test case, most probably because the flow seems to be almost isotropic in the small-scale range that is not resolved by the LES.

  5. Influence of mixing procedure on robustness of self-consolidating concrete.

    DOT National Transportation Integrated Search

    2014-08-01

    Self-Consolidating Concrete is, in the fresh state, more sensitive to small variations in the constituent elements and the mixing : procedure compared to Conventional Vibrated Concrete. Several studies have been performed recently to identify robustn...

  6. Designing Recycled Hot Mix Asphalt Mixtures Using Superpave Technology

    DOT National Transportation Integrated Search

    1997-01-01

    Mix design procedures for recycled asphalt pavements require the selection of : virgin asphalt binder or recycling agent. This research project was undertaken : to develop a procedure for selecting the performance grade (PG) of virgin : asphalt binde...

  7. Constrained inference in mixed-effects models for longitudinal data with application to hearing loss.

    PubMed

    Davidov, Ori; Rosen, Sophia

    2011-04-01

    In medical studies, endpoints are often measured for each patient longitudinally. The mixed-effects model has been a useful tool for the analysis of such data. There are situations in which the parameters of the model are subject to some restrictions or constraints. For example, in hearing loss studies, we expect hearing to deteriorate with time. This means that hearing thresholds which reflect hearing acuity will, on average, increase over time. Therefore, the regression coefficients associated with the mean effect of time on hearing ability will be constrained. Such constraints should be accounted for in the analysis. We propose maximum likelihood estimation procedures, based on the expectation-conditional maximization either algorithm, to estimate the parameters of the model while accounting for the constraints on them. The proposed methods improve, in terms of mean square error, on the unconstrained estimators. In some settings, the improvement may be substantial. Hypotheses testing procedures that incorporate the constraints are developed. Specifically, likelihood ratio, Wald, and score tests are proposed and investigated. Their empirical significance levels and power are studied using simulations. It is shown that incorporating the constraints improves the mean squared error of the estimates and the power of the tests. These improvements may be substantial. The methodology is used to analyze a hearing loss study.

  8. Mixing the Green-Ampt model and Curve Number method as an empirical tool for rainfall excess estimation in small ungauged catchments.

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-04-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular rainfall-runoff model that is widely used to estimate direct runoff from small and ungauged basins. The SCS-CN method is a simple and valuable approach to estimate the total stream-flow volume generated by a storm rainfall, but it was developed to be used with daily rainfall data. To overcome this drawback, we propose to include the Green-Ampt (GA) infiltration model in a mixed procedure, referred to as CN4GA (Curve Number for Green-Ampt), aiming to distribute in time the information provided by the SCS-CN method so as to provide estimates of sub-daily incremental rainfall excess. For a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model. The proposed procedure was evaluated by analyzing 100 rainfall-runoff events observed in four small catchments of varying size. CN4GA appears to be an encouraging tool for predicting the net rainfall peak and duration values and has shown, at least for the test cases considered in this study, better agreement with observed hydrographs than the classic SCS-CN method.
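    A minimal sketch of the coupling idea follows, assuming millimetre units, an explicit capacity-limited Green-Ampt step, and illustrative parameter values and search bounds; it is not the authors' code.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def scs_cn_excess(p_total_mm, cn):
          # Total direct runoff (mm) from the SCS-CN method with Ia = 0.2*S
          s = 25400.0 / cn - 254.0
          ia = 0.2 * s
          return 0.0 if p_total_mm <= ia else (p_total_mm - ia) ** 2 / (p_total_mm - ia + s)

      def green_ampt_infiltration(ks_mm_h, rain_mm, dt_h, psi_mm=110.0, dtheta=0.3):
          # Cumulative infiltration (mm) over a hyetograph, explicit capacity-limited Green-Ampt
          F = 1e-6  # small initial value to avoid division by zero
          for r in rain_mm:
              capacity = ks_mm_h * (1.0 + psi_mm * dtheta / F) * dt_h
              F += min(capacity, r)  # cannot infiltrate more than the rainfall in the step
          return F

      def calibrate_ks(rain_mm, dt_h, cn):
          # Find Ks so that Green-Ampt infiltration matches the SCS-CN abstraction (the CN4GA idea)
          p = float(np.sum(rain_mm))
          target = p - scs_cn_excess(p, cn)
          loss = lambda ks: (green_ampt_infiltration(ks, rain_mm, dt_h) - target) ** 2
          return minimize_scalar(loss, bounds=(1e-4, 500.0), method="bounded").x

    The calibrated Ks then drives the Green-Ampt model at the sub-daily time step, distributing the SCS-CN rainfall excess in time as the abstract describes.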

  9. Pressure ratio effects on self-similar scalar mixing of high-pressure turbulent jets in a pressurized volume

    NASA Astrophysics Data System (ADS)

    Ruggles, Adam; Pickett, Lyle; Frank, Jonathan

    2014-11-01

    Many real-world combustion devices model fuel scalar mixing by assuming the self-similar argument established in atmospheric free jets. This allows simple prediction of the mean and rms fuel scalar fields to describe the mixing. This approach has been adopted in supercritical liquid injections found in diesel engines where the liquid behaves as a dense fluid. The effect of pressure ratio (injection to ambient), when the ambient is greater than atmospheric pressure, on the self-similar collapse has not been well characterized, particularly the effect upon mixing constants, jet spreading rates, and virtual origins. Changes in these self-similar parameters control the reproduction of the scalar mixing statistics. This experiment investigates the steady state mixing of high pressure ethylene jets in a pressurized pure nitrogen environment for various pressure ratios and jet orifice diameters. Quantitative laser Rayleigh scattering imaging was performed utilizing a calibration procedure to account for the pressure effects upon scattering interference within the high-pressure vessel.

  10. Critical review of ADOT's hot mix asphalt specifications : final report 630.

    DOT National Transportation Integrated Search

    2008-12-01

    The Arizona Department of Transportation (ADOT) has developed specifications and procedures to ensure the quality of the hot mix asphalt materials purchased by the Department. The document recording these specifications and procedures is the Stan...

  11. A 1H NMR-based metabolomics approach to evaluate the geographical authenticity of herbal medicine and its application in building a model effectively assessing the mixing proportion of intentional admixtures: A case study of Panax ginseng.

    PubMed

    Nguyen, Huy Truong; Lee, Dong-Kyu; Choi, Young-Geun; Min, Jung-Eun; Yoon, Sang Jun; Yu, Yun-Hyun; Lim, Johan; Lee, Jeongmi; Kwon, Sung Won; Park, Jeong Hill

    2016-05-30

    Ginseng, the root of Panax ginseng has long been the subject of adulteration, especially regarding its origins. Here, 60 ginseng samples from Korea and China initially displayed similar genetic makeup when investigated by DNA-based technique with 23 chloroplast intergenic space regions. Hence, (1)H NMR-based metabolomics with orthogonal projections on the latent structure-discrimination analysis (OPLS-DA) were applied and successfully distinguished between samples from two countries using seven primary metabolites as discrimination markers. Furthermore, to recreate adulteration in reality, 21 mixed samples of numerous Korea/China ratios were tested with the newly built OPLS-DA model. The results showed satisfactory separation according to the proportion of mixing. Finally, a procedure for assessing mixing proportion of intentionally blended samples that achieved good predictability (adjusted R(2)=0.8343) was constructed, thus verifying its promising application to quality control of herbal foods by pointing out the possible mixing ratio of falsified samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Potentials of Mean Force With Ab Initio Mixed Hamiltonian Models of Solvation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dupuis, Michel; Schenter, Gregory K.; Garrett, Bruce C.

    2003-08-01

    We give an account of a computationally tractable and efficient procedure for the calculation of potentials of mean force using mixed Hamiltonian models of electronic structure where quantum subsystems are described with computationally intensive ab initio wavefunctions. The mixed Hamiltonian is mapped into an all-classical Hamiltonian that is amenable to a thermodynamic perturbation treatment for the calculation of free energies. A small number of statistically uncorrelated (solute-solvent) configurations are selected from the Monte Carlo random walk generated with the all-classical Hamiltonian approximation. Those are used in the averaging of the free energy using the mixed quantum/classical Hamiltonian. The methodology ismore » illustrated for the micro-solvated SN2 substitution reaction of methyl chloride by hydroxide. We also compare the potential of mean force calculated with the above protocol with an approximate formalism, one in which the potential of mean force calculated with the all-classical Hamiltonian is simply added to the energy of the isolated (non-solvated) solute along the reaction path. Interestingly the latter approach is found to be in semi-quantitative agreement with the full mixed Hamiltonian approximation.« less

  13. Development of a diagnosis- and procedure-based risk model for 30-day outcome after pediatric cardiac surgery.

    PubMed

    Crowe, Sonya; Brown, Kate L; Pagel, Christina; Muthialu, Nagarajan; Cunningham, David; Gibbs, John; Bull, Catherine; Franklin, Rodney; Utley, Martin; Tsang, Victor T

    2013-05-01

    The study objective was to develop a risk model incorporating diagnostic information to adjust for case-mix severity during routine monitoring of outcomes for pediatric cardiac surgery. Data from the Central Cardiac Audit Database for all pediatric cardiac surgery procedures performed in the United Kingdom between 2000 and 2010 were included: 70% for model development and 30% for validation. Units of analysis were 30-day episodes after the first surgical procedure. We used logistic regression for 30-day mortality. Risk factors considered included procedural information based on Central Cardiac Audit Database "specific procedures," diagnostic information defined by 24 "primary" cardiac diagnoses and "univentricular" status, and other patient characteristics. Of the 27,140 30-day episodes in the development set, 25,613 were survivals, 834 were deaths, and 693 were of unknown status (mortality, 3.2%). The risk model includes procedure, cardiac diagnosis, univentricular status, age band (neonate, infant, child), continuous age, continuous weight, presence of non-Down syndrome comorbidity, bypass, and year of operation 2007 or later (because of decreasing mortality). A risk score was calculated for 95% of cases in the validation set (weight missing in 5%). The model discriminated well; the C-index for validation set was 0.77 (0.81 for post-2007 data). Removal of all but procedural information gave a reduced C-index of 0.72. The model performed well across the spectrum of predicted risk, but there was evidence of underestimation of mortality risk in neonates undergoing operation from 2007. The risk model performs well. Diagnostic information added useful discriminatory power. A future application is risk adjustment during routine monitoring of outcomes in the United Kingdom to assist quality assurance. Copyright © 2013 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
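
    As a point of reference for the modelling steps described above (logistic regression for 30-day mortality, a 70/30 development/validation split, and discrimination summarized by the C-index), the hedged Python sketch below reproduces that workflow on synthetic data; the covariates and coefficients are stand-ins, not the Central Cardiac Audit Database variables.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 20000
      X = np.column_stack([
          rng.integers(0, 3, n),            # age band (neonate/infant/child), stand-in
          rng.normal(10.0, 5.0, n),         # weight in kg, stand-in
          rng.integers(0, 2, n),            # univentricular status, stand-in
          rng.integers(0, 2, n),            # non-Down comorbidity, stand-in
      ])
      logit = -4.0 + 0.8 * X[:, 2] + 0.5 * X[:, 3] - 0.03 * X[:, 1]
      y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))   # 30-day death indicator

      # 70% development / 30% validation split, as in the study design
      X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
      model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
      c_index = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
      print(f"validation C-index: {c_index:.2f}")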

  14. Drug awareness in adolescents attending a mental health service: analysis of longitudinal data.

    PubMed

    Arnau, Jaume; Bono, Roser; Díaz, Rosa; Goti, Javier

    2011-11-01

    One of the procedures used most recently with longitudinal data is linear mixed models. In the context of health research the increasing number of studies that now use these models bears witness to the growing interest in this type of analysis. This paper describes the application of linear mixed models to a longitudinal study of a sample of Spanish adolescents attending a mental health service, the aim being to investigate their knowledge about the consumption of alcohol and other drugs. More specifically, the main objective was to compare the efficacy of a motivational interviewing programme with a standard approach to drug awareness. The models used to analyse the overall indicator of drug awareness were as follows: (a) unconditional linear growth curve model; (b) growth model with subject-associated variables; and (c) individual curve model with predictive variables. The results showed that awareness increased over time and that the variable 'schooling years' explained part of the between-subjects variation. The effect of motivational interviewing was also significant.
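
    A minimal sketch of the kind of growth-curve model listed above, fitted with statsmodels on simulated data, is shown below; the variable names (awareness, schooling_years, motivational) and effect sizes are illustrative assumptions, not the study's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      subjects, waves = 60, 4
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(subjects), waves),
          "time": np.tile(np.arange(waves), subjects),
          "schooling_years": np.repeat(rng.integers(6, 13, subjects), waves),
          "motivational": np.repeat(rng.integers(0, 2, subjects), waves),
      })
      u0 = np.repeat(rng.normal(0.0, 1.0, subjects), waves)   # random intercepts
      u1 = np.repeat(rng.normal(0.0, 0.3, subjects), waves)   # random time slopes
      df["awareness"] = (2.0 + 0.5 * df["time"] + 0.2 * df["schooling_years"]
                         + 0.4 * df["motivational"] * df["time"]
                         + u0 + u1 * df["time"] + rng.normal(0.0, 0.5, len(df)))

      # growth model with random intercept and slope plus subject-level covariates
      fit = smf.mixedlm("awareness ~ time * motivational + schooling_years",
                        df, groups=df["subject"], re_formula="~time").fit()
      print(fit.summary())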

  15. Neutral kaon mixing beyond the Standard Model with n f = 2 + 1 chiral fermions. Part 2: non perturbative renormalisation of the ΔF = 2 four-quark operators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyle, Peter A.; Garron, Nicolas; Hudspith, Renwick J.

    We compute the renormalisation factors (Z-matrices) of the ΔF = 2 four-quark operators needed for Beyond the Standard Model (BSM) kaon mixing. We work with nf = 2+1 flavours of Domain-Wall fermions whose chiral-flavour properties are essential to maintain a continuum-like mixing pattern. We introduce new RI-SMOM renormalisation schemes, which we argue are better behaved compared to the commonly-used corresponding RI-MOM one. We find that, once converted to MS¯, the Z-factors computed through these RI-SMOM schemes are in good agreement but differ significantly from the ones computed through the RI-MOM scheme. The RI-SMOM Z-factors presented here have been used to compute the BSM neutral kaon mixing matrix elements in the companion paper. In conclusion, we argue that the renormalisation procedure is responsible for the discrepancies observed by different collaborations; we will investigate and elucidate the origin of these differences throughout this work.

  16. Functional mixed effects spectral analysis

    PubMed Central

    KRAFTY, ROBERT T.; HALL, MARTICA; GUO, WENSHENG

    2011-01-01

    SUMMARY In many experiments, time series data can be collected from multiple units and multiple time series segments can be collected from the same unit. This article introduces a mixed effects Cramér spectral representation which can be used to model the effects of design covariates on the second-order power spectrum while accounting for potential correlations among the time series segments collected from the same unit. The transfer function is composed of a deterministic component to account for the population-average effects and a random component to account for the unit-specific deviations. The resulting log-spectrum has a functional mixed effects representation where both the fixed effects and random effects are functions in the frequency domain. It is shown that, when the replicate-specific spectra are smooth, the log-periodograms converge to a functional mixed effects model. A data-driven iterative estimation procedure is offered for the periodic smoothing spline estimation of the fixed effects, penalized estimation of the functional covariance of the random effects, and unit-specific random effects prediction via the best linear unbiased predictor. PMID:26855437
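
    The raw ingredient of this approach, segment-wise log-periodograms that the functional mixed effects model then smooths into fixed and random effect curves, can be sketched as follows; the simulated segments and AR(1)-style colouring are assumptions for illustration only.

      import numpy as np
      from scipy.signal import periodogram

      rng = np.random.default_rng(9)
      fs, n = 100.0, 1024
      segments = []
      for _ in range(4):                     # four segments from the same unit
          x = rng.normal(size=n)
          for i in range(1, n):              # simple AR(1)-style colouring
              x[i] += 0.7 * x[i - 1]
          segments.append(x)

      log_pgrams = []
      for x in segments:
          freqs, pxx = periodogram(x, fs=fs)
          log_pgrams.append(np.log(pxx[1:]))  # drop the zero frequency
      log_pgrams = np.array(log_pgrams)
      print("unit-average log-spectrum, first 5 frequencies:",
            np.round(log_pgrams.mean(axis=0)[:5], 2))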

  17. Neutral kaon mixing beyond the Standard Model with n f = 2 + 1 chiral fermions. Part 2: non perturbative renormalisation of the ΔF = 2 four-quark operators

    DOE PAGES

    Boyle, Peter A.; Garron, Nicolas; Hudspith, Renwick J.; ...

    2017-10-10

    We compute the renormalisation factors (Z-matrices) of the ΔF = 2 four-quark operators needed for Beyond the Standard Model (BSM) kaon mixing. We work with nf = 2+1 flavours of Domain-Wall fermions whose chiral-flavour properties are essential to maintain a continuum-like mixing pattern. We introduce new RI-SMOM renormalisation schemes, which we argue are better behaved compared to the commonly-used corresponding RI-MOM one. We find that, once converted to MS¯, the Z-factors computed through these RI-SMOM schemes are in good agreement but differ significantly from the ones computed through the RI-MOM scheme. The RI-SMOM Z-factors presented here have been used to compute the BSM neutral kaon mixing matrix elements in the companion paper. In conclusion, we argue that the renormalisation procedure is responsible for the discrepancies observed by different collaborations; we will investigate and elucidate the origin of these differences throughout this work.

  18. Using mixed methods effectively in prevention science: designs, procedures, and examples.

    PubMed

    Zhang, Wanqing; Watanabe-Galloway, Shinobu

    2014-10-01

    There is growing interest in using a combination of quantitative and qualitative methods to generate evidence about the effectiveness of health prevention, services, and intervention programs. With the emerging importance of mixed methods research across the social and health sciences, there has been an increased recognition of the value of using mixed methods for addressing research questions in different disciplines. We illustrate the mixed methods approach in prevention research, showing design procedures used in several published research articles. In this paper, we focused on two commonly used mixed methods designs: concurrent and sequential mixed methods designs. We discuss the types of mixed methods designs, the reasons for, and advantages of using a particular type of design, and the procedures of qualitative and quantitative data collection and integration. The studies reviewed in this paper show that the essence of qualitative research is to explore complex dynamic phenomena in prevention science, and the advantage of using mixed methods is that quantitative data can yield generalizable results and qualitative data can provide extensive insights. However, the emphasis on methodological rigor in a mixed methods application also requires considerable expertise in both qualitative and quantitative methods. Besides the necessary skills and effective interdisciplinary collaboration, this combined approach also requires open-mindedness and reflection from the involved researchers.

  19. Development of a base mix design procedure.

    DOT National Transportation Integrated Search

    1974-01-01

    This paper reports an investigation into the development of a compaction procedure for base mixes containing aggregates of 1 1/2" (38 mm) maximum size. The specimens were made in a manner similar to that given in ASTM Designation 1561-71, except that...

  20. Effects of mixing procedure itself on the structure, viscosity, and spreadability of white petrolatum and salicylic acid ointment and the skin permeation of salicylic acid.

    PubMed

    Kitagawa, Shuji; Fujiwara, Megumi; Okinaka, Yuta; Yutani, Reiko; Teraoka, Reiko

    2015-01-01

    White petrolatum is a mixture of solid and liquid hydrocarbons and its structure can be affected by shear stress. Thus, shear stress might also induce changes in its rheological properties. In this study, we used polarization microscopy to investigate how different mixing methods affect the structure of white petrolatum. We used two different mixing methods, mixing using a rotation/revolution mixer and mixing using an ointment slab and an ointment spatula. The extent of the fragmentation and dispersal of the solid portion of white petrolatum depended on the mixing conditions. Next, we examined the changes in the structure of a salicylic acid ointment, in which white petrolatum was used as a base, induced by mixing and found that the salicylic acid solids within the ointment were also dispersed. In addition to these structural changes, the viscosity and thixotropic behavior of both test substances also decreased in a mixing condition-dependent manner. The reductions in these parameters were most marked after mixing with a rotation/revolution mixer, and similar results were obtained for spreadability. We also investigated the effects of mixing procedure on the skin accumulation and permeation of salicylic acid. Both were increased approximately three-fold after mixing. Little difference in skin accumulation or permeation was detected between the two mixing methods. These findings indicate that mixing procedures themselves affect the utility and physiological effects of white petrolatum-based ointments. Therefore, these effects should be considered when mixing is required for the clinical use of petrolatum-based ointments.

  1. On testing an unspecified function through a linear mixed effects model with multiple variance components

    PubMed Central

    Wang, Yuanjia; Chen, Huaihou

    2012-01-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10⁸ simulations) and asymptotic approximation may be unreliable and conservative. PMID:23020801

  2. On testing an unspecified function through a linear mixed effects model with multiple variance components.

    PubMed

    Wang, Yuanjia; Chen, Huaihou

    2012-12-01

    We examine a generalized F-test of a nonparametric function through penalized splines and a linear mixed effects model representation. With a mixed effects model representation of penalized splines, we imbed the test of an unspecified function into a test of some fixed effects and a variance component in a linear mixed effects model with nuisance variance components under the null. The procedure can be used to test a nonparametric function or varying-coefficient with clustered data, compare two spline functions, test the significance of an unspecified function in an additive model with multiple components, and test a row or a column effect in a two-way analysis of variance model. Through a spectral decomposition of the residual sum of squares, we provide a fast algorithm for computing the null distribution of the test, which significantly improves the computational efficiency over bootstrap. The spectral representation reveals a connection between the likelihood ratio test (LRT) in a multiple variance components model and a single component model. We examine our methods through simulations, where we show that the power of the generalized F-test may be higher than the LRT, depending on the hypothesis of interest and the true model under the alternative. We apply these methods to compute the genome-wide critical value and p-value of a genetic association test in a genome-wide association study (GWAS), where the usual bootstrap is computationally intensive (up to 10(8) simulations) and asymptotic approximation may be unreliable and conservative. © 2012, The International Biometric Society.

  3. Impact of Lead Time and Safety Factor in Mixed Inventory Models with Backorder Discounts

    NASA Astrophysics Data System (ADS)

    Lo, Ming-Cheng; Chao-Hsien Pan, Jason; Lin, Kai-Cing; Hsu, Jia-Wei

    This study investigates the impact of safety factor on the continuous review inventory model involving controllable lead time with mixture of backorder discount and partial lost sales. The objective is to minimize the expected total annual cost with respect to order quantity, backorder price discount, safety factor and lead time. A model with normal demand is also discussed. Numerical examples are presented to illustrate the procedures of the algorithms and the effects of parameters on the result of the proposed models are analyzed.

  4. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    PubMed Central

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  5. Bright, dark, and mixed vector soliton solutions of the general coupled nonlinear Schrödinger equations.

    PubMed

    Agalarov, Agalar; Zhulego, Vladimir; Gadzhimuradov, Telman

    2015-04-01

    The reduction procedure for the general coupled nonlinear Schrödinger (GCNLS) equations with four-wave mixing terms is proposed. It is shown that the GCNLS system is equivalent to the well known integrable families of the Manakov and Makhankov U(n,m)-vector models. This equivalence allows us to construct bright-bright and dark-dark solitons and a quasibreather-dark solution with unconventional dynamics: the density of the first component oscillates in space and time, whereas the density of the second component does not. The collision properties of solitons are also studied.

  6. Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.

    2002-01-01

    In this work we analyze the issue of the formation of condensation trails ("contrails") in the near-field of an aircraft wake. The basic configuration consists of an exhaust engine jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple micro-physics model for ice growth has been used to couple ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.

  7. Optimal Controller Design for the Microgravity Isolation Mount (MIM)

    NASA Technical Reports Server (NTRS)

    Hampton, R. David

    1998-01-01

    H2 controllers, when designed using an appropriate design model and carefully chosen frequency weightings, appear to provide robust performance and robust stability for the Microgravity Isolation Mount (MIM). The STS-85 flight data will be used to evaluate the H2 controllers' performance on the actual hardware under working conditions. Next, full-order H-infinity controllers will be developed, as an intermediate step, in order to determine appropriate H-infinity performance weights for use in the mixed-norm design. Finally, the basic procedure outlined above will be used to develop fixed-order mixed-norm controllers for the MIM.

  8. Mixing medication into foodstuffs: identifying the issues for paediatric nurses.

    PubMed

    Akram, Gazala; Mullen, Alex B

    2015-04-01

    Medication is often mixed into soft foods to aid swallowing in children. However, this can alter the physical/chemical properties of the active drug. This study reports on the prevalence of the modification procedure, the nature of foodstuffs routinely used and factors which influence how the procedure is performed by nurses working in the National Health Service in Scotland. Mixed methods were employed encompassing an online self-administered questionnaire and semi-structured interviews. One hundred and eleven nurses participated, of whom 87% had modified medication prior to administration. Fruit juice (diluted and concentrated) and yoghurts were most commonly used. The interviews (i) identified the limitations of the procedure; (ii) explored the decision-making process; and (iii) confirmed the procedure was a last resort. This study intends to address some of the uncertainty surrounding the medicine modification procedure within the paediatric population. © 2013 Wiley Publishing Asia Pty Ltd.

  9. Determination of the in-place hot-mix asphalt layer modulus for rehabilitation projects using a mechanistic-empirical procedure.

    DOT National Transportation Integrated Search

    2006-01-01

    This project evaluated the procedures proposed by the Mechanistic-Empirical Pavement Design Guide (MEPDG) to characterize existing hot-mix asphalt (HMA) layers for rehabilitation purposes. Thirty-three cores were extracted from nine sites in Virginia...

  10. Development of test procedure for the design of black base : final report.

    DOT National Transportation Integrated Search

    1976-01-01

    There is no standard design procedure available for black base mixes containing aggregates larger than 1" (25.4 mm). This investigation dealt with the use of stability testing equipment similar to that used in the design of surface mixes and developme...

  11. Stochastic modeling of filtrate alkalinity in water filtration devices: Transport through micro/nano porous clay based ceramic materials

    USDA-ARS?s Scientific Manuscript database

    Clay and plant materials such as wood are the raw materials used in the manufacture of ceramic water filtration devices around the world. A step-by-step manufacturing procedure, which includes initial mixing, molding and sintering, is used. The manufactured ceramic filters have numerous pores which help i...

  12. Sample selection in foreign similarity regions for multicrop experiments

    NASA Technical Reports Server (NTRS)

    Malin, J. T. (Principal Investigator)

    1981-01-01

    The selection of sample segments in the U.S. foreign similarity regions for development of proportion estimation procedures and error modeling for Argentina, Australia, Brazil, and USSR in AgRISTARS is described. Each sample was chosen to be similar in crop mix to the corresponding indicator region sample. Data sets, methods of selection, and resulting samples are discussed.

  13. Validity of diagnoses, procedures, and laboratory data in Japanese administrative data.

    PubMed

    Yamana, Hayato; Moriwaki, Mutsuko; Horiguchi, Hiromasa; Kodan, Mariko; Fushimi, Kiyohide; Yasunaga, Hideo

    2017-10-01

    Validation of recorded data is a prerequisite for studies that utilize administrative databases. The present study evaluated the validity of diagnoses and procedure records in the Japanese Diagnosis Procedure Combination (DPC) data, along with laboratory test results in the newly-introduced Standardized Structured Medical Record Information Exchange (SS-MIX) data. Between November 2015 and February 2016, we conducted chart reviews of 315 patients hospitalized between April 2014 and March 2015 in four middle-sized acute-care hospitals in Shizuoka, Kochi, Fukuoka, and Saga Prefectures and used them as reference standards. The sensitivity and specificity of DPC data in identifying 16 diseases and 10 common procedures were identified. The accuracy of SS-MIX data for 13 laboratory test results was also examined. The specificity of diagnoses in the DPC data exceeded 96%, while the sensitivity was below 50% for seven diseases and variable across diseases. When limited to primary diagnoses, the sensitivity and specificity were 78.9% and 93.2%, respectively. The sensitivity of procedure records exceeded 90% for six procedures, and the specificity exceeded 90% for nine procedures. Agreement between the SS-MIX data and the chart reviews was above 95% for all 13 items. The validity of diagnoses and procedure records in the DPC data and laboratory results in the SS-MIX data was high in general, supporting their use in future studies. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
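
    The validation arithmetic itself is elementary; the hedged sketch below computes sensitivity and specificity of an administrative record against a chart-review reference standard using toy counts, not the study's data.

      def sens_spec(recorded, reference):
          tp = sum(r and c for r, c in zip(recorded, reference))
          fn = sum((not r) and c for r, c in zip(recorded, reference))
          tn = sum((not r) and (not c) for r, c in zip(recorded, reference))
          fp = sum(r and (not c) for r, c in zip(recorded, reference))
          return tp / (tp + fn), tn / (tn + fp)

      # toy example: presence of one diagnosis in claims vs. in the chart review
      claims = [True] * 40 + [False] * 35 + [False] * 235 + [True] * 5
      charts = [True] * 40 + [True] * 35 + [False] * 235 + [False] * 5
      se, sp = sens_spec(claims, charts)
      print(f"sensitivity = {se:.1%}, specificity = {sp:.1%}")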

  14. Tolerability of an equimolar mix of nitrous oxide and oxygen during painful procedures in very elderly patients.

    PubMed

    Bauer, Carole; Lahjibi-Paulet, Hayat; Somme, Dominique; Onody, Peter; Saint Jean, Olivier; Gisselbrecht, Mathilde

    2007-01-01

    Although an equimolar mix of nitrous oxide-oxygen (N₂O/O₂) [Kalinox] is widely used as an analgesic, there have been few specific studies of this product in the elderly. In this article, we investigate the tolerability of this equimolar mix in very elderly patients undergoing painful procedures. This was a prospective, observational study of patients hospitalised in the geriatric short-stay unit of a teaching hospital between July 2001 and September 2003. All patients aged ≥80 years who were scheduled for invasive care procedures were eligible for inclusion. Sixty-two patients were recruited and underwent a total of 68 procedures. The procedures were divided into four classes based on the degree of pain they were expected to cause and their duration. Patients received the equimolar N₂O/O₂ mix (Kalinox) for 5 minutes before the beginning of the procedure and throughout its duration. The inhaled treatment was administered via a high-concentration mask. Assessments were carried out during the inhalation and over the 15 minute period following inhalation. The primary endpoint of the study was tolerability of the equimolar N₂O/O₂ mix, and all adverse events were recorded. Secondary endpoints were the efficacy of the product (assessed on a verbal rating scale and/or the Doloplus scale), its ease of use and its acceptability to patients and staff. Fourteen patients (22.6%) each reported at least one adverse event: impaired hearing (n = 1), altered perception of the environment (n = 8), anxiety (n = 1), headache (n = 3) and drowsiness at the end of the procedure (n = 2). All these disorders subsided rapidly after treatment was completed. This study shows the favourable tolerability of the equimolar N₂O/O₂ mix in very elderly subjects, which makes this product a valuable tool for the management of acute pain in this age group.

  15. 3D mapping, hydrodynamics and modelling of the freshwater-brine mixing zone in salt flats similar to the Salar de Atacama (Chile)

    NASA Astrophysics Data System (ADS)

    Marazuela, M. A.; Vázquez-Suñé, E.; Custodio, E.; Palma, T.; García-Gil, A.; Ayora, C.

    2018-06-01

    Salt flat brines are a major source of minerals and especially lithium. Moreover, valuable wetlands with delicate ecologies are also commonly present at the margins of salt flats. Therefore, the efficient and sustainable exploitation of the brines they contain requires detailed knowledge about the hydrogeology of the system. A critical issue is the freshwater-brine mixing zone, which develops as a result of the mass balance between the recharged freshwater and the evaporating brine. The complex processes occurring in salt flats require a three-dimensional (3D) approach to assess the mixing zone geometry. In this study, a 3D map of the mixing zone in a salt flat is presented, using the Salar de Atacama as an example. This mapping procedure is proposed as the basis of computationally efficient three-dimensional numerical models, provided that the hydraulic heads of freshwater and mixed waters are corrected based on their density variations to convert them into brine heads. After this correction, the locations of lagoons and wetlands that are characteristic of the marginal zones of the salt flats coincide with the regional minimum water (brine) heads. The different morphologies of the mixing zone resulting from this 3D mapping have been interpreted using a two-dimensional (2D) flow and transport numerical model of an idealized cross-section of the mixing zone. The result of the model shows a slope of the mixing zone that is similar to that obtained by 3D mapping and lower than in previous models. To explain this geometry, the 2D model was used to evaluate the effects of heterogeneity in the mixing zone geometry. The higher the permeability of the upper aquifer is, the lower the slope and the shallower the mixing zone become. This occurs because most of the freshwater lateral recharge flows through the upper aquifer due to its much higher transmissivity, thus reducing the freshwater head. The presence of a few meters of highly permeable materials in the upper part of these hydrogeological systems, such as alluvial fans or karstified evaporites that are frequently associated with the salt flats, is enough to greatly modify the geometry of the saline interface.
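
    The head-correction step mentioned above can be illustrated with the standard point-water-to-reference-density conversion used in variable-density groundwater analysis; the sketch below assumes that formula and uses illustrative densities and elevations, and is not taken from the paper.

      def to_reference_head(h_i, z_i, rho_i, rho_ref):
          """Convert a head measured in water of density rho_i into an equivalent
          head expressed for the reference density rho_ref (e.g. brine)."""
          return (rho_i / rho_ref) * h_i - (rho_i - rho_ref) / rho_ref * z_i

      rho_fresh, rho_brine = 1000.0, 1230.0   # kg/m3; brine density is an assumed value
      # freshwater observation: head 2305.0 m at a screen elevation of 2290.0 m
      h_equiv = to_reference_head(2305.0, 2290.0, rho_fresh, rho_brine)
      print(f"brine-equivalent head: {h_equiv:.1f} m")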

  16. Correlation of hospital magnet status with the quality of physicians performing neurosurgical procedures in New York State.

    PubMed

    Bekelis, Kimon; Missios, Symeon; MacKenzie, Todd A

    2018-01-24

    The quality of physicians practicing in hospitals recognized for nursing excellence by the American Nurses Credentialing Center has not been studied before. We investigated whether Magnet hospital recognition is associated with higher quality of physicians performing neurosurgical procedures. We performed a cohort study of patients undergoing neurosurgical procedures from 2009-2013, who were registered in the New York Statewide Planning and Research Cooperative System (SPARCS) database. Propensity score adjusted multivariable regression models were used to adjust for known confounders, with mixed effects methods to control for clustering at the facility level. An instrumental variable analysis was used to control for unmeasured confounding and simulate the effect of a randomized trial. During the study period, 185,277 patients underwent neurosurgical procedures, and met the inclusion criteria. Of these, 66,607 (35.6%) were hospitalized in Magnet hospitals, and 118,670 (64.4%) in non-Magnet institutions. Instrumental variable analysis demonstrated that undergoing neurosurgical operations in Magnet hospitals was associated with a 13.6% higher chance of being treated by a physician with superior performance in terms of mortality (95% CI, 13.2% to 14.1%), and a 4.3% higher chance of being treated by a physician with superior performance in terms of length-of-stay (LOS) (95% CI, 3.8% to 4.7%) in comparison to non-Magnet institutions. The same associations were present in propensity score adjusted mixed effects models. Using a comprehensive all-payer cohort of neurosurgical patients in New York State we identified an association of Magnet hospital recognition with superior physician performance.

  17. Stochastic models to study the impact of mixing on a fed-batch culture of Saccharomyces cerevisiae.

    PubMed

    Delvigne, F; Lejeune, A; Destain, J; Thonart, P

    2006-01-01

    The mechanisms of interaction between microorganisms and their environment in a stirred bioreactor can be modeled by a stochastic approach. The procedure comprises two submodels: a classical stochastic model for the microbial cell circulation and a Markov chain model for the concentration gradient calculation. The advantage lies in the fact that the core of each submodel, i.e., the transition matrix (which contains the probabilities of shifting from one perfectly mixed compartment to another in the bioreactor representation), is identical for the two cases. That means that both the particle circulation and fluid mixing process can be analyzed by use of the same modeling basis. This assumption has been validated by performing inert tracer (NaCl) and stained yeast cell dispersion experiments that have shown good agreement with simulation results. The stochastic model has been used to define a characteristic concentration profile experienced by the microorganisms during a fermentation test performed in a scale-down reactor. The concentration profiles obtained in this way can explain the scale-down effect in the case of a Saccharomyces cerevisiae fed-batch process. The simulation results are analyzed in order to give some explanations about the effect of the substrate fluctuation dynamics on S. cerevisiae.
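
    A conceptual Python sketch of the shared modelling basis is given below: one invented 3-compartment transition matrix drives both the fluid-mixing calculation (propagating a tracer distribution) and the cell-circulation model (a random walk of a single cell).

      import numpy as np

      # invented 3-compartment transition matrix (row i -> probability of moving to j)
      P = np.array([[0.80, 0.15, 0.05],
                    [0.10, 0.80, 0.10],
                    [0.05, 0.15, 0.80]])

      # fluid mixing: tracer initially in compartment 0, propagated over time steps
      c = np.array([1.0, 0.0, 0.0])
      for _ in range(50):
          c = c @ P
      print("tracer distribution after 50 steps:", np.round(c, 3))

      # cell circulation: one cell's random walk through the same compartments
      rng = np.random.default_rng(2)
      state, visits = 0, []
      for _ in range(50):
          state = rng.choice(3, p=P[state])
          visits.append(int(state))
      print("first compartments visited by the cell:", visits[:10])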

  18. Decision Support for the Capacity Management of Bronchoscopy Devices: Optimizing the Cost-Efficient Mix of Reusable and Single-Use Devices Through Mathematical Modeling.

    PubMed

    Edenharter, Günther M; Gartner, Daniel; Pförringer, Dominik

    2017-06-01

    Increasing costs of material resources challenge hospitals to stay profitable. Particularly in anesthesia departments and intensive care units, bronchoscopes are used for various indications. Inefficient management of single- and multiple-use systems can influence the hospitals' material costs substantially. Using mathematical modeling, we developed a strategic decision support tool to determine the optimum mix of disposable and reusable bronchoscopy devices in the setting of an intensive care unit. A mathematical model with the objective of minimizing costs in relation to demand constraints for bronchoscopy devices was formulated. The stochastic model decides whether single-use, multi-use, or a strategically chosen mix of both device types should be used. A decision support tool was developed in which parameters for uncertain demand such as mean, standard deviation, and a reliability parameter can be inserted. Furthermore, reprocessing costs per procedure, procurement, and maintenance costs for devices can be parameterized. Our experiments show for which demand patterns and reliability measures it is efficient to use only reusable or only disposable devices and under which circumstances the combination of both device types is beneficial. To determine the optimum mix of single-use and reusable bronchoscopy devices effectively and efficiently, managers can enter their hospital-specific parameters such as demand and prices into the decision support tool. The software can be downloaded at: https://github.com/drdanielgartner/bronchomix/.
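
    A stylized version of the trade-off such a tool optimizes can be sketched as a simple simulation; the cost figures, demand parameters, and decision rule below are invented placeholders and do not reproduce the paper's stochastic model.

      import random

      mean_daily_demand, sd_demand = 3.0, 1.5        # procedures per day (assumed)
      reprocess_cost, single_use_cost = 60.0, 250.0  # cost per procedure (assumed)
      capital_per_reusable = 4000.0                  # annual purchase + maintenance (assumed)
      days_per_year, n_sim = 365, 20000

      def expected_annual_cost(n_reusable, seed=3):
          random.seed(seed)
          daily_costs = 0.0
          for _ in range(n_sim):
              demand = max(0, round(random.gauss(mean_daily_demand, sd_demand)))
              covered = min(demand, n_reusable)      # procedures done with reusables
              daily_costs += (covered * reprocess_cost
                              + (demand - covered) * single_use_cost)
          return n_reusable * capital_per_reusable + days_per_year * daily_costs / n_sim

      best = min(range(9), key=expected_annual_cost)
      print("cost-minimizing number of reusable devices:", best)
      print("expected annual cost:", round(expected_annual_cost(best)))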

  19. Bayesian Covariate Selection in Mixed-Effects Models For Longitudinal Shape Analysis

    PubMed Central

    Muralidharan, Prasanna; Fishbaugh, James; Kim, Eun Young; Johnson, Hans J.; Paulsen, Jane S.; Gerig, Guido; Fletcher, P. Thomas

    2016-01-01

    The goal of longitudinal shape analysis is to understand how anatomical shape changes over time, in response to biological processes, including growth, aging, or disease. In many imaging studies, it is also critical to understand how these shape changes are affected by other factors, such as sex, disease diagnosis, IQ, etc. Current approaches to longitudinal shape analysis have focused on modeling age-related shape changes, but have not included the ability to handle covariates. In this paper, we present a novel Bayesian mixed-effects shape model that incorporates simultaneous relationships between longitudinal shape data and multiple predictors or covariates into the model. Moreover, we place an Automatic Relevance Determination (ARD) prior on the parameters, which lets us automatically select which covariates are most relevant to the model based on observed data. We evaluate our proposed model and inference procedure on a longitudinal study of Huntington's disease from PREDICT-HD. We first show the utility of the ARD prior for model selection in a univariate modeling of striatal volume, and next we apply the full high-dimensional longitudinal shape model to putamen shapes. PMID:28090246

  20. Mixed reality for robotic treatment of a splenic artery aneurysm.

    PubMed

    Pietrabissa, Andrea; Morelli, Luca; Ferrari, Mauro; Peri, Andrea; Ferrari, Vincenzo; Moglia, Andrea; Pugliese, Luigi; Guarracino, Fabio; Mosca, Franco

    2010-05-01

    Techniques of mixed reality can successfully be used in preoperative planning of laparoscopic and robotic procedures and to guide surgical dissection and enhance its accuracy. A computer-generated three-dimensional (3D) model of the vascular anatomy of the spleen was obtained from the computed tomography (CT) dataset of a patient with a 3-cm splenic artery aneurysm. Using an environmental infrared localizer and a stereoscopic helmet, the surgeon can see the patient's anatomy in transparency (augmented or mixed reality). This arrangement simplifies correct positioning of trocars and locates surgical dissection directly on top of the aneurysm. In this way the surgeon limits unnecessary dissection, leaving intact the blood supply from the short gastric vessels and other collaterals. Based on preoperative planning, we were able to anticipate that the vascular exclusion of the aneurysm would result in partial splenic ischemia. To re-establish the flow to the spleen, end-to-end robotic anastomosis of the splenic artery with the Da Vinci surgical system was then performed. Finally, the aneurysm was fenestrated to exclude arterial refilling. The postoperative course was uneventful. A control CT scan 4 weeks after surgery showed a well-perfused and homogeneous splenic parenchyma. The final 3D model showed the fenestrated calcified aneurysm and patency of the re-anastomosed splenic artery. The described technique of robotic vascular exclusion of a splenic artery aneurysm, followed by re-anastomosis of the vessel, clearly demonstrates how this technology can reduce the invasiveness of the procedure, obviating an otherwise necessary splenectomy. Also, the use of intraoperative mixed-reality technology proved very useful in this case and is expected to play an increasing role in the operating room of the future.

  1. A Unified Development of Basis Reduction Methods for Rotor Blade Analysis

    NASA Technical Reports Server (NTRS)

    Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753

  3. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
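
    The idea of comparing discrimination between development and validation data, and of referencing an observed difference against reshuffled set membership, can be illustrated generically; the sketch below is not the specific permutation test evaluated in these papers, and all data are simulated.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(4)

      def simulate(n, beta, x_sd):
          X = rng.normal(0.0, x_sd, size=(n, 2))
          y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ beta))))
          return X, y

      X_dev, y_dev = simulate(2000, np.array([1.0, 1.0]), 1.0)   # development data
      X_val, y_val = simulate(2000, np.array([0.6, 0.6]), 1.5)   # different case-mix
      model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)

      def cstat(X, y):
          return roc_auc_score(y, model.predict_proba(X)[:, 1])

      obs_diff = cstat(X_dev, y_dev) - cstat(X_val, y_val)
      X_all, y_all = np.vstack([X_dev, X_val]), np.concatenate([y_dev, y_val])
      null = []
      for _ in range(200):                       # reshuffle set membership
          idx = rng.permutation(len(y_all))
          half = len(y_all) // 2
          null.append(cstat(X_all[idx[:half]], y_all[idx[:half]])
                      - cstat(X_all[idx[half:]], y_all[idx[half:]]))
      p = np.mean(np.abs(null) >= abs(obs_diff))
      print(f"c-statistic difference: {obs_diff:.3f}, permutation p = {p:.2f}")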

  4. Compositional characteristics of some Apollo 14 clastic materials.

    NASA Technical Reports Server (NTRS)

    Lindstrom, M. M.; Duncan, A. R.; Fruchter, J. S.; Mckay, S. M.; Stoeser, J. W.; Goles, G. G.; Lindstrom, D. J.

    1972-01-01

    Eighty-two subsamples of Apollo 14 materials have been analyzed by instrumental neutron activation analysis techniques for as many as 25 elements. In many cases, it was necessary to develop new procedures to allow analyses of small specimens. Compositional relationships among Apollo 14 materials indicate that there are small but systematic differences between regolith from the valley terrain and that from Cone Crater ejecta. Fragments from 1-2 mm size fractions of regolith samples may be divided into compositional classes, and the 'soil breccias' among them are very similar to valley soils. Multicomponent linear mixing models have been used as interpretive tools in dealing with data on regolith fractions and subsamples from breccia 14321. These mixing models show systematic compositional variations with inferred age for Apollo 14 clastic materials.
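
    Multicomponent linear mixing models of this kind reduce, in their simplest form, to estimating non-negative end-member proportions by least squares; the sketch below uses invented end-member compositions to illustrate the idea.

      import numpy as np
      from scipy.optimize import nnls

      # rows = element concentrations, columns = end members (invented values)
      endmembers = np.array([[12.0,  4.0,  9.0],
                             [ 3.0, 15.0,  6.0],
                             [ 0.5,  2.0,  8.0],
                             [ 7.0,  1.0,  3.0]])
      true_p = np.array([0.6, 0.3, 0.1])
      sample = endmembers @ true_p + np.random.default_rng(5).normal(0.0, 0.1, 4)

      p, residual = nnls(endmembers, sample)      # non-negative least squares
      p = p / p.sum()                             # renormalize to proportions
      print("estimated mixing proportions:", np.round(p, 2))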

  5. A solution procedure for mixed-integer nonlinear programming formulation of supply chain planning with quantity discounts under demand uncertainty

    NASA Astrophysics Data System (ADS)

    Yin, Sisi; Nishi, Tatsushi

    2014-11-01

    Quantity discount policy involves decision-making on trade-off prices between suppliers and manufacturers when production levels change due to demand fluctuations in a real market. In this paper, quantity discount models that simultaneously consider the selection of contract suppliers, production quantity, and inventory are addressed. The supply chain planning problem with quantity discounts under demand uncertainty is formulated as a mixed-integer nonlinear programming problem (MINLP) with integral terms. We apply an outer-approximation method to solve MINLP problems. In order to improve the efficiency of the proposed method, the problem is reformulated as a stochastic model replacing the integral terms by using a normalisation technique. We present numerical examples to demonstrate the efficiency of the proposed method.

  6. Estimation and confidence intervals for empirical mixing distributions

    USGS Publications Warehouse

    Link, W.A.; Sauer, J.R.

    1995-01-01

    Questions regarding collections of parameter estimates can frequently be expressed in terms of an empirical mixing distribution (EMD). This report discusses empirical Bayes estimation of an EMD, with emphasis on the construction of interval estimates. Estimation of the EMD is accomplished by substitution of estimates of prior parameters in the posterior mean of the EMD. This procedure is examined in a parametric model (the normal-normal mixture) and in a semi-parametric model. In both cases, the empirical Bayes bootstrap of Laird and Louis (1987, Journal of the American Statistical Association 82, 739-757) is used to assess the variability of the estimated EMD arising from the estimation of prior parameters. The proposed methods are applied to a meta-analysis of population trend estimates for groups of birds.
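
    For the parametric normal-normal case mentioned above, the plug-in empirical Bayes step can be sketched in a few lines; the moment estimators and simulated trend estimates below are simplifying assumptions, and the paper's bootstrap assessment of the variability from estimating the prior is not reproduced.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 40
      theta = rng.normal(0.5, 1.0, n)                 # true trends (unobserved)
      s2 = rng.uniform(0.2, 1.0, n)                   # known sampling variances
      y = rng.normal(theta, np.sqrt(s2))              # observed trend estimates

      mu_hat = y.mean()                               # estimated prior mean
      tau2_hat = max(0.0, y.var(ddof=1) - s2.mean())  # method-of-moments prior variance

      shrink = tau2_hat / (tau2_hat + s2)             # shrinkage weights
      posterior_mean = shrink * y + (1.0 - shrink) * mu_hat
      print("first five shrunken estimates:", np.round(posterior_mean[:5], 2))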

  7. A comparative study of two codes with an improved two-equation turbulence model for predicting jet plumes

    NASA Technical Reports Server (NTRS)

    Balakrishnan, L.; Abdol-Hamid, Khaled S.

    1992-01-01

    Compressible jet plumes were studied using a two-equation turbulence model. A space marching procedure based on an upwind numerical scheme was used to solve the governing equations and turbulence transport equations. The computed results indicate that extending the space marching procedure for solving supersonic/subsonic mixing problems can be stable, efficient and accurate. Moreover, a newly developed correction for compressible dissipation has been verified in fully expanded and underexpanded jet plumes. For a sonic jet plume, no improvement in results over the standard two-equation model was seen. However for a supersonic jet plume, the correction due to compressible dissipation successfully predicted the reduced spreading rate of the jet compared to the sonic case. The computed results were generally in good agreement with the experimental data.

  8. Designing a mixed methods study in primary care.

    PubMed

    Creswell, John W; Fetters, Michael D; Ivankova, Nataliya V

    2004-01-01

    Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research.

  9. Functional Nonlinear Mixed Effects Models For Longitudinal Image Data

    PubMed Central

    Luo, Xinchao; Zhu, Lixing; Kong, Linglong; Zhu, Hongtu

    2015-01-01

    Motivated by studying large-scale longitudinal image data, we propose a novel functional nonlinear mixed effects modeling (FN-MEM) framework to model the nonlinear spatial-temporal growth patterns of brain structure and function and their association with covariates of interest (e.g., time or diagnostic status). Our FNMEM explicitly quantifies a random nonlinear association map of individual trajectories. We develop an efficient estimation method to estimate the nonlinear growth function and the covariance operator of the spatial-temporal process. We propose a global test and a simultaneous confidence band for some specific growth patterns. We conduct Monte Carlo simulation to examine the finite-sample performance of the proposed procedures. We apply FNMEM to investigate the spatial-temporal dynamics of white-matter fiber skeletons in a national database for autism research. Our FNMEM may provide a valuable tool for charting the developmental trajectories of various neuropsychiatric and neurodegenerative disorders. PMID:26213453

  10. Comparison of two non-convex mixed-integer nonlinear programming algorithms applied to autoregressive moving average model structure and parameter estimation

    NASA Astrophysics Data System (ADS)

    Uilhoorn, F. E.

    2016-10-01

    In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
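
    The brute-force enumeration used as a benchmark can be sketched directly: fit every candidate (p, q) order and keep the one minimizing AIC. The sketch below uses statsmodels, whose ARMA likelihood is also evaluated with Kalman filter recursions; the simulated series and order grid are illustrative assumptions.

      import itertools
      import warnings

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA
      from statsmodels.tsa.arima_process import arma_generate_sample

      warnings.filterwarnings("ignore")               # silence convergence chatter
      rng = np.random.default_rng(7)
      y = arma_generate_sample(ar=[1, -0.6], ma=[1, 0.4], nsample=500,
                               distrvs=rng.standard_normal)

      best = None
      for p, q in itertools.product(range(4), range(4)):
          try:
              fit = ARIMA(y, order=(p, 0, q)).fit()
          except Exception:
              continue                                # skip orders that fail to fit
          if best is None or fit.aic < best[0]:
              best = (fit.aic, p, q)
      print(f"best (p, q) by AIC: ({best[1]}, {best[2]}), AIC = {best[0]:.1f}")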

  11. New generation HMA mix designs : accelerated pavement testing of a type C mix with the ALF machine.

    DOT National Transportation Integrated Search

    2012-09-01

    Recent changes to the Texas hot-mix asphalt (HMA) mix-design procedures, such as the adaption of the higher-stiffer performance-grade asphalt-binder grades and the Hamburg test, have ensured that the mixes that are routinely used on Texas highways ar...

  12. Economic appraisal of the angioplasty procedures performed in 2004 in a high-volume diagnostic and interventional cardiology unit.

    PubMed

    Manari, Antonio; Costa, Elena; Scivales, Alessandro; Ponzi, Patrizia; Di Stasi, Francesca; Guiducci, Vincenzo; Pignatelli, Gianluca; Giacometti, Paola

    2007-10-01

    Growing interest in the use of drug-eluting stents (DESs) in coronary angioplasty has prompted the Healthcare Agency of the Emilia Romagna Region to draw up recommendations for their appropriate clinical use in high-risk patients. Since the adoption of any new technology necessitates economic appraisal, we analysed the resource consumption of the various types of angioplasty procedures and the impact on the budget of a cardiology department. A retrospective economic appraisal was carried out on the coronary angioplasty procedures performed in 2004 in the Department of Interventional Cardiology of Reggio Emilia. On the basis of the principles of activity-based costing, detailed hospital costs were estimated for each procedure and compared with the relevant diagnosis-related group (DRG) reimbursement. In 2004, the Reggio Emilia hospital performed 806 angioplasty procedures for a total expenditure of euro 5,176,268. These were 93 plain old balloon angioplasty procedures (euro 487,329), 401 procedures with bare-metal stents (euro 2,380,071), 249 procedures with DESs (euro 1,827,386) and 63 mixed procedures (euro 481,480). Reimbursements amounted to euro 5,816,748 (11% from plain old balloon angioplasty, 50% from bare-metal stent, 31% from DES and 8% from mixed procedures) with a positive margin of about euro 680,480 between costs incurred and reimbursements obtained, even though the reimbursement for DES and mixed procedures did not cover all the incurred costs. Analysis of the case-mix of procedures revealed that an overall positive margin between costs and DRG reimbursements was achieved. It therefore emerges that adherence to the indications of the Healthcare Agency of the Emilia Romagna Region for the appropriate clinical use of DESs is economically sustainable from the hospital enterprise point of view, although the DRG reimbursements are not able to differentiate among resource consumptions owing to the adoption of innovative technologies.

  13. Mix design procedure for crumb rubber modified hot mix asphalt.

    DOT National Transportation Integrated Search

    2005-06-01

    To improve the performance of hot-mix asphalt concrete at high temperatures, crumb-rubber is typically used. Although hot-mix asphalt concrete consisting of crumb-rubber has been successfully placed and has performed well over the years, the laborat...

  14. Mixed models approaches for joint modeling of different types of responses.

    PubMed

    Ivanova, Anna; Molenberghs, Geert; Verbeke, Geert

    2016-01-01

    In many biomedical studies, one jointly collects longitudinal continuous, binary, and survival outcomes, possibly with some observations missing. Random-effects models, sometimes called shared-parameter models or frailty models, have received considerable attention. In such models, the corresponding variance components can be employed to capture the association between the various sequences. In some cases, random effects are considered common to various sequences, perhaps up to a scaling factor; in others, there are different but correlated random effects. Even though a variety of data types has been considered in the literature, less attention has been devoted to ordinal data. For univariate longitudinal or hierarchical data, the proportional odds mixed model (POMM) is an instance of the generalized linear mixed model (GLMM; Breslow and Clayton, 1993). Ordinal data are conveniently replaced by a parsimonious set of dummies, which in the longitudinal setting leads to a repeated set of dummies. When ordinal longitudinal data are part of a joint model, the complexity increases further. This is the setting considered in this paper. We formulate a random-effects based model that, in addition, allows for overdispersion. Using two case studies, it is shown that the combination of random effects to capture association with further correction for overdispersion can improve the model's fit considerably and that the resulting models allow researchers to answer research questions that could not be addressed otherwise. Parameters can be estimated in a fairly straightforward way, using the SAS procedure NLMIXED.
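    The dummy expansion mentioned above can be shown with a minimal sketch (hypothetical data and column names, not the paper's case studies): an ordinal score with K levels is expanded into K-1 cumulative binary indicators, the form in which a binary-response mixed-model routine such as SAS PROC NLMIXED can be used to fit a proportional-odds-type model.

    ```python
    import pandas as pd

    df = pd.DataFrame({"subject": [1, 1, 2, 2],
                       "visit":   [1, 2, 1, 2],
                       "score":   [1, 2, 3, 2]})          # ordinal outcome with levels 1..3

    K = int(df["score"].max())
    rows = []
    for k in range(1, K):                                 # one record per threshold k = 1..K-1
        tmp = df.copy()
        tmp["threshold"] = k
        tmp["y"] = (tmp["score"] <= k).astype(int)        # cumulative indicator for P(Y <= k)
        rows.append(tmp)
    expanded = pd.concat(rows, ignore_index=True)
    print(expanded)
    ```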

  15. Experimental study of stratified jet by simultaneous measurements of velocity and density fields

    NASA Astrophysics Data System (ADS)

    Xu, Duo; Chen, Jun

    2012-07-01

    Stratified flows with small density difference commonly exist in geophysical and engineering applications, which often involve the interaction of turbulence and buoyancy effects. A combined particle image velocimetry (PIV) and planar laser-induced fluorescence (PLIF) system is developed to measure the velocity and density fields in a dense jet discharged horizontally into a tank filled with light fluid. The illumination of PIV particles and excitation of PLIF dye are achieved by a dual-head pulsed Nd:YAG laser and two CCD cameras with a set of optical filters. The procedure for matching refractive indexes of two fluids and calibration of the combined system are presented, as well as a quantitative analysis of the measurement uncertainties. The flow structures and mixing dynamics within the central vertical plane are studied by examining the averaged parameters, turbulent kinetic energy budget, and modeling of momentum flux and buoyancy flux. Downstream, profiles of velocity and density display strong asymmetry with respect to the jet center. This is attributed to the fact that stable stratification reduces mixing and unstable stratification enhances mixing. In the stably stratified region, most of the turbulence production is consumed by mean-flow convection, whereas in the unstably stratified region, turbulence production is nearly balanced by viscous dissipation. Experimental data also indicate that at downstream locations the mixing-length model performs better in the mixing zone of stably stratified regions, whereas in other regions eddy viscosity/diffusivity models with static model coefficients effectively represent the momentum and buoyancy flux terms. The measured turbulent Prandtl number displays strong spatial variation in the stratified jet.

  16. An Investigation of the Accuracy of Alternative Methods of True Score Estimation in High-Stakes Mixed-Format Examinations.

    ERIC Educational Resources Information Center

    Klinger, Don A.; Rogers, W. Todd

    2003-01-01

    The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…

  17. Service Learning and Its Influenced to Pre-Service Teachers: Social Responsibility and Self-Efficacy Study

    ERIC Educational Resources Information Center

    Prasertsang, Parichart; Nuangchalerm, Prasart; Pumipuntu, Chaloey

    2013-01-01

    The purpose of the research was to study pre-service teachers on social responsibility and self-efficacy through service learning. The mixed methodology included two major procedures (i) the actual use of a developed service learning instructional model by means of action research principles and qualitative research and (ii) the study into the…

  18. Solutions for Determining the Significance Region Using the Johnson-Neyman Type Procedure in Generalized Linear (Mixed) Models

    ERIC Educational Resources Information Center

    Lazar, Ann A.; Zerbe, Gary O.

    2011-01-01

    Researchers often compare the relationship between an outcome and covariate for two or more groups by evaluating whether the fitted regression curves differ significantly. When they do, researchers need to determine the "significance region," or the values of the covariate where the curves significantly differ. In analysis of covariance (ANCOVA),…

  19. Molecular Modeling and Physicochemical Properties of Supramolecular Complexes of Limonene with α- and β-Cyclodextrins.

    PubMed

    Dos Passos Menezes, Paula; Dos Santos, Polliana Barbosa Pereira; Dória, Grace Anne Azevedo; de Sousa, Bruna Maria Hipólito; Serafini, Mairim Russo; Nunes, Paula Santos; Quintans-Júnior, Lucindo José; de Matos, Iara Lisboa; Alves, Péricles Barreto; Bezerra, Daniel Pereira; Mendonça Júnior, Francisco Jaime Bezerra; da Silva, Gabriel Francisco; de Aquino, Thiago Mendonça; de Souza Bento, Edson; Scotti, Marcus Tullius; Scotti, Luciana; de Souza Araujo, Adriano Antunes

    2017-02-01

    This study evaluated three different methods for the formation of an inclusion complex between alpha- and beta-cyclodextrin (α- and β-CD) and limonene (LIM) with the goal of improving the physicochemical properties of limonene. The study samples were prepared through physical mixing (PM), paste complexation (PC), and slurry complexation (SC) methods in the molar ratio of 1:1 (cyclodextrin:limonene). The complexes prepared were evaluated with thermogravimetry/derivative thermogravimetry, infrared spectroscopy, X-ray diffraction, complexation efficiency through gas chromatography/mass spectrometry analyses, molecular modeling, and nuclear magnetic resonance. The results showed that the physical mixing procedure did not produce complexation, but the paste and slurry methods produced inclusion complexes, which demonstrated interactions outside of the cavity of the CDs. However, the paste obtained with β-cyclodextrin did not demonstrate complexation in the gas chromatographic technique because, after extraction, most of the limonene was either surface-adsorbed by β-cyclodextrin or volatilized during the procedure. We conclude that paste complexation and slurry complexation are effective and economic methods to improve the physicochemical character of limonene and could have important applications in pharmacological activities in terms of an increase in solubility.

  20. Application of Benchmark Examples to Assess the Single and Mixed-Mode Static Delamination Propagation Capabilities in ANSYS

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for three-dimensional solid models is required.

  1. Analysis of aircraft tires via semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.
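    A hedged illustration of the semianalytic representation in element (1), in our own notation rather than the paper's: each shell variable is expanded in Fourier harmonics circumferentially and in piecewise polynomial shape functions meridionally, which is what allows the harmonics to be uncoupled by the operator splitting in element (3).

    ```latex
    % Our notation, not the paper's: shell variable u(s, theta), Fourier harmonics
    % in the circumferential coordinate theta, piecewise polynomial shape
    % functions N_j(s) in the meridional coordinate s.
    \[
      u(s,\theta) \;\approx\; \sum_{n=0}^{N} \sum_{j=1}^{M}
      N_j(s)\,\bigl[\, a_{nj}\cos n\theta + b_{nj}\sin n\theta \,\bigr],
    \]
    % so, in the linear range, each harmonic n yields an independent meridional
    % problem, which the multilevel operator splitting exploits.
    ```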

  2. 40 CFR Appendix 3 to Subpart A of... - Procedure for Mixing Base Fluids With Sediments

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Procedure for Mixing Base Fluids With Sediments 3 Appendix 3 to Subpart A of Part 435 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Offshore...

  3. 40 CFR Appendix 3 to Subpart A of... - Procedure for Mixing Base Fluids With Sediments

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 30 2011-07-01 2011-07-01 false Procedure for Mixing Base Fluids With Sediments 3 Appendix 3 to Subpart A of Part 435 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS OIL AND GAS EXTRACTION POINT SOURCE CATEGORY Offshore...

  4. Bi-Factor MIRT Observed-Score Equating for Mixed-Format Tests

    ERIC Educational Resources Information Center

    Lee, Guemin; Lee, Won-Chan

    2016-01-01

    The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…

  5. Installation report : rubber modified asphalt mix.

    DOT National Transportation Integrated Search

    1983-01-01

    This report describes the design of an asphalt mix containing up to 3.0% closed cell waste rubber and a field installation of the mix. The Marshall design procedure was used to determine the asphalt content for the mix containing 3.0% rubber as well ...

  6. Factors which influence the behavior of turbofan forced mixer nozzles

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.; Povinelli, L. A.

    1981-01-01

    A finite difference procedure was used to compute the mixing for three experimentally tested mixer geometries. Good agreement was obtained between analysis and experiment when the mechanisms responsible for secondary flow generation were properly modeled. Vorticity generation due to flow turning and vorticity generated within the centerbody lobe passage were found to be important. Results are presented for two different temperature ratios between fan and core streams and for two different free stream turbulence levels. It was concluded that the dominant mechanism in turbofan mixers is associated with the secondary flows arising within the lobe region and their development within the mixing section.

  7. Predictors of Variation in Neurosurgical Supply Costs and Outcomes Across 4904 Surgeries at a Single Institution.

    PubMed

    Zygourakis, Corinna C; Valencia, Victoria; Boscardin, Christy; Nayak, Rahul U; Moriates, Christopher; Gonzales, Ralph; Theodosopoulos, Philip; Lawton, Michael T

    2016-12-01

    There is high variability in neurosurgical costs, and surgical supplies constitute a significant portion of cost. Anecdotally, surgeons use different supplies for various reasons, but there is little understanding of how supply choices affect outcomes. Our goal is to evaluate the effect of patient, procedural, and provider factors on supply cost and to determine if supply cost is associated with patient outcomes. We obtained patient information (age, gender, payor, case mix index [CMI], body mass index, admission source), procedural data (procedure type, length, date), provider information (name, case volume), and total surgical supply cost for all inpatient neurosurgical procedures from 2013 to 2014 at our institution (n = 4904). We created mixed-effect models to examine the effect of each factor on surgical supply cost, 30-day readmission, and 30-day mortality. There was significant variation in surgical supply cost between and within procedure types. Older age, female gender, higher CMI, routine/elective admission, longer procedure, and larger surgeon volume were associated with higher surgical supply costs (P < 0.05). Routine/elective admission and higher surgeon volume were associated with lower readmission rates (odds ratio, 0.707, 0.998; P < 0.01). Only patient factors of older age, male gender, private insurance, higher CMI, and emergency admission were associated with higher mortality (odds ratio, 1.029, 1.700, 1.692, 1.080, 2.809). There was no association between surgical supply cost and readmission or mortality (P = 0.307, 0.548). A combination of patient, procedural, and provider factors underlie the significant variation in neurosurgical supply costs at our institution. Surgical supply costs are not correlated with 30-day readmission or mortality. Copyright © 2016 Elsevier Inc. All rights reserved.
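    As a minimal sketch of the kind of mixed-effect model described above (variable and file names are hypothetical, and the abstract does not state which software was used), a linear mixed model for log supply cost with fixed patient/procedural effects and a random intercept per surgeon could be fit with statsmodels:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("neurosurgery_cases.csv")            # hypothetical extract, one row per case
    df["log_cost"] = np.log(df["supply_cost"])            # costs are right-skewed, so model the log

    # fixed effects for patient/procedural factors, random intercept per surgeon
    model = smf.mixedlm("log_cost ~ age + C(gender) + cmi + proc_length + C(proc_type)",
                        data=df, groups=df["surgeon_id"])
    result = model.fit()
    print(result.summary())
    ```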

  8. Methamphetamine-alcohol interactions in murine models of sequential and simultaneous oral drug-taking.

    PubMed

    Fultz, Elissa K; Martin, Douglas L; Hudson, Courtney N; Kippin, Tod E; Szumlinski, Karen K

    2017-08-01

    A high degree of co-morbidity exists between methamphetamine (MA) addiction and alcohol use disorders, and both sequential and simultaneous MA-alcohol mixing increases risk for co-abuse. As little preclinical work has focused on the biobehavioral interactions between MA and alcohol within the context of drug-taking behavior, we employed simple murine models of voluntary oral drug consumption to examine how prior histories of either MA- or alcohol-taking influence the intake of the other drug. In one study, mice with a 10-day history of binge alcohol-drinking [5, 10, 20 and 40% (v/v); 2 h/day] were trained to self-administer oral MA in an operant-conditioning paradigm (10-40 mg/L). In a second study, mice with a 10-day history of limited-access oral MA-drinking (5, 10, 20 and 40 mg/L; 2 h/day) were presented with alcohol (5-40% v/v; 2 h/day) and then a choice between solutions of 20% alcohol, 10 mg/L MA, or their mix. Under operant-conditioning procedures, alcohol-drinking mice exhibited less MA reinforcement overall than water controls. However, when drug availability was not behaviorally-contingent, alcohol-drinking mice consumed more MA and exhibited greater preference for the 10 mg/L MA solution than drug-naïve and combination drug-experienced mice. Conversely, prior MA-drinking history increased alcohol intake across a range of alcohol concentrations. These exploratory studies indicate the feasibility of employing procedurally simple murine models of sequential and simultaneous oral MA-alcohol mixing of relevance to advancing our biobehavioral understanding of MA-alcohol co-abuse. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Mechanical Properties of Warm Mix Asphalt Prepared Using Foamed Asphalt Binders : Executive Summary Report

    DOT National Transportation Integrated Search

    2011-03-01

    Hot mix asphalt (HMA) is a mixture containing aggregates and asphalt binders prepared at specified : proportions. The aggregates and asphalt binder proportions are determined through a mix design : procedure such as the Marshall Mix Design or the Sup...

  10. Mechanical properties of warm mix asphalt prepared using foamed asphalt binders : executive summary report.

    DOT National Transportation Integrated Search

    2011-03-01

    Hot mix asphalt (HMA) is a mixture containing aggregates and asphalt binders prepared at specified : proportions. The aggregates and asphalt binder proportions are determined through a mix design : procedure such as the Marshall Mix Design or the Sup...

  11. Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2013-01-01

    The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities were demonstrated for ANSYS (TradeMark) and Abaqus/Standard (TradeMark). The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS (TradeMark) and Abaqus/Standard (TradeMark). Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.

  12. Academic status does not affect outcome following complex hepato-pancreato-biliary procedures.

    PubMed

    Altieri, Maria S; Yang, Jie; Groves, Donald; Yin, Donglei; Cagino, Kristen; Talamini, Mark; Pryor, Aurora

    2018-05-01

    There is a growing debate regarding outcomes following complex hepato-pancreato-biliary (HPB) procedures. The purpose of our study is to examine whether facility type has any impact on complications, readmission rates, emergency department (ED) visit rates, and length of stay (LOS) for patients undergoing HPB surgery. The SPARCS administrative database was used to identify patients undergoing complex HPB procedures between 2012 and 2014 in New York. Univariate generalized linear mixed models were fit to estimate the marginal association between outcomes such as overall/severe complication rates, 30-day and 1-year readmission rates, 30-day and 1-year ED-visit rates, and potential risk factors. Univariate linear mixed models were used to estimate the marginal association between possible risk factors and LOS. Facility type, as well as any variables found to be significant in our univariate analysis (p = 0.05), was further included in the multivariable regression models. There were 4122 complex HPB procedures performed. Academic facilities were more likely to have a higher hospital volume (p < 0.0001). Patients undergoing surgery at academic facilities were less likely to have coexisting comorbidities; however, they were more likely to have metastatic cancer and/or liver disease (p = 0.0114, < 0.0001, and = 0.0299, respectively). Postoperatively, patients at non-academic facilities experienced higher overall complication rates, and higher severe complication rates, when compared to those at academic facilities (p < 0.0001 and = 0.0018, respectively). Further analysis via adjustment for possible confounding factors, however, revealed no significant difference in the risk of severe complications between the two facility types. Such adjustment also demonstrated higher 30-day readmission risk in patients who underwent their surgery at an academic facility. No significant difference was found when comparing the outcomes of academic and non-academic facilities, after adjusting for age, gender, race, region, insurance, and hospital volume. Patients from academic facilities were more likely to be readmitted within the first 30 days after surgery.
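    A minimal sketch of a univariate generalized linear mixed model of the type described above, assuming hypothetical column names and using statsmodels' variational-Bayes mixed GLM (the abstract does not state which software was used): a logistic model for 30-day readmission with a random intercept per hospital.

    ```python
    import pandas as pd
    from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

    df = pd.read_csv("hpb_cases.csv")                     # hypothetical extract, one row per procedure

    # logistic mixed model: fixed effects plus a random intercept per hospital
    model = BinomialBayesMixedGLM.from_formula(
        "readmit_30d ~ academic + age + C(insurance) + hosp_volume",
        vc_formulas={"hospital": "0 + C(hospital_id)"},
        data=df)
    result = model.fit_vb()                               # variational Bayes fit
    print(result.summary())
    ```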

  13. Mathematical programming formulations for satellite synthesis

    NASA Technical Reports Server (NTRS)

    Bhasin, Puneet; Reilly, Charles H.

    1987-01-01

    The problem of satellite synthesis can be described as optimally allotting locations, and sometimes frequencies and polarizations, to communication satellites so that interference from unwanted satellite signals does not exceed a specified threshold. In this report, mathematical programming models and optimization methods are used to solve satellite synthesis problems. A nonlinear programming formulation, which is solved using Zoutendijk's method and a gradient search method, is described. Nine mixed-integer programming models are considered. Results of computer runs with these nine models and five geographically compatible scenarios are presented and evaluated. A heuristic solution procedure is also used to solve two of the models studied. Heuristic solutions to three large synthesis problems are presented. The results of our analysis show that the heuristic performs very well, both in terms of solution quality and solution time, on the two models to which it was applied. It is concluded that the heuristic procedure is the best of the methods considered for solving satellite synthesis problems.
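    The sketch below is a deliberately simplified, assignment-style mixed-integer program in PuLP, not one of the report's nine formulations: each satellite is allotted exactly one candidate orbital slot, slots are not shared, and a precomputed (toy) interference penalty is minimized.

    ```python
    import pulp

    sats = ["A", "B", "C"]
    slots = [0, 1, 2, 3]                                     # candidate orbital locations
    penalty = {(s, l): (ord(s) * (l + 2)) % 7 + 1            # toy interference penalties
               for s in sats for l in slots}

    prob = pulp.LpProblem("satellite_synthesis_toy", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (sats, slots), cat="Binary")

    # objective: total interference penalty of the chosen allotment
    prob += pulp.lpSum(penalty[s, l] * x[s][l] for s in sats for l in slots)
    for s in sats:                                           # every satellite gets exactly one slot
        prob += pulp.lpSum(x[s][l] for l in slots) == 1
    for l in slots:                                          # no slot is shared
        prob += pulp.lpSum(x[s][l] for s in sats) <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print({s: next(l for l in slots if x[s][l].value() > 0.5) for s in sats})
    ```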

  14. 40 CFR Appendix 3 to Subpart A of... - Procedure for Mixing Base Fluids With Sediments (EPA Method 1646)

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 31 2012-07-01 2012-07-01 false Procedure for Mixing Base Fluids With Sediments (EPA Method 1646) 3 Appendix 3 to Subpart A of Part 435 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS (CONTINUED) OIL AND GAS EXTRACTION POINT...

  15. Grade 11 Students' Interconnected Use of Conceptual Knowledge, Procedural Skills, and Strategic Competence in Algebra: A Mixed Method Study of Error Analysis

    ERIC Educational Resources Information Center

    Egodawatte, Gunawardena; Stoilescu, Dorian

    2015-01-01

    The purpose of this mixed-method study was to investigate grade 11 university/college stream mathematics students' difficulties in applying conceptual knowledge, procedural skills, strategic competence, and algebraic thinking in solving routine (instructional) algebraic problems. A standardized algebra test was administered to thirty randomly…

  16. Identification of treatment responders based on multiple longitudinal outcomes with applications to multiple sclerosis patients.

    PubMed

    Kondo, Yumi; Zhao, Yinshan; Petkau, John

    2017-05-30

    Identification of treatment responders is a challenge in comparative studies where treatment efficacy is measured by multiple longitudinally collected continuous and count outcomes. Existing procedures often identify responders on the basis of only a single outcome. We propose a novel multiple longitudinal outcome mixture model that assumes that, conditionally on a cluster label, each longitudinal outcome is from a generalized linear mixed effect model. We utilize a Monte Carlo expectation-maximization algorithm to obtain the maximum likelihood estimates of our high-dimensional model and classify patients according to their estimated posterior probability of being a responder. We demonstrate the flexibility of our novel procedure on two multiple sclerosis clinical trial datasets with distinct data structures. Our simulation study shows that incorporating multiple outcomes improves the responder identification performance; this can occur even if some of the outcomes are ineffective. Our general procedure facilitates the identification of responders who are comprehensively defined by multiple outcomes from various distributions. Copyright © 2017 John Wiley & Sons, Ltd.
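    A simplified stand-in for the classification step described above (the paper's model places full generalized linear mixed models inside the mixture and fits them by Monte Carlo EM; the sketch below only illustrates classifying patients by posterior cluster probability, using hypothetical per-patient summary features and scikit-learn):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    # hypothetical per-patient summaries: [change in relapse rate, change in lesion count]
    features = np.vstack([rng.normal([-1.0, -0.8], 0.3, size=(40, 2)),   # "responder-like" patients
                          rng.normal([0.0, 0.1], 0.3, size=(60, 2))])    # "non-responder-like" patients

    gmm = GaussianMixture(n_components=2, random_state=0).fit(features)
    posterior = gmm.predict_proba(features)               # posterior cluster probabilities per patient
    responder = int(np.argmin(gmm.means_[:, 0]))          # component showing the larger improvement
    is_responder = posterior[:, responder] > 0.5          # classify by posterior probability
    print(is_responder.sum(), "of", len(features), "patients classified as responders")
    ```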

  17. Parameterization of large-scale turbulent diffusion in the presence of both well-mixed and weakly mixed patchy layers

    NASA Astrophysics Data System (ADS)

    Osman, M. K.; Hocking, W. K.; Tarasick, D. W.

    2016-06-01

    Vertical diffusion and mixing of tracers in the upper troposphere and lower stratosphere (UTLS) are not uniform, but primarily occur due to patches of turbulence that are intermittent in time and space. The effective diffusivity of regions of patchy turbulence is related to statistical parameters describing the morphology of turbulent events, such as lifetime, number, width, depth and local diffusivity (i.e., diffusivity within the turbulent patch) of the patches. While this has been recognized in the literature, the primary focus has been on well-mixed layers, with few exceptions. In such cases the local diffusivity is irrelevant, but this is not true for weakly and partially mixed layers. Here, we use both theory and numerical simulations to consider the impact of intermediate and weakly mixed layers, in addition to well-mixed layers. Previous approaches have considered only one dimension (vertical), and only a small number of layers (often one at each time step), and have examined mixing of constituents. We consider a two-dimensional case, with multiple layers (10 and more, up to hundreds and even thousands), having well-defined, non-infinite, lengths and depths. We then provide new formulas to describe cases involving well-mixed layers, which supersede earlier expressions. In addition, we look in detail at layers that are not well mixed, and, as an interesting variation on previous models, our procedure is based on tracking the dispersion of individual particles, which is quite different from the earlier approaches, which looked at mixing of constituents. We develop an expression which allows determination of the degree of mixing, and show that layers used in some previous models were in fact not well mixed and so produced erroneous results. We then develop a generalized model based on two-dimensional random-walk theory employing Rayleigh distributions, which allows us to develop a universal formula for diffusion rates for multiple two-dimensional layers with general degrees of mixing. We show that it is the largest, most vigorous and less common turbulent layers that make the major contribution to global diffusion. Finally, we make estimates of global-scale diffusion coefficients in the lower stratosphere and upper troposphere. For the lower stratosphere, κeff ≈ 2 × 10⁻² m² s⁻¹, assuming no other processes contribute to large-scale diffusion.
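    A schematic Monte Carlo sketch in the spirit of the particle-tracking approach described above, with arbitrary parameters rather than the paper's: particles take vertical random-walk steps only while inside intermittent turbulent patches, and the effective diffusivity is estimated from the growth of the ensemble variance, kappa_eff ≈ <z²>/(2t).

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps, dt = 5000, 2000, 1.0
    kappa_local = 0.5            # diffusivity inside a turbulent patch [m^2 s^-1]
    patch_fraction = 0.04        # fraction of steps a particle spends inside a patch

    z = np.zeros(n_particles)                     # vertical displacement of each particle
    for _ in range(n_steps):
        in_patch = rng.random(n_particles) < patch_fraction
        step = rng.normal(0.0, np.sqrt(2.0 * kappa_local * dt), n_particles)
        z += np.where(in_patch, step, 0.0)        # particles only disperse while inside a patch

    kappa_eff = z.var() / (2.0 * n_steps * dt)    # <z^2> / (2 t)
    print(f"effective diffusivity ~ {kappa_eff:.3e} m^2/s (~ patch_fraction * kappa_local)")
    ```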

  18. HIPPO Unit Commitment Version 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-01-17

    Developed for the Midcontinent Independent System Operator, Inc. (MISO), HIPPO Unit Commitment Version 1 solves the security-constrained unit commitment problem. The model was developed to solve MISO's cases. This version of the code includes an I/O module to read MISO's csv files, modules to create a state-based mixed-integer programming (MIP) formulation, and modules to test basic procedures for solving the MIP via HPC.

  19. Effective Simulation of Delamination in Aeronautical Structures Using Shells and Cohesive Elements

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; Turon, Albert

    2007-01-01

    A cohesive element for shell analysis is presented. The element can be used to simulate the initiation and growth of delaminations between stacked, non-coincident layers of shell elements. The procedure to construct the element accounts for the thickness offset by applying the kinematic relations of shell deformation to transform the stiffness and internal force of a zero-thickness cohesive element such that interfacial continuity between the layers is enforced. The procedure is demonstrated by simulating the response and failure of the Mixed Mode Bending test and a skin-stiffener debond specimen. In addition, it is shown that stacks of shell elements can be used to create effective models to predict the inplane and delamination failure modes of thick components. The results indicate that simple shell models can retain many of the necessary predictive attributes of much more complex 3D models while providing the computational efficiency that is necessary for design.
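    A hedged sketch of the transformation described above, in our own notation rather than the paper's: the stiffness and internal force of the zero-thickness cohesive element are mapped to the shell degrees of freedom through a kinematic operator that accounts for the thickness offset, which is what enforces interfacial continuity between the stacked layers.

    ```latex
    % Our notation, not the paper's: T is the kinematic operator that maps shell
    % mid-surface degrees of freedom u_shell to interface displacements across the
    % thickness offset; K_c and f_c are the zero-thickness cohesive element's
    % stiffness and internal force.
    \[
      \mathbf{u}_{\mathrm{int}} = \mathbf{T}\,\mathbf{u}_{\mathrm{shell}},
      \qquad
      \mathbf{K}_{\mathrm{shell}} = \mathbf{T}^{\mathsf{T}}\mathbf{K}_{c}\,\mathbf{T},
      \qquad
      \mathbf{f}_{\mathrm{shell}} = \mathbf{T}^{\mathsf{T}}\mathbf{f}_{c}.
    \]
    ```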

  20. Cohesive Elements for Shells

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Camanho, Pedro P.; Turon, Albert

    2007-01-01

    A cohesive element for shell analysis is presented. The element can be used to simulate the initiation and growth of delaminations between stacked, non-coincident layers of shell elements. The procedure to construct the element accounts for the thickness offset by applying the kinematic relations of shell deformation to transform the stiffness and internal force of a zero-thickness cohesive element such that interfacial continuity between the layers is enforced. The procedure is demonstrated by simulating the response and failure of the Mixed Mode Bending test and a skin-stiffener debond specimen. In addition, it is shown that stacks of shell elements can be used to create effective models to predict the inplane and delamination failure modes of thick components. The results indicate that simple shell models can retain many of the necessary predictive attributes of much more complex 3D models while providing the computational efficiency that is necessary for design.

  1. Improving Transportation Services for the University of the Thai Chamber of Commerce: A Case Study on Solving the Mixed-Fleet Vehicle Routing Problem with Split Deliveries

    NASA Astrophysics Data System (ADS)

    Suthikarnnarunai, N.; Olinick, E.

    2009-01-01

    We present a case study on the application of techniques for solving the Vehicle Routing Problem (VRP) to improve the transportation service provided by the University of The Thai Chamber of Commerce to its staff. The problem is modeled as VRP with time windows, split deliveries, and a mixed fleet. An exact algorithm and a heuristic solution procedure are developed to solve the problem and implemented in the AMPL modeling language and CPLEX Integer Programming solver. Empirical results indicate that the heuristic can find relatively good solutions in a small fraction of the time required by the exact method. We also perform sensitivity analysis and find that a savings in outsourcing cost can be achieved with a small increase in vehicle capacity.

  2. Real medical benefit assessed by indirect comparison.

    PubMed

    Falissard, Bruno; Zylberman, Myriam; Cucherat, Michel; Izard, Valérie; Meyer, François

    2009-01-01

    Frequently, in data packages submitted for Marketing Approval to the CHMP, there is a lack of relevant head-to-head comparisons of medicinal products that could enable national authorities responsible for the approval of reimbursement to assess the Added Therapeutic Value (ASMR) of new clinical entities or line extensions of existing therapies. Indirect or mixed treatment comparisons (MTC) are methods stemming from the field of meta-analysis that have been designed to tackle this problem. Adjusted indirect comparisons, meta-regressions, mixed models, and Bayesian network analyses pool results of randomised controlled trials (RCTs), enabling a quantitative synthesis. The REAL procedure, recently developed by the HAS (French National Authority for Health), is a mixture of an MTC and an effect model based on expert opinion. It is intended to translate the efficacy observed in the trials into effectiveness expected in day-to-day clinical practice in France.

  3. Computational Analyses of Pressurization in Cryogenic Tanks

    NASA Technical Reports Server (NTRS)

    Ahuja, Vineet; Hosangadi, Ashvin; Lee, Chun P.; Field, Robert E.; Ryan, Harry

    2010-01-01

    A comprehensive numerical framework utilizing multi-element unstructured CFD and rigorous real fluid property routines has been developed to carry out analyses of propellant tank and delivery systems at NASA SSC. Traditionally, CFD modeling of pressurization and mixing in cryogenic tanks has been difficult primarily because the fluids in the tank co-exist in different sub-critical and supercritical states with largely varying properties that have to be accurately accounted for in order to predict the correct mixing and phase change between the ullage and the propellant. For example, during tank pressurization under some circumstances, rapid mixing of relatively warm pressurant gas with cryogenic propellant can lead to rapid densification of the gas and loss of pressure in the tank. This phenomenon can cause serious problems during testing because of the resulting decrease in propellant flow rate. With proper physical models implemented, CFD can model the coupling between the propellant and pressurant including heat transfer and phase change effects and accurately capture the complex physics in the evolving flowfields. This holds the promise of allowing the specification of operational conditions and procedures that could minimize the undesirable mixing and heat transfer inherent in propellant tank operation. In our modeling framework, we incorporated two different approaches to real fluids modeling: (a) the first approach is based on the HBMS model developed by Hirschfelder, Buehler, McGee and Sutton and (b) the second approach is based on a cubic equation of state developed by Soave, Redlich and Kwong (SRK). Both approaches cover fluid properties and property variation spanning sub-critical gas and liquid states as well as the supercritical states. Both models were rigorously tested and properties for common fluids such as oxygen, nitrogen, hydrogen, etc., were compared against NIST data in both the sub-critical as well as supercritical regimes.

  4. Magnetic anisotropy in binuclear complexes in the weak-exchange limit: From the multispin to the giant-spin Hamiltonian

    NASA Astrophysics Data System (ADS)

    Maurice, Rémi; de Graaf, Coen; Guihéry, Nathalie

    2010-06-01

    This paper studies the physical basis of the giant-spin Hamiltonian, which is usually used to describe the anisotropy of single-molecule magnets. A rigorous extraction of the model has been performed in the weak-exchange limit of a binuclear centrosymmetric Ni(II) complex, using correlated ab initio calculations and effective Hamiltonian theory. It is shown that the giant-spin Hamiltonian is not appropriate to describe polynuclear complexes as soon as spin mixing becomes non-negligible. A relevant model is proposed involving fourth-order operators, different from the traditionally used Stevens operators. The new giant-spin Hamiltonian correctly reproduces the effects of the spin mixing in the weak-exchange limit. A procedure to switch on and off the spin mixing in the extraction has been implemented in order to separate this effect from other anisotropic effects and to numerically evaluate both contributions to the tunnel splitting. Furthermore, the new giant-spin Hamiltonian has been derived analytically from the multispin Hamiltonian at the second order of perturbation and the theoretical link between the two models is studied to gain understanding concerning the microscopic origin of the fourth-order interaction in terms of axial, rhombic, or mixed (axial-rhombic) character. Finally, an adequate method is proposed to extract the proper magnetic axes frame for polynuclear anisotropic systems.

  5. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
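    The dependence of decoded noise on the coding matrix can be illustrated with a standard multiplexing calculation (a hedged sketch of the general idea, not the paper's exact criterion or its genetic-algorithm search): measurements follow y = Sx + n, the decode uses the pseudo-inverse of S, and the per-measurement noise has a constant part plus a part proportional to the number of sources switched on in that code.

    ```python
    import numpy as np

    def decoded_noise_metric(S, sigma_const=1.0, sigma_prop=0.5):
        """Mean decoded-pixel variance for a binary code matrix S under mixed noise."""
        S = np.asarray(S, dtype=float)
        P = np.linalg.pinv(S)                                          # linear decoding operator
        per_code_var = sigma_const**2 + sigma_prop**2 * S.sum(axis=1)  # constant + proportional parts
        return float(((P**2) @ per_code_var).mean())                   # propagate noise through the decode

    # Sylvester Hadamard matrix of order 8 -> standard 7x7 S-matrix (half the sources on per code)
    H2 = np.array([[1, 1], [1, -1]])
    H8 = np.kron(H2, np.kron(H2, H2))
    S7 = (1 - H8[1:, 1:]) // 2

    print("one source at a time:", decoded_noise_metric(np.eye(7)))
    print("Hadamard S-matrix   :", decoded_noise_metric(S7))
    ```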

  6. Partitioning strategy for efficient nonlinear finite element dynamic analysis on multiprocessor computers

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1989-01-01

    A computational procedure is presented for the nonlinear dynamic analysis of unsymmetric structures on vector multiprocessor systems. The procedure is based on a novel hierarchical partitioning strategy in which the response of the unsymmetric structure is approximated by a combination of symmetric and antisymmetric response vectors (modes), each obtained by using only a fraction of the degrees of freedom of the original finite element model. The three key elements of the procedure which result in a high degree of concurrency throughout the solution process are: (1) mixed (or primitive variable) formulation with independent shape functions for the different fields; (2) operator splitting or restructuring of the discrete equations at each time step to delineate the symmetric and antisymmetric vectors constituting the response; and (3) two level iterative process for generating the response of the structure. An assessment is made of the effectiveness of the procedure on the CRAY X-MP/4 computers.

  7. 7 CFR 58.638 - Freezing the mix.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Freezing the mix. 58.638 Section 58.638 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.638 Freezing the mix. After the mix enters the freezer, it shall be frozen as rapidly as...

  8. 7 CFR 58.634 - Assembling and combining mix ingredients.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Assembling and combining mix ingredients. 58.634... Service 1 Operations and Operating Procedures § 58.634 Assembling and combining mix ingredients. The assembling and combining of mix ingredients for processing shall be in accordance with clean and sanitary...

  9. 7 CFR 58.638 - Freezing the mix.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Freezing the mix. 58.638 Section 58.638 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.638 Freezing the mix. After the mix enters the freezer, it shall be frozen as rapidly as...

  10. 7 CFR 58.634 - Assembling and combining mix ingredients.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Assembling and combining mix ingredients. 58.634... Service 1 Operations and Operating Procedures § 58.634 Assembling and combining mix ingredients. The assembling and combining of mix ingredients for processing shall be in accordance with clean and sanitary...

  11. 7 CFR 58.634 - Assembling and combining mix ingredients.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Assembling and combining mix ingredients. 58.634... Service 1 Operations and Operating Procedures § 58.634 Assembling and combining mix ingredients. The assembling and combining of mix ingredients for processing shall be in accordance with clean and sanitary...

  12. 7 CFR 58.638 - Freezing the mix.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Freezing the mix. 58.638 Section 58.638 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.638 Freezing the mix. After the mix enters the freezer, it shall be frozen as rapidly as...

  13. 7 CFR 58.637 - Cooling the mix.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Cooling the mix. 58.637 Section 58.637 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.637 Cooling the mix. The mix shall be immediately cooled to a temperature of 45 °F. or lower...

  14. 7 CFR 58.635 - Pasteurization of the mix.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Pasteurization of the mix. 58.635 Section 58.635 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.635 Pasteurization of the mix. Every particle of the mix, except added flavoring ingredients...

  15. 7 CFR 58.637 - Cooling the mix.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Cooling the mix. 58.637 Section 58.637 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.637 Cooling the mix. The mix shall be immediately cooled to a temperature of 45 °F. or lower...

  16. 7 CFR 58.637 - Cooling the mix.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 3 2013-01-01 2013-01-01 false Cooling the mix. 58.637 Section 58.637 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.637 Cooling the mix. The mix shall be immediately cooled to a temperature of 45 °F. or lower...

  17. 7 CFR 58.635 - Pasteurization of the mix.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 3 2012-01-01 2012-01-01 false Pasteurization of the mix. 58.635 Section 58.635 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.635 Pasteurization of the mix. Every particle of the mix, except added flavoring ingredients...

  18. 7 CFR 58.635 - Pasteurization of the mix.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 3 2014-01-01 2014-01-01 false Pasteurization of the mix. 58.635 Section 58.635 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards... Procedures § 58.635 Pasteurization of the mix. Every particle of the mix, except added flavoring ingredients...

  19. 29 CFR 1910.426 - Mixed-gas diving.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 5 2010-07-01 2010-07-01 false Mixed-gas diving. 1910.426 Section 1910.426 Labor... OCCUPATIONAL SAFETY AND HEALTH STANDARDS Commercial Diving Operations Specific Operations Procedures § 1910.426 Mixed-gas diving. (a) General. Employers engaged in mixed-gas diving shall comply with the following...

  20. 46 CFR 197.434 - Surface-supplied mixed-gas diving.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 46 Shipping 7 2010-10-01 2010-10-01 false Surface-supplied mixed-gas diving. 197.434 Section 197... HEALTH STANDARDS GENERAL PROVISIONS Commercial Diving Operations Specific Diving Mode Procedures § 197.434 Surface-supplied mixed-gas diving. The diving supervisor shall insure that— (a) When mixed-gas...

  1. Designing A Mixed Methods Study In Primary Care

    PubMed Central

    Creswell, John W.; Fetters, Michael D.; Ivankova, Nataliya V.

    2004-01-01

    BACKGROUND Mixed methods or multimethod research holds potential for rigorous, methodologically sound investigations in primary care. The objective of this study was to use criteria from the literature to evaluate 5 mixed methods studies in primary care and to advance 3 models useful for designing such investigations. METHODS We first identified criteria from the social and behavioral sciences to analyze mixed methods studies in primary care research. We then used the criteria to evaluate 5 mixed methods investigations published in primary care research journals. RESULTS Of the 5 studies analyzed, 3 included a rationale for mixing based on the need to develop a quantitative instrument from qualitative data or to converge information to best understand the research topic. Quantitative data collection involved structured interviews, observational checklists, and chart audits that were analyzed using descriptive and inferential statistical procedures. Qualitative data consisted of semistructured interviews and field observations that were analyzed using coding to develop themes and categories. The studies showed diverse forms of priority: equal priority, qualitative priority, and quantitative priority. Data collection involved quantitative and qualitative data gathered both concurrently and sequentially. The integration of the quantitative and qualitative data in these studies occurred between data analysis from one phase and data collection from a subsequent phase, while analyzing the data, and when reporting the results. DISCUSSION We recommend instrument-building, triangulation, and data transformation models for mixed methods designs as useful frameworks to add rigor to investigations in primary care. We also discuss the limitations of our study and the need for future research. PMID:15053277

  2. Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations

    NASA Technical Reports Server (NTRS)

    Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli

    2001-01-01

    A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, which is in agreement with experimental observations.

  3. Vertical eddy diffusivity as a control parameter in the tropical Pacific

    NASA Astrophysics Data System (ADS)

    Martinez Avellaneda, N.; Cornuelle, B.

    2011-12-01

    Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions responsible for mixing and energy dissipation. Unrealistic small-scale physics in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly-simulated upwelling, currents and air-sea interactions. This is of special importance in the tropical Pacific Ocean (TP), which is home to energetic air-sea interactions that affect global climate. It has been shown in a number of studies that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. This work is part of a NASA-funded project to estimate the space- and time-varying ocean mixing coefficients in an eddy-permitting (1/3°) model of the TP to obtain an improved estimate of its time-varying circulation and its underlying dynamics. While an estimation procedure for the TP (26°S-30°N) is underway using the MIT general circulation model, complementary adjoint-based sensitivity studies have been carried out for the starting ocean state from Forget (2010). This analysis aids the interpretation of the estimated mixing coefficients and possible error compensation. The focus of the sensitivity tests is the Equatorial Undercurrent and sub-thermocline jets (i.e., Tsuchiya Jets), which have been thought to have strong dependence on vertical diffusivity and should provide checks on the estimated mixing parameters. In order to build intuition for the vertical diffusivity adjoint results in the TP, adjoint and forward perturbed simulations were carried out for an idealized sharp thermocline in a rectangular domain.

  4. Mass and momentum turbulent transport experiments with confined swirling coaxial jets

    NASA Technical Reports Server (NTRS)

    Roback, R.; Johnson, B. V.

    1983-01-01

    An experimental study of swirling coaxial jets mixing downstream of discharge into an expanded duct was conducted to obtain data for the evaluation and improvement of turbulent transport models currently used in a variety of computational procedures throughout the combustion community. A combination of laser velocimeter (LV) and laser-induced fluorescence (LIF) techniques was employed to obtain mean and fluctuating velocity and concentration distributions, which were used to derive mass and momentum turbulent transport parameters currently incorporated into various combustor flow models. Flow visualization techniques were also employed to determine qualitatively the time-dependent characteristics of the flow and the scale of turbulence. The results of these measurements indicated that the largest momentum turbulent transport was in the r-z plane. Peak momentum turbulent transport rates were approximately the same as those for the nonswirling flow condition. The mass turbulent transport process for swirling flow was complicated. Mixing occurred in several steps of axial and radial mass transport and was coupled with a large radial mean convective flux. Mixing for swirling flow was completed in one-third the length required for nonswirling flow.

  5. Nucleosynthesis in neutrino-driven, aspherical Population III supernovae

    NASA Astrophysics Data System (ADS)

    Fujimoto, Shin-ichiro; Hashimoto, Masa-aki; Ono, Masaomi; Kotake, Kei

    2012-09-01

    We investigate explosive nucleosynthesis during neutrino-driven, aspherical supernova (SN) explosions aided by standing accretion shock instability (SASI), based on two-dimensional hydrodynamic simulations of the explosion of 11, 15, 20, 25, 30 and 40 M⊙ stars with zero metallicity. The magnitude and asymmetry of the explosion energy are estimated from the simulations for a given set of neutrino luminosities and temperatures, unlike previous studies in which the explosion is initiated manually and spherically by means of a thermal bomb or a piston and artificial mixing procedures are applied to estimate the abundances of the SN ejecta. By post-processing calculations with a large nuclear reaction network, we have evaluated abundances and masses of ejecta from the aspherical SNe. We find that matter mixing induced via SASI is important for the abundant production of nuclei with atomic number ≥ 21, in particular Sc, which is underproduced in the spherical models without artificial mixing. We also find that the IMF-averaged abundances are similar to those observed in extremely metal-poor stars. However, observed [K/Fe] cannot be reproduced with our aspherical SN models.

  6. Accounting for measurement reliability to improve the quality of inference in dental microhardness research: a worked example.

    PubMed

    Sever, Ivan; Klaric, Eva; Tarle, Zrinka

    2016-07-01

    Dental microhardness experiments are influenced by unobserved factors related to the varying tooth characteristics that affect measurement reproducibility. This paper explores the appropriate analytical tools for modeling different sources of unobserved variability to reduce the biases encountered and increase the validity of microhardness studies. The enamel microhardness of human third molars was measured by Vickers diamond. The effects of five bleaching agents (10, 16, and 30% carbamide peroxide, and 25 and 38% hydrogen peroxide) were examined, as well as the effect of artificial saliva and amorphous calcium phosphate. To account for both between- and within-tooth heterogeneity in evaluating treatment effects, the statistical analysis was performed in the mixed-effects framework, which also included an appropriate weighting procedure to adjust for confounding. The results were compared to those of the standard ANOVA model usually applied. The weighted mixed-effects model produced parameter estimates that differed in magnitude and significance from those of the standard ANOVA model. The results of the former model were more intuitive, with more precise estimates and better fit. Confounding could seriously bias the study outcomes, highlighting the need for more robust statistical procedures in dental research that account for measurement reliability. The presented framework is more flexible and informative than existing analytical techniques and may improve the quality of inference in dental research. Reported results could be misleading if the underlying heterogeneity of microhardness measurements is not taken into account. The confidence in treatment outcomes could be increased by applying the framework presented.
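
    As a minimal sketch of the core idea (a random intercept per tooth to separate between- and within-tooth variability), the following Python code fits such a mixed-effects model with statsmodels. The column names (hardness, treatment, tooth) and the data file are hypothetical, and the paper's weighting adjustment for confounding is not reproduced.

    ```python
    # Minimal sketch (not the authors' code): a mixed-effects model for Vickers
    # microhardness with a random intercept per tooth.
    # Column names (hardness, treatment, tooth) are hypothetical; the paper's
    # weighting procedure to adjust for confounding is omitted.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("microhardness.csv")  # hypothetical file: one row per indentation

    # Fixed effect: bleaching treatment; random intercept: tooth (between-tooth
    # heterogeneity); residual variance captures within-tooth variability.
    model = smf.mixedlm("hardness ~ C(treatment)", data=df, groups=df["tooth"])
    result = model.fit(reml=True)
    print(result.summary())
    ```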

  7. AST Critical Propulsion and Noise Reduction Technologies for Future Commercial Subsonic Engines: Separate-Flow Exhaust System Noise Reduction Concept Evaluation

    NASA Technical Reports Server (NTRS)

    Janardan, B. A.; Hoff, G. E.; Barter, J. W.; Martens, S.; Gliebe, P. R.; Mengle, V.; Dalton, W. N.; Saiyed, Naseem (Technical Monitor)

    2000-01-01

    This report describes the work performed by General Electric Aircraft Engines (GEAE) and Allison Engine Company (AEC) on NASA Contract NAS3-27720 AoI 14.3. The objective of this contract was to generate quality jet noise acoustic data for separate-flow nozzle models and to design and verify new jet-noise-reduction concepts over a range of simulated engine cycles and flight conditions. Five baseline axisymmetric separate-flow nozzle models having bypass ratios of five and eight with internal and external plugs and 11 different mixing-enhancer model nozzles (including chevrons, vortex-generator doublets, and a tongue mixer) were designed and tested in model scale. Using available core and fan nozzle hardware in various combinations, 28 GEAE/AEC separate-flow nozzle/mixing-enhancer configurations were acoustically evaluated in the NASA Glenn Research Center Aeroacoustic and Propulsion Laboratory. This report describes model nozzle features, facility and data acquisition/reduction procedures, the test matrix, and measured acoustic data analyses. A number of tested core and fan mixing enhancer devices and combinations of devices gave significant jet noise reduction relative to separate-flow baseline nozzles. Inward-flip and alternating-flip core chevrons combined with a straight-chevron fan nozzle exceeded the NASA stretch goal of 3 EPNdB jet noise reduction at typical sideline certification conditions.

  8. Acceptance procedures for dense-graded mixes

    DOT National Transportation Integrated Search

    2001-03-01

    Recent literature related to acceptance procedures for dense-graded mixtures is summarized. Current state of practice and development of acceptance procedures are reviewed. Many agencies are reducing the number of process control-related parameters i...

  9. Fatigue crack growth and life prediction under mixed-mode loading

    NASA Astrophysics Data System (ADS)

    Sajith, S.; Murthy, K. S. R. K.; Robi, P. S.

    2018-04-01

    Fatigue crack growth life as a function of crack length is essential for the prevention of catastrophic failures from a damage tolerance perspective. In the damage tolerance design approach, principles of fracture mechanics are usually applied to predict the fatigue life of structural components. Numerical prediction of crack growth versus number of cycles is essential in damage tolerance design. For cracks under mixed mode I/II loading, the modified Paris law, da/dN = C (ΔKeq)^m, along with an equivalent stress intensity factor (ΔKeq) model is used for fatigue crack growth rate prediction. A large number of ΔKeq models are available for mixed mode I/II loading, and the selection of a proper ΔKeq model has a significant impact on fatigue life prediction. In the present investigation, the performance of ΔKeq models in fatigue life prediction is compared with respect to the experimental findings, as there are no guidelines/suggestions available on the selection of these models for accurate and/or conservative predictions of fatigue life. Within the limitations of the available experimental data and currently available numerical simulation techniques, the results of the present study attempt to outline models that would provide accurate and conservative life predictions. Such a study aids numerical analysts and engineers in the proper selection of the model for numerical simulation of the fatigue life. Moreover, the present investigation also suggests a procedure to enhance the accuracy of life prediction using the Paris law.
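
    To make the life-prediction step concrete, the sketch below numerically integrates the Paris law da/dN = C (ΔK)^m for a centre crack under constant-amplitude loading. The Paris constants, stress range, crack sizes and geometry factor are illustrative assumptions, not values from the paper; a mixed-mode ΔKeq expression would simply replace the mode I ΔK shown.

    ```python
    # Minimal sketch: numerically integrating the Paris law
    #   da/dN = C * (dK)^m
    # for a centre crack in an infinite plate under constant-amplitude loading.
    # C, m, stress range and crack sizes are illustrative, not the paper's data.
    import numpy as np

    C, m = 1.0e-11, 3.0          # Paris constants (illustrative)
    dsigma = 100.0e6             # stress range [Pa]
    a0, af = 1.0e-3, 20.0e-3     # initial and final half crack lengths [m]

    def delta_K(a):
        # Mode I SIF range for a centre crack in an infinite plate.
        # A mixed-mode dK_eq model would replace this expression.
        return dsigma * np.sqrt(np.pi * a)

    # Life N = integral from a0 to af of da / (C * dK(a)^m), via the trapezoidal rule.
    a = np.linspace(a0, af, 2000)
    dN_da = 1.0 / (C * delta_K(a) ** m)
    N = np.trapz(dN_da, a)
    print(f"Predicted life: {N:,.0f} cycles")
    ```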

  10. Biotechnology

    NASA Image and Video Library

    2003-05-05

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. The beads are similar in size and density to human lymphoid cells. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light. In this photograph, a TCM is shown after mixing protocols, and bubbles of various sizes can be seen.

  11. The Effect of Orthodontic Model Fabrication Procedures on Gypsum Materials.

    DTIC Science & Technology

    1992-06-01

    views expressed in this paper are exclusively those of the author and do not reflect those of the United States Air Force Dental Corps, the United... Artificial Stone ... Improved Dental Stone ... Manipulation Techniques ... Mixing... properties which can be obtained, gypsum has been found to have applications in most dental specialties, orthodontics being no exception. Since

  12. Multi-Objective Optimization of Mixed Variable, Stochastic Systems Using Single-Objective Formulations

    DTIC Science & Technology

    2008-03-01

    investigated, as well as the methodology used. Chapter IV presents the data collection and analysis procedures, and the resulting analysis and... interpolate the data, although a non-interpolating model is possible. For this research Design and Analysis of Computer Experiments (DACE) is used... followed by the analysis. 4.1. Testing Approach The initial SMOMADS algorithm used for this research was acquired directly from Walston [70]. The

  13. Topics in Statistical Calibration

    DTIC Science & Technology

    2014-03-27

    on a parametric bootstrap where, instead of sampling directly from the residuals, samples are drawn from a normal distribution. This procedure will... addition to centering them (Davison and Hinkley, 1997). When there are outliers in the residuals, the bootstrap distribution of x̂0 can become skewed or... based and inversion methods using the linear mixed-effects model. Then, a simple parametric bootstrap algorithm is proposed that can be used to either

  14. Experimental Applications of Automatic Test Markup Language (ATML)

    NASA Technical Reports Server (NTRS)

    Lansdowne, Chatwin A.; McCartney, Patrick; Gorringe, Chris

    2012-01-01

    The authors describe challenging use-cases for Automatic Test Markup Language (ATML), and evaluate solutions. The first case uses ATML Test Results to deliver active features to support test procedure development and test flow, and bridging mixed software development environments. The second case examines adding attributes to Systems Modelling Language (SysML) to create a linkage for deriving information from a model to fill in an ATML document set. Both cases are outside the original concept of operations for ATML but are typical when integrating large heterogeneous systems with modular contributions from multiple disciplines.

  15. Progressive Damage Analyses of Skin/Stringer Debonding

    NASA Technical Reports Server (NTRS)

    Daville, Carlos G.; Camanho, Pedro P.; deMoura, Marcelo F.

    2004-01-01

    The debonding of skin/stringer constructions is analyzed using a step-by-step simulation of material degradation based on strain softening decohesion elements and a ply degradation procedure. Decohesion elements with mixed-mode capability are placed at the interface between the skin and the flange to simulate the initiation and propagation of the delamination. In addition, the initiation and accumulation of fiber failure and matrix damage is modeled using Hashin-type failure criteria and their corresponding material degradation schedules. The debonding predictions using simplified three-dimensional models correlate well with test results.

  16. Decision Process to Identify Lessons for Transition to a Distributed (or Blended) Learning Instructional Format

    DTIC Science & Technology

    2009-09-01

    instructional format. Using a mixed-method coding and analysis approach, the sample of POIs were categorized, coded, statistically analyzed, and a... transition to a distributed (or blended) learning format. Procedure: A mixed-methods approach, combining qualitative coding procedures with basic

  17. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Polar Mixing Optical Phonon Spectra in Wurtzite GaN Cylindrical Quantum Dots: Quantum Size and Dielectric Effects

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Liao, Jian-Shang

    2010-05-01

    The interface-optical-propagating (IO-PR) mixing phonon modes of a quasi-zero-dimensional (Q0D) wurtzite cylindrical quantum dot (QD) structure are derived and studied by employing the macroscopic dielectric continuum model. The analytical phonon states of the IO-PR mixing modes are given. It is found that there are two types of IO-PR mixing phonon modes, i.e. ρ-IO/z-PR mixing modes and z-IO/ρ-PR mixing modes, in Q0D wurtzite QDs, and each IO-PR mixing mode has symmetrical and antisymmetrical forms. Via a standard field quantization procedure, the Fröhlich Hamiltonians of the electron-(IO-PR) mixing phonon interaction are obtained. Numerical calculations on a wurtzite GaN cylindrical QD are performed. The results reveal that the radial-direction size, the axial-direction size and the dielectric matrix all have a great influence on the dispersive frequencies of the IO-PR mixing phonon modes. The limiting features of the dispersion curves of these phonon modes are discussed in depth. The phonon mode "reducing" behavior of wurtzite quantum-confined systems is clearly observed in these structures. Moreover, the degeneration of the IO-PR mixing phonon modes in wurtzite Q0D QDs into the IO modes and PR modes of wurtzite Q2D QW and Q1D QWR systems is analyzed in depth from the viewpoints of both physics and mathematics.

  18. Energy Efficient Engine exhaust mixer model technology report addendum; phase 3 test program

    NASA Technical Reports Server (NTRS)

    Larkin, M. J.; Blatt, J. R.

    1984-01-01

    The Phase 3 exhaust mixer test program was conducted to explore the trends established during the previous Phases 1 and 2. Combinations of mixer design parameters were tested. Phase 3 testing showed that the best performance achievable within tailpipe length and diameter constraints is 2.55 percent better than an optimized separate-flow baseline. A reduced penetration design achieved about the same overall performance level at a substantially lower level of excess pressure loss but with a small reduction in mixing. To improve the reliability of the data, the hot and cold flow thrust coefficient analysis used in Phases 1 and 2 was augmented by calculating percent mixing from traverse data. The relative change in percent mixing between configurations was determined from thrust and flow coefficient increments. The calculation procedure developed was found to be a useful tool in assessing mixer performance. Detailed flow field data were obtained to facilitate calibration of computer codes.

  19. Investigations for Thermal and Electrical Conductivity of ABS-Graphene Blended Prototypes

    PubMed Central

    Singh, Rupinder; Sandhu, Gurleen S.; Penna, Rosa; Farina, Ilenia

    2017-01-01

    Thermoplastic materials such as acrylonitrile-butadiene-styrene (ABS) and Nylon are widely used in three-dimensional printing of functional/non-functional prototypes. Usually these polymer-based prototypes are lacking in thermal and electrical conductivity. Graphene (Gr) has attracted considerable interest in the recent past due to its inherent mechanical, thermal, and electrical properties. This paper presents a step-by-step procedure (as a case study) for the development of an in-house ABS-Gr blended composite feedstock filament for fused deposition modelling (FDM) applications. The feedstock filament has been prepared by two different methods (mechanical and chemical mixing). For mechanical mixing, a twin screw extrusion (TSE) process has been used, and for chemical mixing, the composite of Gr in an ABS matrix has been prepared by chemical dissolution, followed by mechanical blending through TSE. Finally, the electrical and thermal conductivity of functional prototypes prepared from the composite feedstock filaments have been optimized. PMID:28773244

  20. Conditional Monte Carlo randomization tests for regression models.

    PubMed

    Parhat, Parwen; Rosenberger, William F; Diao, Guoqing

    2014-08-15

    We discuss the computation of randomization tests for clinical trials of two treatments when the primary outcome is based on a regression model. We begin by revisiting the seminal paper of Gail, Tan, and Piantadosi (1988), and then describe a method based on Monte Carlo generation of randomization sequences. The tests based on this Monte Carlo procedure are design based, in that they incorporate the particular randomization procedure used. We discuss permuted block designs, complete randomization, and biased coin designs. We also use a new technique by Plamadeala and Rosenberger (2012) for simple computation of conditional randomization tests. Like Gail, Tan, and Piantadosi, we focus on residuals from generalized linear models and martingale residuals from survival models. Such techniques do not apply to longitudinal data analysis, and we introduce a method for computation of randomization tests based on the predicted rate of change from a generalized linear mixed model when outcomes are longitudinal. We show, by simulation, that these randomization tests preserve the size and power well under model misspecification. Copyright © 2014 John Wiley & Sons, Ltd.
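
    As a minimal illustration of a design-based Monte Carlo randomization test (this is a generic sketch, not the authors' algorithm), the code below computes a test statistic from the residuals of a covariate-only regression and re-generates treatment assignments under complete randomization; all data and names are simulated or hypothetical.

    ```python
    # Minimal sketch: a Monte Carlo randomization test for a two-arm trial,
    # using residuals from a covariate-only regression model and re-generating
    # assignments under complete randomization (other designs would change the
    # assignment step). Not the authors' procedure; data are simulated.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)

    def randomization_test(y, X_cov, treat, n_mc=5000):
        # Residuals from a model that adjusts for covariates but not treatment.
        resid = sm.OLS(y, sm.add_constant(X_cov)).fit().resid

        def stat(assign):
            return resid[assign == 1].mean() - resid[assign == 0].mean()

        observed = stat(treat)
        n, n_treat = len(treat), int(treat.sum())
        count = 0
        for _ in range(n_mc):
            perm = np.zeros(n, dtype=int)
            perm[rng.choice(n, n_treat, replace=False)] = 1  # complete randomization
            if abs(stat(perm)) >= abs(observed):
                count += 1
        return (count + 1) / (n_mc + 1)  # Monte Carlo p-value

    # Example with simulated data
    n = 100
    X_cov = rng.normal(size=(n, 2))
    treat = np.zeros(n, dtype=int)
    treat[rng.choice(n, n // 2, replace=False)] = 1
    y = X_cov @ np.array([0.5, -0.3]) + 0.4 * treat + rng.normal(size=n)
    print("p-value:", randomization_test(y, X_cov, treat))
    ```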

  1. Acceptance procedures for dense-graded mixes : literature review.

    DOT National Transportation Integrated Search

    2001-03-01

    Recent literature related to acceptance procedures for dense-graded mixtures is summarized. Current state of practice and development of acceptance procedures are reviewed. Many agencies are reducing the number of process control-related parameters i...

  2. Advances in contact algorithms and their application to tires

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.

    1988-01-01

    Currently used techniques for tire contact analysis are reviewed. Discussion focuses on the different techniques used in modeling frictional forces and the treatment of contact conditions. A status report is presented on a new computational strategy for the modeling and analysis of tires, including the solution of the contact problem. The key elements of the proposed strategy are: (1) use of semianalytic mixed finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) use of perturbed Lagrangian formulation for the determination of the contact area and pressure; and (3) application of multilevel iterative procedures and reduction techniques to generate the response of the tire. Numerical results are presented to demonstrate the effectiveness of a proposed procedure for generating the tire response associated with different Fourier harmonics.

  3. Trends in surgical management and pre-operative urodynamics in female medicare beneficiaries with mixed incontinence.

    PubMed

    Chughtai, Bilal; Hauser, Nicholas; Anger, Jennifer; Asfaw, Tirsit; Laor, Leanna; Mao, Jialin; Lee, Richard; Te, Alexis; Kaplan, Steven; Sedrakyan, Art

    2017-02-01

    We sought to examine the surgical trends and utilization of treatment for mixed urinary incontinence among female Medicare beneficiaries. Data were obtained from a 5% national random sample of outpatient and carrier claims from 2000 to 2011. Included were female patients 65 and older, diagnosed with mixed urinary incontinence, who underwent surgical treatment identified by Current Procedural Terminology, Fourth Edition (CPT-4) codes. Urodynamics (UDS) before initial and secondary procedures were also identified using CPT-4 codes. Procedural trends and utilization of UDS were analyzed. Utilization of UDS increased during the study period, from 38.4% to 74.0% prior to initial surgical intervention, and from 28.6% to 62.5% preceding re-intervention. Sling surgery (63.0%) and injectable bulking agents (28.0%) were the most common surgical treatments adopted, followed by sacral nerve stimulation (SNS) (4.8%) and Burch (4.0%) procedures. Re-intervention was performed in 4.0% of patients initially treated with sling procedures and 21.3% of patients treated with bulking agents, the majority of whom (51.7% and 76.3%, respectively) underwent injection of a bulking agent. Risk of re-intervention was not different among those who did or did not receive urodynamic tests prior to the initial procedure (8.5% vs. 9.3%). CONCLUSIONS: Sling procedures and bulking agents are the most common treatments for mixed urinary incontinence. Preoperative urodynamic testing was not related to risk of re-intervention following surgery for mixed urinary incontinence in this cohort. Neurourol. Urodynam. 36:422-425, 2017. © 2015 Wiley Periodicals, Inc.

  4. FDNS CFD Code Benchmark for RBCC Ejector Mode Operation

    NASA Technical Reports Server (NTRS)

    Holt, James B.; Ruf, Joe

    1999-01-01

    Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.

  5. Evaluating the Social Validity of the Early Start Denver Model: A Convergent Mixed Methods Study.

    PubMed

    Ogilvie, Emily; McCrudden, Matthew T

    2017-09-01

    An intervention has social validity to the extent that it is socially acceptable to participants and stakeholders. This pilot convergent mixed methods study evaluated parents' perceptions of the social validity of the Early Start Denver Model (ESDM), a naturalistic behavioral intervention for children with autism. It focused on whether the parents viewed (a) the ESDM goals as appropriate for their children, (b) the intervention procedures as acceptable and appropriate, and (c) the changes in their children's behavior as practically significant. Parents of four children who participated in the ESDM completed the TARF-R questionnaire and participated in a semi-structured interview. Both data sets indicated that parents rated their experiences with the ESDM positively and rated it as socially valid. The findings indicated that what was implemented in the intervention was complemented by how it was implemented and by whom.

  6. Thermal diffusivity and adiabatic limit temperature characterization of consolidate granular expanded perlite using the flash method

    NASA Astrophysics Data System (ADS)

    Raefat, Saad; Garoum, Mohammed; Laaroussi, Najma; Thiam, Macodou; Amarray, Khaoula

    2017-07-01

    In this work, an experimental investigation of the apparent thermal diffusivity and adiabatic limit temperature of expanded granular perlite mixes has been carried out using the flash technique. Perlite granulates were sieved to produce three characteristic grain sizes. The consolidated samples were manufactured by mixing controlled proportions of plaster and water. The effect of particle size on the diffusivity was examined. The inverse estimation of the diffusivity and the adiabatic limit temperature at the rear face, as well as the heat loss coefficients, was performed using several numerical global minimization procedures. The function to be minimized is the quadratic distance between the experimental temperature rise at the rear face and the analytical model derived from one-dimensional heat conduction. It is shown that, for all granulometries tested, the estimated parameters lead to good agreement between the mathematical model and the experimental data.
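
    A minimal sketch of the inverse-estimation idea is given below, fitting the classical adiabatic (Parker) rear-face response to a temperature-rise record by least squares. The paper also estimates heat-loss coefficients and uses global minimizers; only the simplest adiabatic case is shown, and the thickness, data and noise level are illustrative assumptions.

    ```python
    # Minimal sketch: estimating thermal diffusivity from a flash-method rear-face
    # temperature rise by least-squares fitting of the adiabatic (Parker) model.
    # Heat losses and global minimization, as used in the paper, are omitted.
    import numpy as np
    from scipy.optimize import curve_fit

    L = 5.0e-3  # sample thickness [m] (illustrative)

    def rear_face_rise(t, alpha, dT_max, n_terms=50):
        # Adiabatic rear-face response:
        # dT(t) = dT_max * (1 + 2*sum_n (-1)^n * exp(-n^2 * pi^2 * alpha * t / L^2))
        s = np.ones_like(t)
        for n in range(1, n_terms + 1):
            s += 2.0 * (-1) ** n * np.exp(-(n * np.pi / L) ** 2 * alpha * t)
        return dT_max * s

    # t_exp, dT_exp would come from the experiment; synthetic stand-ins here.
    t_exp = np.linspace(0.5, 60.0, 300)
    dT_exp = rear_face_rise(t_exp, 4.0e-7, 1.5) \
        + np.random.default_rng(0).normal(0, 0.02, t_exp.size)

    popt, _ = curve_fit(rear_face_rise, t_exp, dT_exp, p0=[1.0e-7, 1.0])
    alpha_hat, dT_max_hat = popt
    print(f"estimated diffusivity = {alpha_hat:.2e} m^2/s, "
          f"adiabatic limit rise = {dT_max_hat:.2f} K")
    ```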

  7. Effect of LES models on the entrainment of a passive scalar in a turbulent planar jet

    NASA Astrophysics Data System (ADS)

    Chambel Lopes, Diogo; da Silva, Carlos; Reis, Ricardo; Raman, Venkat

    2011-11-01

    Direct and large-eddy simulations (DNS/LES) of turbulent planar jets are used to study the role of subgrid-scale models in the integral characteristics of the passive scalar mixing in a jet. Specifically the effect of subgrid-scale models in the jet spreading rate and centreline passive scalar decay rates are assessed and compared. The modelling of the subgrid-scale fluxes is particularly challenging in the turbulent/nonturbulent (T/NT) region that divides the two regions in the jet flow: the outer region where the flow is irrotational and the inner region where the flow is turbulent. It has been shown that important Reynolds stresses exist near the T/NT interface and that these stresses determine in part the mixing and combustion rates in jets. The subgrid scales of motion near the T/NT interface are far from equilibrium and contain an important fraction of the total kinetic energy. Model constants used in several subgrid-scale models such as the Smagorinsky and the gradient models need to be corrected near the jet edge. The procedure used to obtain the dynamic Smagorinsky constant is not able to cope with the intermittent nature of this region.

  8. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... must determine the percent of HAP solvent discharged to the atmosphere for each mix batch according to... to the atmosphere for each mix batch, percent; Srec = Weight of HAP solvent recovered for each mix... discharged to the atmosphere for that mix batch (Pb). (7) Determine the 7-day block average percent of HAP...

  9. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... must determine the percent of HAP solvent discharged to the atmosphere for each mix batch according to... to the atmosphere for each mix batch, percent; Srec = Weight of HAP solvent recovered for each mix... discharged to the atmosphere for that mix batch (Pb). (7) Determine the 7-day block average percent of HAP...

  10. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... must determine the percent of HAP solvent discharged to the atmosphere for each mix batch according to... to the atmosphere for each mix batch, percent; Srec = Weight of HAP solvent recovered for each mix... discharged to the atmosphere for that mix batch (Pb). (7) Determine the 7-day block average percent of HAP...

  11. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... must determine the percent of HAP solvent discharged to the atmosphere for each mix batch according to... to the atmosphere for each mix batch, percent; Srec = Weight of HAP solvent recovered for each mix... discharged to the atmosphere for that mix batch (Pb). (7) Determine the 7-day block average percent of HAP...

  12. A Case Study of a Mixed Methods Study Engaged in Integrated Data Analysis

    ERIC Educational Resources Information Center

    Schiazza, Daniela Marie

    2013-01-01

    The nascent field of mixed methods research has yet to develop a cohesive framework of guidelines and procedures for mixed methods data analysis (Greene, 2008). To support the field's development of analytical frameworks, this case study reflects on the development and implementation of a mixed methods study engaged in integrated data analysis.…

  13. Impact of Patient and Procedure Mix on Finances of Perinatal Centres – Theoretical Models for Economic Strategies in Perinatal Centres

    PubMed Central

    Hildebrandt, T.; Kraml, F.; Wagner, S.; Hack, C. C.; Thiel, F. C.; Kehl, S.; Winkler, M.; Frobenius, W.; Faschingbauer, F.; Beckmann, M. W.; Lux, M. P.

    2013-01-01

    Introduction: In Germany, cost and revenue structures of hospitals with defined treatment priorities are currently being discussed to identify uneconomic services. This discussion has also affected perinatal centres (PNCs) and represents a new economic challenge for PNCs. In addition to optimising the time spent in hospital, the hospital management needs to define the “best” patient mix based on costs and revenues. Method: Different theoretical models were proposed based on the cost and revenue structures of the University Perinatal Centre for Franconia (UPF). Multi-step marginal costing was then used to show the impact on operating profits of changes in services and bed occupancy rates. The current contribution margin accounting used by the UPF served as the basis for the calculations. The models demonstrated the impact of changes in services on costs and revenues of a level 1 PNC. Results: Contribution margin analysis was used to calculate profitable and unprofitable DRGs based on average inpatient cost per day. Nineteen theoretical models were created. The current direct costing used by the UPF and a theoretical model with a 100 % bed occupancy rate were used as reference models. Significantly higher operating profits could be achieved by doubling the number of profitable DRGs and halving the number of less profitable DRGs. Operating profits could be increased even more by changing the rates of profitable DRGs per bed occupancy. The exclusive specialisation on pathological and high-risk pregnancies resulted in operating losses. All models which increased the numbers of caesarean sections or focused exclusively on c-sections resulted in operating losses. Conclusion: These theoretical models offer a basis for economic planning. They illustrate the enormous impact potential changes can have on the operating profits of PNCs. Level 1 PNCs require high bed occupancy rates and a profitable patient mix to cover the extremely high costs incurred due to the services they are legally required to offer. Based on our theoretical models it must be stated that spontaneous vaginal births (not caesarean sections) were the most profitable procedures in the current DRG system. Overall, it currently makes economic sense for level I PNCs to treat as many low-risk pregnancies and neonates as possible to cover costs. PMID:24771932

  14. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    PubMed

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with a Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values < 0.01). Intra-method and overall agreement differences were statistically significant for both the X- and Z-axes (all p-values < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.
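
    The paper's analysis uses Roy's LME model with a Kronecker-product covariance in SAS PROC MIXED, which has no direct statsmodels equivalent; the simplified Python sketch below only illustrates the general idea for one axis (method as a fixed effect for bias, patient as a random effect, and per-method fits as a rough stand-in for the intra-method comparison). The long-format file and column names are hypothetical.

    ```python
    # Simplified sketch only, not COM3PARE itself: method-comparison ideas with a
    # basic linear mixed model in statsmodels. Columns (patient, method, shift_z)
    # are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("igrt_shifts_long.csv")  # columns: patient, method (CBCT/KVX), shift_z

    # Fixed method effect estimates between-method bias; the random intercept
    # absorbs between-patient variation; the residual reflects replicate scatter.
    res = smf.mixedlm("shift_z ~ C(method)", data=df, groups=df["patient"]).fit(reml=True)
    print(res.summary())

    # Within-method (repeatability) variances, estimated separately per method,
    # as a rough stand-in for the intra-method agreement comparison.
    for method, sub in df.groupby("method"):
        fit = smf.mixedlm("shift_z ~ 1", data=sub, groups=sub["patient"]).fit(reml=True)
        print(method, "within-patient variance:", float(fit.scale))
    ```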

  15. MnDOT thin whitetopping selection procedures : final report.

    DOT National Transportation Integrated Search

    2017-06-01

    This report provides an integrated selection procedure for evaluating whether an existing hot-mix asphalt (HMA) pavement is an appropriate candidate for a bonded concrete overlay of asphalt (BCOA). The selection procedure includes (1) a desk review, ...

  16. Integrated Data Collection Analysis (IDCA) Program - Mixing Procedures and Materials Compatibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olinger, Becky D.; Sandstrom, Mary M.; Warner, Kirstin F.

    Three mixing procedures have been standardized for the IDCA proficiency test: solid-solid, solid-liquid, and liquid-liquid. Due to the variety of precursors used in formulating the materials for the test, these three mixing methods have been designed to address all combinations of materials. Hand mixing is recommended for quantities less than 10 grams and Jar Mill mixing is recommended for quantities over 10 grams. Consideration must also be given to the type of container used for the mixing due to the wide range of chemical reactivity of the precursors and mixtures. Eight web site sources from container and chemical manufacturers have been consulted. Compatible materials have been compiled as a resource for selecting containers made of materials stable to the mixtures. In addition, container materials used in practice by the participating laboratories are discussed. Consulting chemical compatibility tables is highly recommended for each operation by each individual engaged in testing the materials in this proficiency test.

  17. A scoring system for appraising mixed methods research, and concomitantly appraising qualitative, quantitative and mixed methods primary studies in Mixed Studies Reviews.

    PubMed

    Pluye, Pierre; Gagnon, Marie-Pierre; Griffiths, Frances; Johnson-Lafleur, Janique

    2009-04-01

    A new form of literature review has emerged, Mixed Studies Review (MSR). These reviews include qualitative, quantitative and mixed methods studies. In the present paper, we examine MSRs in health sciences, and provide guidance on processes that should be included and reported. However, there are no valid and usable criteria for concomitantly appraising the methodological quality of the qualitative, quantitative and mixed methods studies. To propose criteria for concomitantly appraising the methodological quality of qualitative, quantitative and mixed methods studies or study components. A three-step critical review was conducted. 2322 references were identified in MEDLINE, and their titles and abstracts were screened; 149 potentially relevant references were selected and the full-text papers were examined; 59 MSRs were retained and scrutinized using a deductive-inductive qualitative thematic data analysis. This revealed three types of MSR: convenience, reproducible, and systematic. Guided by a proposal, we conducted a qualitative thematic data analysis of the quality appraisal procedures used in the 17 systematic MSRs (SMSRs). Of 17 SMSRs, 12 showed clear quality appraisal procedures with explicit criteria but no SMSR used valid checklists to concomitantly appraise qualitative, quantitative and mixed methods studies. In two SMSRs, criteria were developed following a specific procedure. Checklists usually contained more criteria than needed. In four SMSRs, a reliability assessment was described or mentioned. While criteria for quality appraisal were usually based on descriptors that require specific methodological expertise (e.g., appropriateness), no SMSR described the fit between reviewers' expertise and appraised studies. Quality appraisal usually resulted in studies being ranked by methodological quality. A scoring system is proposed for concomitantly appraising the methodological quality of qualitative, quantitative and mixed methods studies for SMSRs. This scoring system may also be used to appraise the methodological quality of qualitative, quantitative and mixed methods components of mixed methods research.

  18. Biotechnology

    NASA Image and Video Library

    2003-05-07

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. In this picture, the beads are trapped in the injection port shortly after injection. Swirls of beads indicate, even to the naked eye, that the contents of the TCM are not fully mixed. The beads are similar in size and density to human lymphoid cells. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light.

  19. Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI)

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. The beads are similar in size and density to human lymphoid cells. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light. In this photograph, a TCM is shown after mixing protocols, and bubbles of various sizes can be seen.

  20. Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…

  1. Estimating the variance for heterogeneity in arm-based network meta-analysis.

    PubMed

    Piepho, Hans-Peter; Madden, Laurence V; Roger, James; Payne, Roger; Williams, Emlyn R

    2018-04-19

    Network meta-analysis can be implemented by using arm-based or contrast-based models. Here we focus on arm-based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial-by-treatment interaction variance estimates for heterogeneity. Thus, our objective is to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi-likelihood/pseudo-likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and 2 real datasets are used for illustration. Simulations show that penalized quasi-likelihood/pseudo-likelihood and h-likelihood reduce bias and yield satisfactory coverage rates. Sum-to-zero restriction and baseline contrasts for random trial-by-treatment interaction effects, as well as a residual ML-like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi-likelihood/pseudo-likelihood and h-likelihood are therefore recommended. Copyright © 2018 John Wiley & Sons, Ltd.

  2. Application of linear mixed-effects model with LASSO to identify metal components associated with cardiac autonomic responses among welders: a repeated measures study

    PubMed Central

    Zhang, Jinming; Cavallari, Jennifer M; Fang, Shona C; Weisskopf, Marc G; Lin, Xihong; Mittleman, Murray A; Christiani, David C

    2017-01-01

    Background: Environmental and occupational exposure to metals is ubiquitous worldwide, and understanding the hazardous metal components in this complex mixture is essential for environmental and occupational regulations. Objective: To identify hazardous components from metal mixtures that are associated with alterations in cardiac autonomic responses. Methods: Urinary concentrations of 16 types of metals were examined and 'acceleration capacity' (AC) and 'deceleration capacity' (DC), indicators of cardiac autonomic effects, were quantified from ECG recordings among 54 welders. We fitted linear mixed-effects models with least absolute shrinkage and selection operator (LASSO) to identify metal components that are associated with AC and DC. The Bayesian Information Criterion was used as the criterion for model selection procedures. Results: Mercury and chromium were selected for DC analysis, whereas mercury, chromium and manganese were selected for AC analysis through the LASSO approach. When we fitted the linear mixed-effects models with 'selected' metal components only, the effect of mercury remained significant. Every 1 µg/L increase in urinary mercury was associated with −0.58 ms (−1.03, −0.13) changes in DC and 0.67 ms (0.25, 1.10) changes in AC. Conclusion: Our study suggests that exposure to several metals is associated with impaired cardiac autonomic functions. Our findings should be replicated in future studies with larger sample sizes. PMID:28663305
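
    The sketch below shows a two-step approximation of this kind of analysis rather than the authors' joint LASSO-within-LMM fit: a BIC-guided LASSO screens the metal components (ignoring the repeated-measures structure at that stage), and the selected metals are then refit in a linear mixed model with a random intercept per welder. The file and column names (dc, subject, metal_*) are hypothetical.

    ```python
    # Two-step sketch (an approximation, not the paper's joint procedure):
    # 1) BIC-guided LASSO to screen metal components;
    # 2) mixed-effects refit of the selected components with a random intercept
    #    per subject. Column names are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LassoLarsIC
    from sklearn.preprocessing import StandardScaler
    import statsmodels.formula.api as smf

    df = pd.read_csv("welders.csv")
    metals = [c for c in df.columns if c.startswith("metal_")]

    X = StandardScaler().fit_transform(df[metals])
    lasso = LassoLarsIC(criterion="bic").fit(X, df["dc"])
    selected = [m for m, coef in zip(metals, lasso.coef_) if coef != 0.0]
    print("selected metals:", selected)

    # Step 2: mixed-effects refit with the selected components only.
    formula = "dc ~ " + " + ".join(selected) if selected else "dc ~ 1"
    res = smf.mixedlm(formula, data=df, groups=df["subject"]).fit(reml=True)
    print(res.summary())
    ```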

  3. Use of NDT equipment for construction quality control of hot mix asphalt pavements

    DOT National Transportation Integrated Search

    2006-08-01

    The focus of the study has been to evaluate the utility of seismic methods in the quality management of the hot mix asphalt layers. Procedures are presented to measure the target field moduli of hot mix asphalt (HMA) with laboratory seismic methods, ...

  4. New generation mix-designs : laboratory testing and construction of the APT test sections.

    DOT National Transportation Integrated Search

    2010-03-01

    Recent changes to the Texas HMA mix-design procedures, such as the adoption of higher PG asphalt-binder grades and the Hamburg test, have ensured that the mixes routinely used on Texas highways are not prone to rutting. However, performance concern...

  5. Are minimally invasive procedures harder to acquire than conventional surgical procedures?

    PubMed

    Hiemstra, Ellen; Kolkman, Wendela; le Cessie, Saskia; Jansen, Frank Willem

    2011-01-01

    It is frequently suggested that minimally invasive surgery (MIS) is harder to acquire than conventional surgery. To test this hypothesis, residents' learning curves of both surgical skills are compared. Residents had to be assessed using a general global rating scale of the OSATS (Objective Structured Assessment of Technical Skills) for every procedure they performed as primary surgeon during a 3-month clinical rotation in gynecological surgery. Nine postgraduate-year-4 residents collected a total of 319 OSATS during the 2 years and 3 months investigation period. These assessments concerned 129 MIS (laparoscopic and hysteroscopic) and 190 conventional (open abdominal and vaginal) procedures. Learning curves (in this study defined as OSATS score plotted against procedure-specific caseload) for MIS and conventional surgery were compared using a linear mixed model. The MIS curve revealed to be steeper than the conventional curve (1.77 vs. 0.75 OSATS points per assessed procedure; 95% CI 1.19-2.35 vs. 0.15-1.35, p < 0.01). Basic MIS procedures do not seem harder to acquire during residency than conventional surgical procedures. This may have resulted from the incorporation of structured MIS training programs in residency. Hopefully, this will lead to a more successful implementation of the advanced MIS procedures. Copyright © 2010 S. Karger AG, Basel.
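
    A minimal sketch of the slope comparison described here is shown below: OSATS score regressed on procedure-specific caseload, procedure type and their interaction, with a random intercept per resident. The interaction term tests whether the MIS and conventional learning-curve slopes differ; the data file and column names are hypothetical.

    ```python
    # Minimal sketch: comparing OSATS learning-curve slopes for MIS vs conventional
    # procedures with a linear mixed model (random intercept per resident).
    # Column names (osats, caseload, proc_type, resident) are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("osats_assessments.csv")

    model = smf.mixedlm("osats ~ caseload * C(proc_type)",
                        data=df, groups=df["resident"])
    result = model.fit(reml=True)
    # The caseload:C(proc_type)[...] coefficient is the difference in slope
    # (OSATS points per assessed procedure) between the two procedure types.
    print(result.summary())
    ```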

  6. Improved silicon carbide for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas J.; Mangels, J. A.

    1986-01-01

    The development of silicon carbide materials of high strength was initiated and components of complex shape and high reliability were formed. The approach was to adapt a beta-SiC powder and binder system to the injection molding process and to develop procedures and process parameters capable of providing a sintered silicon carbide material with improved properties. The initial effort was to characterize the baseline precursor materials, develop mixing and injection molding procedures for fabricating test bars, and characterize the properties of the sintered materials. Parallel studies of various mixing, dewaxing, and sintering procedures were performed in order to distinguish process routes for improving material properties. A total of 276 modulus-of-rupture (MOR) bars of the baseline material was molded, and 122 bars were fully processed to a sinter density of approximately 95 percent. Fluid mixing techniques were developed which significantly reduced flaw size and improved the strength of the material. Initial MOR tests indicated that strength of the fluid-mixed material exceeds the baseline property by more than 33 percent.

  7. Risk adjustment models for short-term outcomes after surgical resection for oesophagogastric cancer.

    PubMed

    Fischer, C; Lingsma, H; Hardwick, R; Cromwell, D A; Steyerberg, E; Groene, O

    2016-01-01

    Outcomes for oesophagogastric cancer surgery are compared with the aim of benchmarking quality of care. Adjusting for patient characteristics is crucial to avoid biased comparisons between providers. The study objective was to develop a case-mix adjustment model for comparing 30- and 90-day mortality and anastomotic leakage rates after oesophagogastric cancer resections. The study reviewed existing models, considered expert opinion and examined audit data in order to select predictors that were subsequently used to develop a case-mix adjustment model for the National Oesophago-Gastric Cancer Audit, covering England and Wales. Models were developed on patients undergoing surgical resection between April 2011 and March 2013 using logistic regression. Model calibration and discrimination were quantified using a bootstrap procedure. Most existing risk models for oesophagogastric resections were methodologically weak, outdated or based on detailed laboratory data that are not generally available. In 4882 patients with oesophagogastric cancer used for model development, 30- and 90-day mortality rates were 2·3 and 4·4 per cent respectively, and 6·2 per cent of patients developed an anastomotic leak. The internally validated models, based on predictors selected from the literature, showed moderate discrimination (area under the receiver operating characteristic (ROC) curve 0·646 for 30-day mortality, 0·664 for 90-day mortality and 0·587 for anastomotic leakage) and good calibration. Based on available data, three case-mix adjustment models for postoperative outcomes in patients undergoing curative surgery for oesophagogastric cancer were developed. These models should be used for risk adjustment when assessing hospital performance in the National Health Service, and tested in other large health systems. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
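
    As a minimal sketch of this kind of internal validation (not the audit's actual model), the code below fits a case-mix logistic regression for 30-day mortality and estimates an optimism-corrected AUC with a bootstrap. The predictor names and data file are hypothetical.

    ```python
    # Minimal sketch: a case-mix logistic regression with bootstrap
    # optimism-corrected discrimination (AUC). Predictor names are hypothetical.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    df = pd.read_csv("resections.csv")
    predictors = ["age", "asa_grade", "tumour_stage", "comorbidity_count"]
    X, y = df[predictors].to_numpy(), df["death_30d"].to_numpy()

    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

    rng = np.random.default_rng(0)
    optimism = []
    for _ in range(200):  # bootstrap resamples
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue  # skip degenerate resamples with a single outcome class
        boot = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)

    print("apparent AUC:", round(apparent_auc, 3))
    print("optimism-corrected AUC:", round(apparent_auc - float(np.mean(optimism)), 3))
    ```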

  8. Effect of mixing techniques on bacterial attachment and disinfection time of polyether impression material

    PubMed Central

    Guler, Umut; Budak, Yasemin; Ruh, Emrah; Ocal, Yesim; Canay, Senay; Akyon, Yakut

    2013-01-01

    Objective: The aim of this study was 2-fold. The first aim was to evaluate the effects of mixing technique (hand-mixing or auto-mixing) on bacterial attachment to polyether impression materials. The second aim was to determine whether bacterial attachment to these materials was affected by length of exposure to disinfection solutions. Materials and Methods: Polyether impression material samples (n = 144) were prepared by hand-mixing or auto-mixing. Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa were used in testing. After incubation, the bacterial colonies were counted and then disinfectant solution was applied. The effect of disinfection solution was evaluated just after the polymerization of impression material and 30 min after polymerization. Differences in adherence of bacteria to the samples prepared by hand-mixing and to those prepared by auto-mixing were assessed by Kruskal-Wallis and Mann-Whitney U-tests. For evaluating the efficiency of the disinfectant, Kruskal-Wallis multiple comparisons test was used. Results: E. coli counts were higher in hand-mixed materials (P < 0.05); no other statistically significant differences were found between hand- and auto-mixed materials. According to the Kruskal-Wallis test, significant differences were found between the disinfection procedures (Z > 2.394). Conclusion: The methods used for mixing polyether impression material did not affect bacterial attachment to impression surfaces. In contrast, the disinfection procedure greatly affects decontamination of the impression surface. PMID:24966729
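
    For readers unfamiliar with the nonparametric tests used here, the short sketch below runs Mann-Whitney U and Kruskal-Wallis comparisons on simulated colony counts; the counts and group sizes are stand-ins, not the study's data.

    ```python
    # Minimal sketch: nonparametric comparisons of the kind used in the study.
    # Colony counts are simulated stand-ins, not the paper's data.
    import numpy as np
    from scipy.stats import mannwhitneyu, kruskal

    rng = np.random.default_rng(0)
    hand_mix = rng.poisson(40, size=24)   # hypothetical counts, hand-mixed samples
    auto_mix = rng.poisson(30, size=24)   # hypothetical counts, auto-mixed samples

    u, p = mannwhitneyu(hand_mix, auto_mix, alternative="two-sided")
    print("Mann-Whitney U:", u, "p =", round(p, 4))

    # Kruskal-Wallis across three hypothetical disinfection conditions
    no_disinfect = rng.poisson(40, size=24)
    disinfect_0min = rng.poisson(15, size=24)
    disinfect_30min = rng.poisson(5, size=24)
    h, p = kruskal(no_disinfect, disinfect_0min, disinfect_30min)
    print("Kruskal-Wallis H:", round(h, 2), "p =", round(p, 4))
    ```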

  9. Effect of mixing techniques on bacterial attachment and disinfection time of polyether impression material.

    PubMed

    Guler, Umut; Budak, Yasemin; Ruh, Emrah; Ocal, Yesim; Canay, Senay; Akyon, Yakut

    2013-09-01

    The aim of this study was 2-fold. The first aim was to evaluate the effects of mixing technique (hand-mixing or auto-mixing) on bacterial attachment to polyether impression materials. The second aim was to determine whether bacterial attachment to these materials was affected by length of exposure to disinfection solutions. Polyether impression material samples (n = 144) were prepared by hand-mixing or auto-mixing. Escherichia coli, Staphylococcus aureus and Pseudomonas aeruginosa were used in testing. After incubation, the bacterial colonies were counted and then disinfectant solution was applied. The effect of disinfection solution was evaluated just after the polymerization of impression material and 30 min after polymerization. Differences in adherence of bacteria to the samples prepared by hand-mixing and to those prepared by auto-mixing were assessed by Kruskal-Wallis and Mann-Whitney U-tests. For evaluating the efficiency of the disinfectant, Kruskal-Wallis multiple comparisons test was used. E. coli counts were higher in hand-mixed materials (P < 0.05); no other statistically significant differences were found between hand- and auto-mixed materials. According to the Kruskal-Wallis test, significant differences were found between the disinfection procedures (Z > 2.394). The methods used for mixing polyether impression material did not affect bacterial attachment to impression surfaces. In contrast, the disinfection procedure greatly affects decontamination of the impression surface.

  10. Drug residues recovered in feed after various feedlot mixer truck cleanout procedures.

    PubMed

    Van Donkersgoed, Joyce; Sit, Dan; Gibbons, Nicole; Ramogida, Caterina; Hendrick, Steve

    2010-01-01

    A study was conducted to determine the effectiveness of two methods of equipment cleanout, sequencing or flushing, for reducing drug carryover in feedlot mixer trucks. Feed samples were collected from total mixed rations before and after various feed mixer equipment cleanout procedures. Medicated rations contained either 11 ppm of tylosin or 166 or 331 ppm of chlortetracycline. There were no differences between sequencing and flushing or between flushing with dry barley and flushing with barley silage in the median proportion of drug recovered in the next ration. A larger drug reduction was achieved using flush material at a volume of 10 versus 5% of the mixer capacity and mixing the flush material for 3 versus 4 min. Regardless of the drug or prescription concentrations in the total mixed rations or the equipment cleanout procedure used, concentrations of chlortetracycline and tylosin recovered were very low.

  11. Establishment of QC/QA procedures for open-graded mixes : final report.

    DOT National Transportation Integrated Search

    1998-09-01

    The State of Oregon has employed the use of porous concrete surfaces (E- and F-mixes) since the 1970s. The use of porous mixes has increased substantially in the past five years. Previously, no work had been done to evaluate whether the quality contr...

  12. Individualizing drug dosage with longitudinal data.

    PubMed

    Zhu, Xiaolu; Qu, Annie

    2016-10-30

    We propose a two-step procedure to personalize drug dosage over time under the framework of a log-linear mixed-effect model. We model patients' heterogeneity using subject-specific random effects, which are treated as the realizations of an unspecified stochastic process. We extend the conditional quadratic inference function to estimate both fixed-effect coefficients and individual random effects on a longitudinal training data sample in the first step and propose an adaptive procedure to estimate new patients' random effects and provide dosage recommendations for new patients in the second step. An advantage of our approach is that we do not impose any distribution assumption on estimating random effects. Moreover, the new approach can accommodate more general time-varying covariates corresponding to random effects. We show in theory and numerical studies that the proposed method is more efficient compared with existing approaches, especially when covariates are time varying. In addition, a real data example of a clozapine study confirms that our two-step procedure leads to more accurate drug dosage recommendations. Copyright © 2016 John Wiley & Sons, Ltd.

  13. A Lagrangian Transport Eulerian Reaction Spatial (LATERS) Markov Model for Prediction of Effective Bimolecular Reactive Transport

    NASA Astrophysics Data System (ADS)

    Sund, Nicole; Porta, Giovanni; Bolster, Diogo; Parashar, Rishi

    2017-11-01

    Prediction of effective transport for mixing-driven reactive systems at larger scales requires accurate representation of mixing at small scales, which poses a significant upscaling challenge. Depending on the problem at hand, a Lagrangian framework can be beneficial in some settings, while in others an Eulerian framework might have advantages. Here we propose and test a novel hybrid model which attempts to leverage the benefits of each. Specifically, our framework provides a Lagrangian closure required for a volume-averaging procedure of the advection-diffusion-reaction equation. This hybrid model is a LAgrangian Transport Eulerian Reaction Spatial Markov model (LATERS Markov model), which extends previous implementations of the Lagrangian Spatial Markov model and maps concentrations to an Eulerian grid to quantify the closure terms required to calculate the volume-averaged reaction terms. The advantage of this approach is that the Spatial Markov model is known to provide accurate predictions of transport, particularly at preasymptotic early times, when assumptions required by traditional volume-averaging closures are least likely to hold; likewise, the Eulerian reaction method is efficient, because it does not require calculation of distances between particles. This manuscript introduces the LATERS Markov model and demonstrates by example its ability to accurately predict bimolecular reactive transport in a simple benchmark 2-D porous medium.

  14. Identification of Hospital Outliers in Bleeding Complications After Percutaneous Coronary Intervention

    PubMed Central

    Hess, Connie N.; Rao, Sunil V.; McCoy, Lisa A.; Neely, Megan L.; Singh, Mandeep; Spertus, John A.; Krone, Ronald J.; Weaver, W. Douglas; Peterson, Eric D.

    2014-01-01

    Background Post-percutaneous coronary intervention (PCI) bleeding complications are an important quality metric. We sought to characterize site-level variation in post-PCI bleeding and explore the influence of patient and procedural factors on hospital bleeding performance. Methods and Results Hospital-level bleeding performance was compared pre- and post-adjustment using the newly-revised CathPCI Registry® bleeding risk model (c-index 0.77) among 1,292 NCDR® hospitals performing >50 PCIs from 7/2009–9/2012 (n=1,984,998 procedures). Using random effects models, outlier sites were identified based on 95% confidence intervals around the hospital’s random intercept. Bleeding 72 hours post-PCI was defined as: arterial access site, retroperitoneal, gastrointestinal, or genitourinary bleeding; intracranial hemorrhage; cardiac tamponade; non-bypass surgery-related blood transfusion with pre-procedure hemoglobin ≥8 g/dl; or absolute decrease in hemoglobin value ≥3g/dl with pre-procedure hemoglobin ≤16 g/dl. Overall, the median unadjusted post-PCI bleeding rate was 5.2% and varied among hospitals from 2.6%–10.4% (5th, 95th percentiles). Center-level bleeding variation persisted after case-mix adjustment (2.8%–9.5%; 5th, 95th percentiles). While hospitals’ observed and risk-adjusted bleeding ranks were correlated (Spearman’s rho 0.88), individual rankings shifted after risk-adjustment (median Δ rank order ± 91.5; IQR 37.0, 185.5). Outlier classification changed post-adjustment for 29.3%, 16.1%, and 26.5% of low-, non-, and high-outlier sites, respectively. Hospital use of bleeding avoidance strategies (bivalirudin, radial access, or vascular closure device) was associated with risk-adjusted bleeding rates. Conclusions Despite adjustment for patient case-mix, there is wide variation in rates of hospital PCI-related bleeding in the United States. Opportunities may exist for best performers to share practices with other sites. PMID:25424242

  15. Downscaling reanalysis data to high-resolution variables above a glacier surface (Cordillera Blanca, Peru)

    NASA Astrophysics Data System (ADS)

    Hofer, Marlis; Mölg, Thomas; Marzeion, Ben; Kaser, Georg

    2010-05-01

    Recently initiated observation networks in the Cordillera Blanca provide temporally high-resolution, yet short-term atmospheric data. The aim of this study is to extend the existing time series into the past. We present an empirical-statistical downscaling (ESD) model that links 6-hourly NCEP/NCAR reanalysis data to the local target variables, measured at the tropical glacier Artesonraju (Northern Cordillera Blanca). The approach is unusual in the context of ESD for two reasons. First, the observational time series for model calibration are short (only about two years). Second, unlike most ESD studies in climate research, we focus on variables at a high temporal resolution (i.e., six-hourly values). Our target variables are two important drivers in the surface energy balance of tropical glaciers: air temperature and specific humidity. The selection of predictor fields from the reanalysis data is based on regression analyses and climatological considerations. The ESD modelling procedure includes combined empirical orthogonal function and multiple regression analyses. Principal component screening is based on cross-validation using the Akaike Information Criterion as the model selection criterion. Double cross-validation is applied for model evaluation. Potential autocorrelation in the time series is considered by defining the block length in the resampling procedure. Apart from the selection of predictor fields, the modelling procedure is automated and does not include subjective choices. We assess the ESD model sensitivity to the predictor choice by using both single- and mixed-field predictors of the variables air temperature (1000 hPa), specific humidity (1000 hPa), and zonal wind speed (500 hPa). The chosen downscaling domain ranges from 80 to 50 degrees west and from 0 to 20 degrees south. Statistical transfer functions are derived individually for different months and times of day (month/hour-models). The forecast skill of the month/hour-models largely depends on month and time of day, ranging from 0 to 0.8, but the mixed-field predictors generally perform better than the single-field predictors. At all time scales, the ESD model shows added value against two simple reference models: (i) the direct use of reanalysis grid point values, and (ii) mean diurnal and seasonal cycles over the calibration period. The ESD model forecast from 1960 to 2008 clearly reflects interannual variability related to the El Niño/Southern Oscillation, but is sensitive to the chosen predictor type. So far, we have not assessed the performance of NCEP/NCAR reanalysis data against other reanalysis products. The developed ESD model is computationally cheap and applicable wherever measurements are available for model calibration.
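
    As an illustration of the combined EOF and multiple-regression step described above, the following hedged sketch applies PCA to a synthetic predictor field and regresses a local target on the leading principal components. The array sizes, the fixed number of retained components, and the use of cross-validated R^2 in place of the study's AIC-based screening are assumptions.

```python
# Illustrative sketch of the EOF (PCA) + multiple-regression step of an
# empirical-statistical downscaling model; synthetic arrays and a fixed number
# of retained components are assumptions, not the study's configuration.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_times, n_gridpoints = 2_000, 300            # 6-hourly samples x reanalysis grid points
predictor_field = rng.standard_normal((n_times, n_gridpoints))
true_signal = predictor_field[:, :5].mean(axis=1)
target = true_signal + 0.3 * rng.standard_normal(n_times)   # e.g. local air temperature

# EOF analysis of the predictor field: retain the leading principal components.
pca = PCA(n_components=10)
pcs = pca.fit_transform(predictor_field)

# Multiple regression of the local target on the principal-component scores,
# with cross-validation as a stand-in for the study's AIC-based screening.
model = LinearRegression()
cv_r2 = cross_val_score(model, pcs, target, cv=5, scoring="r2")
model.fit(pcs, target)
print("cross-validated R^2:", cv_r2.mean().round(3))
```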

  16. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  17. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  18. Computation of high Reynolds number internal/external flows

    NASA Technical Reports Server (NTRS)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, developed to calculate high Reynolds number internal/external flows is described. The program solves the two-dimensional, time-dependent Navier-Stokes equations. Turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  19. Direct and indirect effects of birth order on personality and identity: support for the null hypothesis.

    PubMed

    Dunkel, Curtis S; Harbke, Colin R; Papini, Dennis R

    2009-06-01

    The authors proposed that birth order affects psychosocial outcomes through differential investment from parent to child and differences in the degree of identification from child to parent. The authors conducted this study to test these 2 models. Despite the use of statistical and methodological procedures to increase sensitivity and reduce error, the authors did not find support for the models. They discuss results in the context of the mixed-research findings regarding birth order and suggest further research on the proposed developmental dynamics that may produce birth-order effects.

  20. Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study

    NASA Technical Reports Server (NTRS)

    Paterson, R. W.

    1982-01-01

    An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data for use in assessing the accuracy and assisting the further development of computational procedures for predicting the flow field within mixer nozzles are provided. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.

  1. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    PubMed

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from an MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by an MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from an MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
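
    To make the "separate models" second stage concrete, the hedged sketch below fits one linear mixed model per (simulated, Gaussian) outcome with statsmodels, extracts the per-subject empirical Bayes random intercepts, and correlates them. The joint MGLMM and the non-normal outcomes discussed in the abstract are not reproduced here.

```python
# Hedged sketch of the "separate mixed models" second stage: fit one linear
# mixed model per outcome, extract the per-subject empirical Bayes (BLUP)
# random intercepts, and correlate them. Simulated Gaussian outcomes are an
# assumption; the paper's MGLMM handles non-normal outcomes jointly.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_subjects, n_visits = 100, 6

subj = np.repeat(np.arange(n_subjects), n_visits)
time = np.tile(np.arange(n_visits), n_subjects)
b = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n_subjects)  # latent corr 0.6
y1 = 1.0 + 0.5 * time + b[subj, 0] + rng.standard_normal(subj.size)
y2 = 2.0 - 0.2 * time + b[subj, 1] + rng.standard_normal(subj.size)
df = pd.DataFrame({"subject": subj, "time": time, "y1": y1, "y2": y2})

def eb_intercepts(outcome):
    """Fit a random-intercept model and return each subject's EB intercept."""
    fit = sm.MixedLM.from_formula(f"{outcome} ~ time", groups="subject", data=df).fit()
    return np.array([fit.random_effects[g].iloc[0] for g in range(n_subjects)])

eb1, eb2 = eb_intercepts("y1"), eb_intercepts("y2")
print("second-stage correlation of EB intercepts:", np.corrcoef(eb1, eb2)[0, 1].round(2))
```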

  2. The Society of Thoracic Surgeons Congenital Heart Surgery Database Mortality Risk Model: Part 1—Statistical Methodology

    PubMed Central

    O’Brien, Sean M.; Jacobs, Jeffrey P.; Pasquali, Sara K.; Gaynor, J. William; Karamlou, Tara; Welke, Karl F.; Filardo, Giovanni; Han, Jane M.; Kim, Sunghee; Shahian, David M.; Jacobs, Marshall L.

    2016-01-01

    Background This study’s objective was to develop a risk model incorporating procedure type and patient factors to be used for case-mix adjustment in the analysis of hospital-specific operative mortality rates after congenital cardiac operations. Methods Included were patients of all ages undergoing cardiac operations, with or without cardiopulmonary bypass, at centers participating in The Society of Thoracic Surgeons Congenital Heart Surgery Database during January 1, 2010, to December 31, 2013. Excluded were isolated patent ductus arteriosus closures in patients weighing less than or equal to 2.5 kg, centers with more than 10% missing data, and patients with missing data for key variables. Data from the first 3.5 years were used for model development, and data from the last 0.5 year were used for assessing model discrimination and calibration. Potential risk factors were proposed based on expert consensus and selected after empirically comparing a variety of modeling options. Results The study cohort included 52,224 patients from 86 centers with 1,931 deaths (3.7%). Covariates included in the model were primary procedure, age, weight, and 11 additional patient factors reflecting acuity status and comorbidities. The C statistic in the validation sample was 0.858. Plots of observed-vs-expected mortality rates revealed good calibration overall and within subgroups, except for a slight overestimation of risk in the highest decile of predicted risk. Removing patient preoperative factors from the model reduced the C statistic to 0.831 and affected the performance classification for 12 of 86 hospitals. Conclusions The risk model is well suited to adjust for case mix in the analysis and reporting of hospital-specific mortality for congenital heart operations. Inclusion of patient factors added useful discriminatory power and reduced bias in the calculation of hospital-specific mortality metrics. PMID:26245502
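
    The following minimal sketch, with synthetic data and ordinary logistic regression standing in for the actual STS-CHSD model, illustrates how a risk model's discrimination (C statistic) and decile calibration can be checked on a held-out validation sample.

```python
# Minimal sketch of assessing discrimination (C statistic) and decile calibration
# for a mortality risk model on a validation sample; logistic regression and the
# synthetic covariates are stand-ins, not the STS-CHSD model itself.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 50_000
X = rng.standard_normal((n, 5))                       # e.g. age, weight, acuity factors
logit = -3.5 + X @ np.array([0.8, 0.5, 0.3, 0.2, 0.4])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

train, valid = slice(0, 40_000), slice(40_000, n)     # development vs validation split
model = LogisticRegression(max_iter=1000).fit(X[train], y[train])
p_valid = model.predict_proba(X[valid])[:, 1]

print("C statistic (validation):", round(roc_auc_score(y[valid], p_valid), 3))

# Observed vs expected mortality by decile of predicted risk (calibration check).
deciles = np.digitize(p_valid, np.quantile(p_valid, np.linspace(0.1, 0.9, 9)))
for d in range(10):
    obs = y[valid][deciles == d].mean()
    exp = p_valid[deciles == d].mean()
    print(f"decile {d}: observed={obs:.3f} expected={exp:.3f}")
```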

  3. Effect of academic status on outcomes of surgery for rectal cancer.

    PubMed

    Cagino, Kristen; Altieri, Maria S; Yang, Jie; Nie, Lizhou; Talamini, Mark; Spaniolas, Konstantinos; Denoya, Paula; Pryor, Aurora

    2018-06-01

    The purpose of our study was to investigate surgical outcomes following advanced colorectal procedures at academic versus community institutions. The SPARCS database was used to identify patients undergoing Abdominoperineal resection (APR) and Low Anterior Resection between 2009 and 2014. Linear mixed models and generalized linear mixed models were used to compare outcomes. Laparoscopic versus open procedures, surgery type, volume status, and stoma formation between academic and community facilities were compared. Higher percentages of laparoscopic surgeries (58.68 vs. 41.32%, p value < 0.0001), more APR surgeries (64.60 vs. 35.40%, p value < 0.0001), more high volume hospitals (69.46 vs. 30.54%, p value < 0.0001), and less stoma formation (48.00 vs. 52.00%, p value < 0.0001) were associated with academic centers. After adjusting for confounding factors, academic facilities were more likely to perform APR surgeries (OR 1.35, 95% CI 1.04-1.74, p value = 0.0235). Minorities and Medicaid patients were more likely to receive care at an academic facility. Stoma formation, open surgery, and APR were associated with longer LOS and higher rate of ED visit and 30-day readmission. Laparoscopy and APR are more commonly performed at academic than community facilities. Age, sex, race, and socioeconomic status affect the facility at which and the type of surgery patients receive, thereby influencing surgical outcomes.

  4. Biotechnology

    NASA Image and Video Library

    2003-05-05

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. The beads are similar in size and density to human lymphoid cells. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light. In this photograph, beads are trapped in the injection port, with bubbles forming shortly after injection.

  5. Development of a numerical procedure for mixed mode K-solutions and fatigue crack growth in FCC single crystal superalloys

    NASA Astrophysics Data System (ADS)

    Ranjan, Srikant

    2005-11-01

    Fatigue-induced failures in aircraft gas turbine and rocket engine turbopump blades and vanes are a pervasive problem. Turbine blades and vanes represent perhaps the most demanding structural applications due to the combination of high operating temperature, corrosive environment, high monotonic and cyclic stresses, long expected component lifetimes and the enormous consequence of structural failure. Single crystal nickel-base superalloy turbine blades are being utilized in rocket engine turbopumps and jet engines because of their superior creep, stress rupture, melt resistance, and thermomechanical fatigue capabilities over polycrystalline alloys. These materials have orthotropic properties making the position of the crystal lattice relative to the part geometry a significant factor in the overall analysis. Computation of stress intensity factors (SIFs) and the ability to model fatigue crack growth rate at single crystal cracks subject to mixed-mode loading conditions are important parts of developing a mechanistically based life prediction for these complex alloys. A general numerical procedure has been developed to calculate SIFs for a crack in a general anisotropic linear elastic material subject to mixed-mode loading conditions, using three-dimensional finite element analysis (FEA). The procedure does not require an a priori assumption of plane stress or plane strain conditions. The SIFs K_I, K_II, and K_III are shown to be a complex function of the coupled 3D crack tip displacement field. A comprehensive study of variation of SIFs as a function of crystallographic orientation, crack length, and mode-mixity ratios is presented, based on the 3D elastic orthotropic finite element modeling of tensile and Brazilian Disc (BD) specimens in specific crystal orientations. Variation of SIF through the thickness of the specimens is also analyzed. The resolved shear stress intensity coefficient or effective SIF, K_rss, can be computed as a function of crack tip SIFs and the resolved shear stress on primary slip planes. The maximum values of K_rss and ΔK_rss were found to determine the crack growth direction and the fatigue crack growth rate, respectively. The fatigue crack driving force parameter, ΔK_rss, forms an important multiaxial fatigue damage parameter that can be used to predict life in superalloy components.

  6. Covariate Selection for Multilevel Models with Missing Data

    PubMed Central

    Marino, Miguel; Buxton, Orfeu M.; Li, Yi

    2017-01-01

    Missing covariate data hampers variable selection in multilevel regression settings. Current variable selection techniques for multiply-imputed data commonly address missingness in the predictors through list-wise deletion and stepwise-selection methods which are problematic. Moreover, most variable selection methods are developed for independent linear regression models and do not accommodate multilevel mixed effects regression models with incomplete covariate data. We develop a novel methodology that is able to perform covariate selection across multiply-imputed data for multilevel random effects models when missing data is present. Specifically, we propose to stack the multiply-imputed data sets from a multiple imputation procedure and to apply a group variable selection procedure through group lasso regularization to assess the overall impact of each predictor on the outcome across the imputed data sets. Simulations confirm the advantageous performance of the proposed method compared with the competing methods. We applied the method to reanalyze the Healthy Directions-Small Business cancer prevention study, which evaluated a behavioral intervention program targeting multiple risk-related behaviors in a working-class, multi-ethnic population. PMID:28239457
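
    A simplified sketch of the "stack, then penalize" idea follows: several stochastic imputations are stacked and a lasso (a stand-in for the group lasso over imputation-specific coefficient copies) selects covariates. The multilevel random-effects structure is omitted, and all data are simulated.

```python
# Simplified sketch of stacking multiple imputations and applying a penalized
# selection procedure; LassoCV stands in for the group lasso described above,
# and the random-effects structure of the multilevel model is omitted.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n, p = 500, 10
X_full = rng.standard_normal((n, p))
y = X_full[:, 0] * 1.5 - X_full[:, 3] * 1.0 + rng.standard_normal(n)

# Introduce missingness at random in the covariates.
X_miss = X_full.copy()
X_miss[rng.random((n, p)) < 0.15] = np.nan

# Multiple imputation: several stochastic imputations with different seeds.
m = 5
stacked_X = np.vstack([
    IterativeImputer(sample_posterior=True, random_state=k).fit_transform(X_miss)
    for k in range(m)
])
stacked_y = np.tile(y, m)

# Penalized selection on the stacked data (each observation appears m times).
sel = LassoCV(cv=5).fit(stacked_X, stacked_y)
print("selected covariates:", np.flatnonzero(np.abs(sel.coef_) > 1e-6))
```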

  7. Property Control of (Perfluorinated Ionomer)/(Inorganic Oxide) Composites by Tailoring the Nanoscale Morphology

    DTIC Science & Technology

    1994-06-10

    Report front matter and table-of-contents fragment (OCR residue). Recoverable content: report title "Property Control of (Perfluorinated Ionomer)/(Inorganic Oxide) Composites by Tailoring the Nanoscale Morphology", authors Kenneth A. Mauritz and Robert ...; contents entries on [SiO2-TiO2 (mixed)]/Nafion nanocomposites, sorption of pre-mixed alkoxides, and the experimental procedure.

  8. GWAS with longitudinal phenotypes: performance of approximate procedures

    PubMed Central

    Sikorska, Karolina; Montazeri, Nahid Mostafavi; Uitterlinden, André; Rivadeneira, Fernando; Eilers, Paul HC; Lesaffre, Emmanuel

    2015-01-01

    Analysis of genome-wide association studies with longitudinal data using standard procedures, such as linear mixed model (LMM) fitting, leads to discouragingly long computation times. There is a need to speed up the computations significantly. In our previous work (Sikorska et al: Fast linear mixed model computations for genome-wide association studies with longitudinal data. Stat Med 2012; 32.1: 165–180), we proposed the conditional two-step (CTS) approach as a fast method providing an approximation to the P-value for the longitudinal single-nucleotide polymorphism (SNP) effect. In the first step a reduced conditional LMM is fit, omitting all the SNP terms. In the second step, the estimated random slopes are regressed on SNPs. The CTS has been applied to the bone mineral density data from the Rotterdam Study and proved to work very well even in unbalanced situations. In another article (Sikorska et al: GWAS on your notebook: fast semi-parallel linear and logistic regression for genome-wide association studies. BMC Bioinformatics 2013; 14: 166), we suggested semi-parallel computations, greatly speeding up fitting many linear regressions. Combining CTS with fast linear regression reduces the computation time from several weeks to a few minutes on a single computer. Here, we explore further the properties of the CTS both analytically and by simulations. We investigate the performance of our proposal in comparison with a related but different approach, the two-step procedure. It is analytically shown that for the balanced case, under mild assumptions, the P-value provided by the CTS is the same as from the LMM. For unbalanced data and in realistic situations, simulations show that the CTS method does not inflate the type I error rate and implies only a minimal loss of power. PMID:25712081
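
    The hedged sketch below illustrates the general two-step logic referenced above: step 1 fits a single linear mixed model with random slopes for time, omitting all SNP terms, and step 2 regresses the estimated subject-specific slopes on each SNP. It is not the authors' conditional two-step (CTS) variant or their semi-parallel implementation; the data, effect sizes, and SNP counts are made up.

```python
# Hedged sketch of the two-step logic for longitudinal GWAS: (1) fit one linear
# mixed model with random slopes for time, omitting all SNP terms; (2) regress
# the estimated subject-specific slopes on each SNP. Illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(5)
n_subj, n_visits, n_snps = 300, 5, 50

subj = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(n_visits), n_subj)
snps = rng.binomial(2, 0.3, size=(n_subj, n_snps))             # genotype dosages 0/1/2
slope = 0.1 * snps[:, 0] + 0.3 * rng.standard_normal(n_subj)   # SNP 0 affects the slope
y = 1.0 + (0.5 + slope[subj]) * time + rng.standard_normal(subj.size)
df = pd.DataFrame({"subject": subj, "time": time, "y": y})

# Step 1: mixed model with random intercept and slope, no SNP terms.
fit = sm.MixedLM.from_formula("y ~ time", groups="subject", re_formula="~time", data=df).fit()
est_slopes = np.array([fit.random_effects[g]["time"] for g in range(n_subj)])

# Step 2: fast per-SNP linear regressions of the estimated slopes on genotype.
for j in range(3):                                             # show the first three SNPs
    r = stats.linregress(snps[:, j], est_slopes)
    print(f"SNP {j}: beta={r.slope:.3f}, p={r.pvalue:.3g}")
```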

  9. An evaluation of three statistical estimation methods for assessing health policy effects on prescription drug claims.

    PubMed

    Mittal, Manish; Harrison, Donald L; Thompson, David M; Miller, Michael J; Farmer, Kevin C; Ng, Yu-Tze

    2016-01-01

    While the choice of analytical approach affects study results and their interpretation, there is no consensus to guide the choice of statistical approaches to evaluate public health policy change. This study compared and contrasted three statistical estimation procedures in the assessment of a U.S. Food and Drug Administration (FDA) suicidality warning, communicated in January 2008 and implemented in May 2009, on antiepileptic drug (AED) prescription claims. Longitudinal designs were utilized to evaluate Oklahoma (U.S. State) Medicaid claim data from January 2006 through December 2009. The study included 9289 continuously eligible individuals with prevalent diagnoses of epilepsy and/or psychiatric disorder. Segmented regression models using three estimation procedures [i.e., generalized linear models (GLM), generalized estimation equations (GEE), and generalized linear mixed models (GLMM)] were used to estimate trends of AED prescription claims across three time periods: before (January 2006-January 2008); during (February 2008-May 2009); and after (June 2009-December 2009) the FDA warning. All three statistical procedures estimated an increasing trend (P < 0.0001) in AED prescription claims before the FDA warning period. No procedures detected a significant change in trend during (GLM: -30.0%, 99% CI: -60.0% to 10.0%; GEE: -20.0%, 99% CI: -70.0% to 30.0%; GLMM: -23.5%, 99% CI: -58.8% to 1.2%) and after (GLM: 50.0%, 99% CI: -70.0% to 160.0%; GEE: 80.0%, 99% CI: -20.0% to 200.0%; GLMM: 47.1%, 99% CI: -41.2% to 135.3%) the FDA warning when compared to pre-warning period. Although the three procedures provided consistent inferences, the GEE and GLMM approaches accounted appropriately for correlation. Further, marginal models estimated using GEE produced more robust and valid population-level estimations. Copyright © 2016 Elsevier Inc. All rights reserved.
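
    As an illustration of a segmented (interrupted time-series) regression estimated with GEE, the sketch below fits a Poisson GEE with an exchangeable working correlation to simulated monthly claim counts. The segment boundaries, family, and correlation structure are assumptions rather than the study's exact specification.

```python
# Minimal sketch of a segmented regression estimated with GEE to account for
# within-person correlation of monthly prescription counts; simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_people, n_months = 200, 48
warning_month, implementation_month = 25, 41                # segment boundaries (assumed)

pid = np.repeat(np.arange(n_people), n_months)
month = np.tile(np.arange(n_months), n_people)
during = (month >= warning_month) & (month < implementation_month)
after = month >= implementation_month

# Monthly claim counts with a gentle pre-warning upward trend and person effects.
lam = np.exp(-0.5 + 0.01 * month + 0.2 * rng.standard_normal(n_people)[pid])
claims = rng.poisson(lam)
df = pd.DataFrame({"id": pid, "month": month, "during": during.astype(int),
                   "after": after.astype(int), "claims": claims})

gee = sm.GEE.from_formula(
    "claims ~ month + during + during:month + after + after:month",
    groups="id", data=df,
    family=sm.families.Poisson(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(gee.summary().tables[1])
```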

  10. Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI)

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. In this picture, the beads are trapped in the injection port shortly after injection. Swirls of beads indicate, even to the naked eye, that the contents of the TCM are not fully mixed. The beads are similar in size and density to human lymphoid cells. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light.

  11. Human salmonellosis: estimation of dose-illness from outbreak data.

    PubMed

    Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen

    2008-04-01

    The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occurring in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data of 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. A first procedure accounts for stochastic variability whereas a second procedure accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
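
    The minimal sketch below fits a fixed-effects logistic dose-illness curve with two fractional-polynomial dose terms (log dose and square-root dose, an illustrative pair) to simulated outbreak-style counts. The random effects for serovar and food matrix and the bootstrap procedures described above are omitted.

```python
# Illustrative sketch of a dose-illness curve with fractional-polynomial dose
# terms in a binomial GLM; the chosen powers and synthetic outbreak data are
# assumptions, and the mixed-effects and bootstrap components are omitted.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n_outbreaks = 40
dose = 10 ** rng.uniform(1, 8, n_outbreaks)                   # ingested CFU per outbreak
exposed = rng.integers(20, 400, n_outbreaks)
p_true = 1.0 / (1.0 + np.exp(-(-6.0 + 0.8 * np.log(dose))))   # underlying dose-illness curve
ill = rng.binomial(exposed, p_true)

# Fractional-polynomial dose terms: here log(dose) and sqrt(dose) as an example pair.
X = sm.add_constant(np.column_stack([np.log(dose), np.sqrt(dose)]))
glm = sm.GLM(np.column_stack([ill, exposed - ill]), X,
             family=sm.families.Binomial()).fit()
print(glm.params.round(3))          # intercept, log-dose, sqrt-dose coefficients
```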

  12. A hybrid approach to modeling and control of vehicle height for electronically controlled air suspension

    NASA Astrophysics Data System (ADS)

    Sun, Xiaoqiang; Cai, Yingfeng; Wang, Shaohua; Liu, Yanling; Chen, Long

    2016-01-01

    The control problems associated with vehicle height adjustment of electronically controlled air suspension (ECAS) still pose theoretical challenges for researchers, which manifest themselves in the publications on this subject over recent years. This paper deals with modeling and control of a vehicle height adjustment system for ECAS, which is an example of a hybrid dynamical system due to the coexistence and coupling of continuous variables and discrete events. A mixed logical dynamical (MLD) modeling approach is chosen for capturing enough details of the vehicle height adjustment process. The hybrid dynamic model is constructed on the basis of some assumptions and a piecewise linear approximation of component nonlinearities. Then, the on-off statuses of solenoid valves and the piecewise approximation process are described by propositional logic, and the hybrid system is transformed into a set of linear mixed-integer equalities and inequalities, denoted as the MLD model, automatically by HYSDEL. Using this model, a hybrid model predictive controller (HMPC) is tuned based on online mixed-integer quadratic optimization (MIQP). Two different scenarios are considered in the simulation, whose results verify the height adjustment effectiveness of the proposed approach. Explicit solutions of the controller are computed to control the vehicle height adjustment system in real time using an offline multi-parametric programming technology (MPT), thus converting the controller into an equivalent explicit piecewise affine form. Finally, bench experiments for vehicle height lifting, holding and lowering procedures are conducted, which demonstrate that the HMPC can adjust the vehicle height by controlling the on-off statuses of solenoid valves directly. This research proposes a new modeling and control method for vehicle height adjustment of ECAS, which leads to a closed-loop system with favorable dynamical properties.

  13. Mixed Stationary Liquid Phases for Gas-Liquid Chromatography.

    ERIC Educational Resources Information Center

    Koury, Albert M.; Parcher, Jon F.

    1979-01-01

    Describes a laboratory technique for use in an undergraduate instrumental analysis course that, using the interpretation of window diagrams, prepares a mixed liquid phase column for gas-liquid chromatography. A detailed procedure is provided. (BT)

  14. The interactive role of subsynoptic scale jet streak and planetary boundary layer adjustments in organizing an apparently isolated convective complex

    NASA Technical Reports Server (NTRS)

    Kaplan, M. L.; Zack, J. W.; Wong, V. C.; Tuccillo, J. J.; Coats, G. D.

    1982-01-01

    A mesoscale atmospheric simulation system is described that is being developed in order to improve the simulation of subsynoptic and mesoscale adjustments associated with cyclogenesis, severe storm development, and significant atmospheric transport processes. Present emphasis in model development is in the parameterization of physical processes, time-dependent boundary conditions, sophisticated initialization and analysis procedures, nested grid solutions, and applications software development. Basic characteristics of the system as of March 1982 are listed. In a case study, the Grand Island tornado outbreak of 3 June 1980 is considered in substantial detail. Results of simulations with a mesoscale atmospheric simulation system indicate that over the high plains subtle interactions between existing jet streaks and deep, well-mixed boundary layers can lead to well-organized patterns of mesoscale divergence and pressure falls. The amplitude and positioning of these mesoscale features are a function of the subtle nonlinear interaction between the pre-existing jet streak and deep, well-mixed boundary layers. Model results for the case study indicate that the model has the potential for forecasting the precursor mesoscale convective environment.

  15. Computational models for the viscous/inviscid analysis of jet aircraft exhaust plumes

    NASA Astrophysics Data System (ADS)

    Dash, S. M.; Pergament, H. S.; Thorpe, R. D.

    1980-05-01

    Computational models which analyze viscous/inviscid flow processes in jet aircraft exhaust plumes are discussed. These models are component parts of an NASA-LaRC method for the prediction of nozzle afterbody drag. Inviscid/shock processes are analyzed by the SCIPAC code which is a compact version of a generalized shock capturing, inviscid plume code (SCIPPY). The SCIPAC code analyzes underexpanded jet exhaust gas mixtures with a self-contained thermodynamic package for hydrocarbon exhaust products and air. A detailed and automated treatment of the embedded subsonic zones behind Mach discs is provided in this analysis. Mixing processes along the plume interface are analyzed by two upgraded versions of an overlaid, turbulent mixing code (BOAT) developed previously for calculating nearfield jet entrainment. The BOATAC program is a frozen chemistry version of BOAT containing the aircraft thermodynamic package as SCIPAC; BOATAB is an afterburning version with a self-contained aircraft (hydrocarbon/air) finite-rate chemistry package. The coupling of viscous and inviscid flow processes is achieved by an overlaid procedure with interactive effects accounted for by a displacement thickness type correction to the inviscid plume interface.

  16. Computational models for the viscous/inviscid analysis of jet aircraft exhaust plumes. [predicting afterbody drag

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.; Thorpe, R. D.

    1980-01-01

    Computational models which analyze viscous/inviscid flow processes in jet aircraft exhaust plumes are discussed. These models are component parts of an NASA-LaRC method for the prediction of nozzle afterbody drag. Inviscid/shock processes are analyzed by the SCIPAC code which is a compact version of a generalized shock capturing, inviscid plume code (SCIPPY). The SCIPAC code analyzes underexpanded jet exhaust gas mixtures with a self-contained thermodynamic package for hydrocarbon exhaust products and air. A detailed and automated treatment of the embedded subsonic zones behind Mach discs is provided in this analysis. Mixing processes along the plume interface are analyzed by two upgraded versions of an overlaid, turbulent mixing code (BOAT) developed previously for calculating nearfield jet entrainment. The BOATAC program is a frozen chemistry version of BOAT containing the aircraft thermodynamic package as SCIPAC; BOATAB is an afterburning version with a self-contained aircraft (hydrocarbon/air) finite-rate chemistry package. The coupling of viscous and inviscid flow processes is achieved by an overlaid procedure with interactive effects accounted for by a displacement thickness type correction to the inviscid plume interface.

  17. Organizational justice and health: Studying mental preoccupation with work and social support as mediators for lagged and reversed relationships.

    PubMed

    Eib, Constanze; Bernhard-Oettel, Claudia; Magnusson Hanson, Linda L; Leineweber, Constanze

    2018-03-05

    Organizational justice perceptions are considered a predictor of health and well-being. To date, empirical evidence about whether organizational justice perceptions predict health or health predicts organizational justice perceptions is mixed. Furthermore, the processes underlying these relationships are largely unknown. In this article, we study whether bidirectional relationships can be explained by 2 different mediation mechanisms. First, based on the allostatic load model, we suggest that the relationships between organizational justice perceptions and different health indicators are mediated through mental preoccupation with work. Second, based on the affective perception and affective reaction assumption, we investigate if the relationships between different health indicators and organizational justice perceptions are mediated by social support at work. Using a large-scale Swedish panel study (N = 3,236), we test the bidirectional mediating relationships between procedural justice perceptions and self-rated health, depressive symptoms, and sickness absence with a cross-lagged design with 3 waves of data. Significant lagged effects from procedural justice to health were found for models predicting depressive symptoms and sickness absence. Mental preoccupation with work was not found to mediate the longitudinal relationship between procedural justice perceptions and indicators of health. Significant lagged effects from health indicators to procedural justice were found for models involving self-rated health, depressive symptoms, and sickness absence. Social support mediated the longitudinal relationships between all 3 health indicators and procedural justice. Results are discussed in light of previous studies and implications for theory and practice are outlined. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Study of Thermodynamics of Liquid Noble-Metals Alloys Through a Pseudopotential Theory

    NASA Astrophysics Data System (ADS)

    Vora, Aditya M.

    2010-09-01

    The Gibbs-Bogoliubov (GB) inequality is applied to investigate the thermodynamic properties of some equiatomic noble-metal alloys in the liquid phase, such as Au-Cu, Ag-Cu, and Ag-Au, using a well-recognized pseudopotential formalism. For the description of the structure, the well-known Percus-Yevick (PY) hard-sphere model is used as a reference system. By applying a variational method, the hard-core diameters that correspond to the minimum free energy have been found. With this procedure, thermodynamic properties such as the entropy and heat of mixing have been computed. The influence of the local field correction functions of Hartree (H), Taylor (T), Ichimaru-Utsumi (IU), Farid et al. (F), and Sarkar et al. (S) is also investigated. The computed excess entropy compares favourably with experiment for the liquid alloys, while the agreement is poor for the heats of mixing. This may be due to the sensitivity of the heats of mixing to the potential parameters and the dielectric function.

  19. Optimal design of mixed-media packet-switching networks - Routing and capacity assignment

    NASA Technical Reports Server (NTRS)

    Huynh, D.; Kuo, F. F.; Kobayashi, H.

    1977-01-01

    This paper considers a mixed-media packet-switched computer communication network which consists of a low-delay terrestrial store-and-forward subnet combined with a low-cost high-bandwidth satellite subnet. We show how to route traffic via ground and/or satellite links by means of static, deterministic procedures and assign capacities to channels subject to a given linear cost such that the network average delay is minimized. Two operational schemes for this network model are investigated: one is a scheme in which the satellite channel is used as a slotted ALOHA channel; the other is a new multiaccess scheme we propose in which whenever a channel collision occurs, retransmission of the involved packets will route through ground links to their destinations. The performance of both schemes is evaluated and compared in terms of cost and average packet delay tradeoffs for some examples. The results offer guidelines for the design and optimal utilization of mixed-media networks.
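
    For the capacity-assignment part of the problem, the sketch below solves the classic single-network formulation numerically: minimize the average packet delay subject to a linear cost budget. The traffic, cost, and budget numbers are arbitrary, and the mixed ground/satellite routing and slotted ALOHA aspects of the paper are not modeled.

```python
# Hedged sketch of the classic capacity-assignment problem: choose channel
# capacities C_i to minimize the average delay sum_i (lambda_i/gamma)/(C_i - lambda_i)
# subject to a linear cost budget sum_i d_i*C_i <= D. Numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

lam = np.array([5.0, 8.0, 3.0, 6.0])      # link flows (packets/s), assumed
d = np.array([1.0, 0.5, 2.0, 1.0])        # cost per unit capacity on each link, assumed
D = 60.0                                  # total cost budget
gamma = lam.sum()                         # total external traffic

def avg_delay(C):
    return np.sum(lam / gamma / (C - lam))

cons = [{"type": "ineq", "fun": lambda C: D - d @ C}]       # stay within budget
bounds = [(l + 1e-3, None) for l in lam]                    # C_i > lambda_i
C0 = lam + (D - d @ lam) / d / len(lam)                     # feasible starting point
res = minimize(avg_delay, C0, bounds=bounds, constraints=cons)
print("capacities:", res.x.round(2), "average delay:", round(avg_delay(res.x), 4))
```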

  20. A Modular Set of Mixed Reality Simulators for blind and Guided Procedures

    DTIC Science & Technology

    2015-08-01

    Report front matter (OCR residue). Recoverable content: Award Number W81XWH-14-1-0113, Year 1 report; title "A Modular Set of Mixed Reality Simulators for 'blind' and Guided Procedures"; principal investigator Samsun Lampotang; contracting organization University of Florida, Gainesville, FL.

  1. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model with an ordinal response variable and several covariates. The paper addresses three main tasks: the parameter estimation procedure, a simulation study, and application of the model to real data. For parameter estimation, the concepts of thresholds, nested random effects, and the computational algorithm are described. Simulated data are generated under three conditions to assess the effect of different parameter values of the random-effect distributions. The model is then applied to poverty data from nine districts of Java Island: Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap in Central Java; and Blitar, Ngawi, and Jember in East Java. The covariates in the model are province, number of malnutrition cases, number of farmer families, and number of health personnel, all treated as ordinal. The observational unit is the sub-district (kecamatan), which is nested in district (kabupaten), which is in turn nested in province. Simulation results are evaluated using absolute relative bias (ARB) and relative root mean square error (RRMSE); the province-level parameters show the highest bias but the most stable RRMSE across all conditions. The simulation design could be improved by adding further conditions, such as higher correlation between covariates. In the application, only the number of farmer families and the number of health personnel contribute significantly to the poverty level in Central Java and East Java, and only district 2 (Karawang) in province 1 (West Java) has a random effect that differs from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  2. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    PubMed Central

    Bossier, Han; Seurinck, Ruth; Kühn, Simone; Banaschewski, Tobias; Barker, Gareth J.; Bokde, Arun L. W.; Martinot, Jean-Luc; Lemaitre, Herve; Paus, Tomáš; Millenet, Sabina; Moerkerke, Beatrijs

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE) that only uses peak locations, fixed effects, and random effects meta-analysis that take into account both peak location and height] and the amount of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results. PMID:29403344

  3. A procedure to estimate proximate analysis of mixed organic wastes.

    PubMed

    Zaher, U; Buffiere, P; Steyer, J P; Chen, S

    2009-04-01

    Proximate analysis, which measures the total concentrations of carbohydrate, protein, and lipid in solid wastes, is challenging because of the heterogeneous and solid nature of the material. This paper presents a new procedure that was developed to estimate such complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on mass balance of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balance of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial for studying solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrates, proteins, lipids, and inerts concentrations were highly correlated to the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
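
    A toy illustration of the element/COD balance idea follows: with an assumed per-fraction composition matrix (approximate textbook formulas, not the paper's coefficients), measured C, H, N, O, COD, and total-solids contents are inverted for the carbohydrate, protein, lipid, and inert fractions by non-negative least squares.

```python
# Toy illustration of the element/COD balance idea; the composition matrix uses
# approximate textbook formulas (C6H10O5, C5H7NO2, C57H104O6) and is an
# assumption for illustration only, not the paper's coefficients.
import numpy as np
from scipy.optimize import nnls

# Rows: C, H, N, O mass fractions, COD (g O2/g), total solids (g/g);
# columns: carbohydrate, protein, lipid, inert.
A = np.array([
    [0.444, 0.531, 0.774, 0.00],   # carbon
    [0.062, 0.062, 0.118, 0.00],   # hydrogen
    [0.000, 0.124, 0.000, 0.00],   # nitrogen
    [0.494, 0.283, 0.109, 0.00],   # oxygen
    [1.19,  1.42,  2.90,  0.00],   # COD
    [1.00,  1.00,  1.00,  1.00],   # total solids
])

true_fractions = np.array([0.50, 0.25, 0.15, 0.10])          # per g of dry waste
measured = A @ true_fractions + 0.01 * np.random.default_rng(8).standard_normal(6)

estimated, _ = nnls(A, measured)
print("estimated fractions (carb, protein, lipid, inert):", estimated.round(3))
```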

  4. Using Mixed-Mode Contacts in Client Surveys: Getting More Bang for Your Buck

    ERIC Educational Resources Information Center

    Israel, Glenn D.

    2013-01-01

    Surveys are commonly used in Extension to identify client needs or evaluate program outcomes. This article examines how available email addresses can be incorporated into mixed-mode procedures for surveys. When mail and email addresses are used to implement a sequence of email and postal invitations in a mixed-mode survey, response rates were…

  5. Improved silicon carbide for advanced heat engines

    NASA Technical Reports Server (NTRS)

    Whalen, T. J.; Winterbottom, W. L.

    1986-01-01

    Work performed to develop silicon carbide materials of high strength and to form components of complex shape and high reliability is described. A beta-SiC powder and binder system was adapted to the injection molding process, and procedures and process parameters were developed capable of providing a sintered silicon carbide material with improved properties. The initial effort has been to characterize the baseline precursor materials (beta silicon carbide powder and boron and carbon sintering aids), develop mixing and injection molding procedures for fabricating test bars, and characterize the properties of the sintered materials. Parallel studies of various mixing, dewaxing, and sintering procedures have been carried out in order to distinguish process routes for improving material properties. A total of 276 MOR bars of the baseline material have been molded, and 122 bars have been fully processed to a sinter density of approximately 95 percent. The material has a mean MOR room temperature strength of 43.31 ksi (299 MPa), a Weibull characteristic strength of 45.8 ksi (315 MPa), and a Weibull modulus of 8.0. Mean values of the MOR strengths at 1000, 1200, and 1400 C are 41.4, 43.2, and 47.2 ksi, respectively. Strength controlling flaws in this material were found to consist of regions of high porosity and were attributed to agglomerates originating in the initial mixing procedures. The mean stress rupture life at 1400 C of five samples tested at 172 MPa (25 ksi) stress was 62 hours and at 207 MPa (30 ksi) stress was 14 hours. New fluid mixing techniques have been developed which significantly reduce flaw size and improve the strength of the material. Initial MOR tests indicate the strength of the fluid-mixed material exceeds the baseline property by more than 33 percent.
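
    The Weibull figures quoted above can be reproduced in form (not in value) by fitting a two-parameter Weibull distribution to a set of MOR strengths; the sketch below does this on simulated strengths, so the numbers are illustrative only.

```python
# Hedged sketch of estimating a Weibull modulus and characteristic strength
# from MOR strength data; the strengths here are simulated, not the reported data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
# Simulate 30 room-temperature MOR strengths (ksi) from a Weibull distribution
# with modulus ~8 and characteristic strength ~46 ksi.
strengths = stats.weibull_min.rvs(c=8.0, scale=46.0, size=30, random_state=rng)

# Maximum-likelihood fit with the location fixed at zero (two-parameter Weibull).
modulus, _, char_strength = stats.weibull_min.fit(strengths, floc=0)
print(f"Weibull modulus = {modulus:.1f}, characteristic strength = {char_strength:.1f} ksi")
print(f"mean strength = {strengths.mean():.1f} ksi")
```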

  6. Anomalies of the upper water column in the Mediterranean Sea

    NASA Astrophysics Data System (ADS)

    Rivetti, Irene; Boero, Ferdinando; Fraschetti, Simonetta; Zambianchi, Enrico; Lionello, Piero

    2017-04-01

    The evolution of the upper water column in the Mediterranean Sea over more than 60 years is reconstructed in terms of a few parameters describing the mixed layer and the seasonal thermocline. The analysis covers the period 1945-2011 using data from three public sources: MEDAR-MEDATLAS, the World Ocean Database, and the MFS-VOS program. Five procedures for estimating the mixed layer depth are described, discussed, and compared using the 20-year time series of temperature profiles from the DYFAMED station in the Ligurian Sea. On this basis, the so-called three-segment profile model (which approximates the upper water column with three segments representing the mixed layer, thermocline, and deep layer) has been selected for a systematic analysis at the Mediterranean scale. A widespread increase in the thickness and temperature of the mixed layer, an increase in the depth, and a decrease in the temperature of the thermocline base have been observed in summer and autumn during recent decades. It is shown that positive extremes of mixed layer temperature and thickness are potential drivers of the mass mortalities of benthic invertebrates documented since 1983. Hotspots of mixed layer anomalies have also been identified. These results refine previous analyses, showing that ongoing and future warming of the upper Mediterranean is likely to increase mass mortalities by producing environmental conditions beyond the tolerance limits of some benthic species.
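
    The sketch below gives one simplified reading of the three-segment profile model: a constant mixed layer, a linear thermocline, and a constant deep layer, with the mixed layer depth and thermocline base fitted by least squares to a synthetic profile. The study's exact parameterization and quality controls may differ.

```python
# Simplified sketch of a three-segment temperature profile fit: constant mixed
# layer, linear thermocline, constant deep layer. The parameterization is an
# assumption; the synthetic profile stands in for real CTD data.
import numpy as np
from scipy.optimize import curve_fit

def three_segment(z, T_ml, z_ml, z_base, T_deep):
    """Piecewise profile: mixed layer, linear thermocline, deep layer."""
    T = np.full_like(z, T_ml, dtype=float)
    thermo = (z > z_ml) & (z <= z_base)
    T[thermo] = T_ml + (T_deep - T_ml) * (z[thermo] - z_ml) / (z_base - z_ml)
    T[z > z_base] = T_deep
    return T

rng = np.random.default_rng(10)
depth = np.arange(0.0, 200.0, 2.0)
truth = three_segment(depth, 24.0, 30.0, 80.0, 14.0)
observed = truth + 0.15 * rng.standard_normal(depth.size)     # noisy profile

popt, _ = curve_fit(three_segment, depth, observed, p0=[23.0, 20.0, 100.0, 13.0])
T_ml, z_ml, z_base, T_deep = popt
print(f"mixed layer: {T_ml:.1f} degC down to {z_ml:.0f} m; "
      f"thermocline base at {z_base:.0f} m, deep temperature {T_deep:.1f} degC")
```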

  7. CFD Modeling of a Laser-Induced Ethane Pyrolysis in a Wall-less Reactor

    NASA Astrophysics Data System (ADS)

    Stadnichenko, Olga; Snytnikov, Valeriy; Yang, Junfeng; Matar, Omar

    2014-11-01

    Ethylene, as the most important feedstock, is widely used in the chemical industry to produce various rubbers, plastics, and synthetics. A recent study found that IR-laser-induced ethane pyrolysis yields 25% higher ethylene production rates compared with the conventional steam cracking method. Laser-induced pyrolysis is initiated by the generation of radicals upon heating of the ethane and is then followed by an ethane/ethylene autocatalytic reaction in which ethane is converted into ethylene and other light hydrocarbons. This process is governed by micro-mixing of the reactants and the feedstock residence time in the reactor. Under mild turbulent conditions, the turbulence enhances the micro-mixing process and allows a high yield of ethylene. On the other hand, a high flow rate allows only a short residence time in the reactor, which causes incomplete pyrolysis. This work attempts to investigate the interaction between turbulence and the ethane pyrolysis process using the large eddy simulation method. The modelling results could be applied to optimize the reactor design and operating conditions. This work was supported by the Skolkovo Foundation through the UNIHEAT Project.

  8. CFD analyses of combustor and nozzle flowfields

    NASA Astrophysics Data System (ADS)

    Tsuei, Hsin-Hua; Merkle, Charles L.

    1993-11-01

    The objectives of the research are to improve design capabilities for low-thrust rocket engines through understanding of the detailed mixing and combustion processes. A computational fluid dynamics (CFD) technique is employed to model the flowfields within the combustor, nozzle, and near plume field. The computational modeling of the rocket engine flowfields requires the application of the complete Navier-Stokes equations, coupled with species diffusion equations. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered as a coordinated part of an ongoing experimental program at NASA LeRC. The numerical procedure uses both time-marching and time-accurate algorithms, with an LU approximate factorization in time and flux-split upwind differencing in space. The integrity of fuel film cooling along the wall, its effectiveness in mixing with the core flow (including unsteady large-scale effects), the resultant impact on performance, and the expansion of the near plume to the finite pressure of the altitude chamber are addressed.

  9. Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI)

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Aboard the International Space Station (ISS), the Tissue Culture Module (TCM) is the stationary bioreactor vessel in which cell cultures grow. However, for the Cellular Biotechnology Operations Support Systems-Fluid Dynamics Investigation (CBOSS-FDI), color polystyrene beads are used to measure the effectiveness of various mixing procedures. The beads are similar in size and density to human lymphoid cells. Uniform mixing is a crucial component of CBOSS experiments involving the immune response of human lymphoid cell suspensions. The goal is to develop procedures that are both convenient for the flight crew and are optimal in providing uniform and reproducible mixing of all components, including cells. The average bead density in a well mixed TCM will be uniform, with no bubbles, and it will be measured using the absorption of light. In this photograph, beads are trapped in the injection port, with bubbles forming shortly after injection.

  10. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  11. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1984-01-01

    The objective of this investigation is to develop a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies. A three-dimensional multivariate O/I analysis scheme has been developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.
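
    For reference, the generic optimum interpolation update that underlies such analysis schemes is x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b). The sketch below applies it on a 1-D grid with a Gaussian background error covariance and two point observations; these choices are illustrative and are not the GLAS configuration.

```python
# Generic optimum interpolation (statistical analysis) update:
# x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b).
# The 1-D grid, Gaussian covariance, and observations are illustrative only.
import numpy as np

n = 50
grid = np.linspace(0.0, 1000.0, n)                 # 1-D analysis grid (km)
x_b = np.full(n, 280.0)                            # background field (e.g. temperature, K)

# Background error covariance with a 200 km Gaussian correlation length.
sigma_b, L = 1.5, 200.0
B = sigma_b**2 * np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / L) ** 2)

# Two point observations and their (diagonal) error covariance.
obs_idx = [10, 35]
H = np.zeros((2, n)); H[0, obs_idx[0]] = 1.0; H[1, obs_idx[1]] = 1.0
R = np.diag([0.5**2, 0.5**2])
y = np.array([282.0, 279.0])

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)       # optimal weights
x_a = x_b + K @ (y - H @ x_b)
print("analysis at the two observation points:", x_a[obs_idx].round(2))
```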

  12. Comparison of Optimum Interpolation and Cressman Analyses

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Nestler, M. S.

    1985-01-01

    The development of a state-of-the-art optimum interpolation (O/I) objective analysis procedure for use in numerical weather prediction studies was investigated. A three-dimensional multivariate O/I analysis scheme was developed. Some characteristics of the GLAS O/I compared with those of the NMC and ECMWF systems are summarized. Some recent enhancements of the GLAS scheme include a univariate analysis of water vapor mixing ratio, a geographically dependent model prediction error correlation function and a multivariate oceanic surface analysis.

  13. Automated Neuropsychological Assessment Metrics Version 4 (ANAM4): Select Psychometric Properties and Administration Procedures

    DTIC Science & Technology

    2012-12-01

    ...month no-cost extension for this study was approved on 7 November 2012, extending study activities through December 2013. A modified statement of work...approved as part of the no-cost extension and currently pending approval by the USARIEM HURC (Amendment #14), is presented in Table 2. ... SAS 9.2 was used to perform a mixed model analysis with random individual intercept to account for...
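
    The snippet above refers to a mixed model with a random individual intercept fitted in SAS 9.2. As a loose analogue only (not the study's code or data), a random-intercept model can be fitted in Python with statsmodels; all variable names and values below are simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_sessions = 20, 4

# Hypothetical repeated-measures data: each subject tested over several sessions.
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_sessions),
    "session": np.tile(np.arange(1, n_sessions + 1), n_subjects),
})
subject_effect = rng.normal(0.0, 5.0, n_subjects)   # random individual intercepts
df["score"] = (50 + 2.0 * df["session"]
               + subject_effect[df["subject"]]
               + rng.normal(0.0, 2.0, len(df)))

# A random intercept per subject accounts for within-person correlation over time,
# roughly analogous to a mixed model analysis with a random individual intercept.
fit = smf.mixedlm("score ~ session", df, groups=df["subject"]).fit()
print(fit.summary())
```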

  14. Mixture modeling of multi-component data sets with application to ion-probe zircon ages

    NASA Astrophysics Data System (ADS)

    Sambridge, M. S.; Compston, W.

    1994-12-01

    A method is presented for detecting multiple components in a population of analytical observations for zircon and other ages. The procedure uses an approach known as mixture modeling, in order to estimate the most likely ages, proportions and number of distinct components in a given data set. Particular attention is paid to estimating errors in the estimated ages and proportions. At each stage of the procedure several alternative numerical approaches are suggested, each having their own advantages in terms of efficiency and accuracy. The methodology is tested on synthetic data sets simulating two or more mixed populations of zircon ages. In this case the true ages and proportions of each population are known and compare well with the results of the new procedure. Two examples are presented of its use with sets of SHRIMP U-238 - Pb-206 zircon ages from Palaeozoic rocks. A published data set for altered zircons from bentonite at Meishucun, South China, previously treated as a single-component population after screening for gross alteration effects, can be resolved into two components by the new procedure and their ages, proportions and standard errors estimated. The older component, at 530 +/- 5 Ma (2 sigma), is our best current estimate for the age of the bentonite. Mixture modeling of a data set for unaltered zircons from a tonalite elsewhere defines the magmatic U-238 - Pb-206 age at high precision (2 sigma +/- 1.5 Ma), but one-quarter of the 41 analyses detect hidden and significantly older cores.
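
    The paper's own mixture-modeling algorithm and error treatment are not reproduced here. As a rough illustration of the underlying idea only (resolving pooled dates into component ages and proportions), a two-component Gaussian mixture can be fitted with scikit-learn; the synthetic ages below are invented, and this simple fit ignores the per-analysis errors the paper emphasizes.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: two zircon populations near 530 Ma and 560 Ma with analytical scatter.
ages = np.concatenate([rng.normal(530.0, 3.0, 60), rng.normal(560.0, 3.0, 20)])

gm = GaussianMixture(n_components=2, random_state=0).fit(ages.reshape(-1, 1))
print("component ages (Ma):", gm.means_.ravel())
print("proportions:        ", gm.weights_)
```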

  15. Behavior of some singly ionized, heavy-ion impurities during compression in a theta-pinch plasma

    NASA Technical Reports Server (NTRS)

    Jalufka, N. W.

    1975-01-01

    The introduction of a small percentage of an impurity gas containing a desired element into a theta-pinch plasma is a standard procedure used to investigate the spectra and atomic processes of the element. This procedure assumes that the mixing ratio of impurity-to-fill gases remains constant during the collapse and heating phase. Spectroscopic investigations of the constant-mixing-ratio assumption for 2% neon and argon impurities verify the assumption only for the neon impurity. However, for the 2% argon impurity, only 20 to 25% of the argon is in the high-temperature compressed plasma. It is concluded that the constant-mixing-ratio assumption is not applicable to the argon impurity.

  16. Fuel Injector Design Optimization for an Annular Scramjet Geometry

    NASA Technical Reports Server (NTRS)

    Steffen, Christopher J., Jr.

    2003-01-01

    A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
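
    As a schematic of the response-surface step described above (not the study's four-parameter CFD case), the sketch below fits a full quadratic surface to a small set of designed runs and then maximizes the fitted mixing-efficiency surface over the coded design space; the runs and responses are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-factor design in coded units with a mixing-efficiency response.
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],
              [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)
y = np.array([0.86, 0.88, 0.89, 0.90, 0.91, 0.88, 0.90, 0.87, 0.89])

def quad_terms(x1, x2):
    # Full second-order model: 1, x1, x2, x1^2, x2^2, x1*x2
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

beta, *_ = np.linalg.lstsq(quad_terms(X[:, 0], X[:, 1]), y, rcond=None)

def predicted_efficiency(x):
    t = quad_terms(np.array([x[0]]), np.array([x[1]]))
    return (t @ beta)[0]

# Maximize the fitted surface inside the coded design space [-1, 1]^2.
res = minimize(lambda x: -predicted_efficiency(x), x0=[0.0, 0.0],
               bounds=[(-1.0, 1.0), (-1.0, 1.0)])
print("optimal coded settings:", res.x, "predicted efficiency:", -res.fun)
```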

  17. Evaluation of soil modification mixing procedures

    DOT National Transportation Integrated Search

    2001-01-01

    Lime is routinely used as a soil modification agent in Kansas to improve the performance of subgrade soils with the primary goal of reducing volume change. Effective mixing of lime and soil is critical to ensuring that the expected improvements occur...

  18. Virginia method for the design of dense-graded emulsion mixes.

    DOT National Transportation Integrated Search

    1982-01-01

    An investigation into the Illinois method for the design of dense-graded emulsion base mixes had resulted in a report offering several modifications to that procedure. The Bituminous Research Advisory Committee then recommended that the Illinois meth...

  19. Pressurized thermal shock: TEMPEST computer code simulation of thermal mixing in the cold leg and downcomer of a pressurized water reactor. [Creare 61 and 64

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eyler, L.L.; Trent, D.S.

    The TEMPEST computer program was used to simulate fluid and thermal mixing in the cold leg and downcomer of a pressurized water reactor under emergency core cooling high-pressure injection (HPI), which is of concern to the pressurized thermal shock (PTS) problem. Application of the code was made in performing an analysis simulation of a full-scale Westinghouse three-loop plant design cold leg and downcomer. Verification/assessment of the code was performed and analysis procedures developed using data from Creare 1/5-scale experimental tests. Results of three simulations are presented. The first is a no-loop-flow case with high-velocity, low-negative-buoyancy HPI in a 1/5-scale model of a cold leg and downcomer. The second is a no-loop-flow case with low-velocity, high-negative-density (modeled with salt water) injection in a 1/5-scale model. Comparison of TEMPEST code predictions with experimental data for these two cases shows good agreement. The third simulation is a three-dimensional model of one loop of a full-size Westinghouse three-loop plant design. Included in this latter simulation are loop components extending from the steam generator to the reactor vessel and a one-third sector of the vessel downcomer and lower plenum. No data were available for this case. For the Westinghouse plant simulation, thermally coupled conduction heat transfer in structural materials is included. The cold leg pipe and fluid mixing volumes of the primary pump, the stillwell, and the riser to the steam generator are included in the model. In the reactor vessel, the thermal shield, pressure vessel cladding, and pressure vessel wall are thermally coupled to the fluid and thermal mixing in the downcomer. The inlet plenum mixing volume is included in the model. A 10-min (real time) transient beginning at the initiation of HPI is computed to determine temperatures at the beltline of the pressure vessel wall.

  20. Guidelines and Parameter Selection for the Simulation of Progressive Delamination

    NASA Technical Reports Server (NTRS)

    Song, Kyongchan; Davila, Carlos G.; Rose, Cheryl A.

    2008-01-01

    Turon's methodology for determining optimal analysis parameters for the simulation of progressive delamination is reviewed. Recommended procedures for determining analysis parameters for efficient delamination growth predictions using the Abaqus/Standard cohesive element and relatively coarse meshes are provided for single and mixed-mode loading. The Abaqus cohesive element, COH3D8, and a user-defined cohesive element are used to develop finite element models of the double cantilever beam specimen, the end-notched flexure specimen, and the mixed-mode bending specimen to simulate progressive delamination growth in Mode I, Mode II, and mixed-mode fracture, respectively. The predicted responses are compared with their analytical solutions. The results show that for single-mode fracture, the predicted responses obtained with the Abaqus cohesive element correlate well with the analytical solutions. For mixed-mode fracture, it was found that the response predicted using COH3D8 elements depends on the damage evolution criterion that is used. The energy-based criterion overpredicts the peak loads and load-deflection response. The results predicted using a tabulated form of the BK criterion correlate well with the analytical solution and with the results predicted with the user-written element.
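
    For context, the Benzeggagh-Kenane (B-K) criterion mentioned above is usually written in the following standard form (quoted from the general cohesive-zone literature rather than from this report), where the exponent η is a material parameter fitted to mixed-mode test data:

```latex
G_c \;=\; G_{Ic} + \left(G_{IIc} - G_{Ic}\right)
      \left(\frac{G_{II}}{G_{I} + G_{II}}\right)^{\eta}
```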

  1. Mixed time integration methods for transient thermal analysis of structures, appendix 5

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    Mixed time integration methods for transient thermal analysis of structures are studied. An efficient solution procedure for predicting the thermal behavior of aerospace vehicle structures was developed. A 2D finite element computer program incorporating these methodologies is being implemented. The performance of these mixed time finite element algorithms can then be evaluated employing the proposed example problem.

  2. The Influence of Elective Surgery on Functional Health in Veterans with PTSD

    DTIC Science & Technology

    2012-12-21

    elective surgery in veterans with PTSD. Design: A longitudinal, mixed method, quasi-experimental, nonequivalent control group study was conducted. Methods and Sample: Physical and mental health... "Methods Procedures." In Research design: qualitative, quantitative, and mixed methods approaches, 203-226. Thousand Oaks, Calif.: Sage Publications

  3. Performance of fast-setting impression materials in the reproduction of subgingival tooth surfaces without soft tissue retraction.

    PubMed

    Rudolph, Heike; Röhl, Andreas; Walter, Michael H; Luthardt, Ralph G; Quaas, Sebastian

    2014-01-01

    Fast-setting impression materials may be prone to inaccuracies due to accidental divergence from the recommended mixing protocol. This prospective randomized clinical trial aimed to assess three-dimensional (3D) deviations in the reproduction of subgingival tooth surfaces and to determine the effect of either following or purposely diverging from the recommended mixing procedure for a fast-setting addition-curing silicone (AS) and fast-setting polyether (PE). After three impressions each were taken from 96 participants, sawcut gypsum casts were fabricated with a standardized procedure and then optically digitized. Data were assessed with a computer-aided 3D analysis. For AS impressions, multivariate analysis of variance revealed a significant influence of the individual tooth and the degree to which the recommended mixing protocol was violated. For PE impressions, the ambient air temperature and individual tooth showed significant effects, while divergence from the recommended mixing protocol was not of significance. The fast-setting PE material was not affected by changes in the recommended mixing protocol. For the two fast-setting materials examined, no divergences from the recommended mixing protocol of less than 2 minutes led to failures in the reproduction of the subgingival tooth surfaces.

  4. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 6 2010-07-01 Test methods and procedures. 60.93... (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt Facilities § 60.93 Test methods and procedures. (a) In conducting the performance tests required in § 60.8...

  5. [Review on HSPF model for simulation of hydrology and water quality processes].

    PubMed

    Li, Zhao-fu; Liu, Hong-Yu; Li, Yan

    2012-07-01

    Hydrological Simulation Program-FORTRAN (HSPF), written in FORTRAN, is one of the best semi-distributed hydrology and water quality models, which was first developed based on the Stanford Watershed Model. Many studies on HSPF model application were conducted. It can represent the contributions of sediment, nutrients, pesticides, conservatives and fecal coliforms from agricultural areas, continuously simulate water quantity and quality processes, as well as the effects of climate change and land use change on water quantity and quality. HSPF consists of three basic application components: PERLND (Pervious Land Segment), IMPLND (Impervious Land Segment), and RCHRES (free-flowing reach or mixed reservoirs). In general, HSPF has extensive application in the modeling of hydrology or water quality processes and the analysis of climate change and land use change. However, it has limited use in China. The main problems with HSPF include: (1) some algorithms and procedures still need to be revised, (2) due to the high standard for input data, the accuracy of the model is limited by spatial and attribute data, (3) the model is only applicable for the simulation of well-mixed rivers, reservoirs and one-dimensional water bodies, so it must be integrated with other models to solve more complex problems. At present, studies on HSPF model development are still ongoing, such as revision of the model platform, extension of model functions, method development for model calibration, and analysis of parameter sensitivity. With the accumulation of basic data and improvement of data sharing, the HSPF model will be applied more extensively in China.

  6. Development of laboratory mix design procedures for RAP mixes.

    DOT National Transportation Integrated Search

    2013-12-01

    The objective of this study was to evaluate the amount of blending that occurs between RAP and virgin asphalt binders in plant-produced HMA in which RAP is incorporated. This objective was accomplished by testing plant-produced mixture from three...

  7. Investigation of asphalt content design for open-graded bituminous mixes.

    DOT National Transportation Integrated Search

    1974-01-01

    Several design procedures associated with determining the proper asphalt content for open-graded bituminous mixes were investigated. Also considered was the proper amount of tack coat that should be placed on the old surface prior to paving operation...

  8. Alternative acceptance procedures for asphalt mixes.

    DOT National Transportation Integrated Search

    1992-01-01

    By the year 2005, the use of chlorinated solvents is to be eliminated. Faced with this eventuality, VDOT formed a task force to look at alternatives for the acceptance of asphalt mixes. One alternative is to use biodegradable solvents in the extracti...

  9. 29 CFR 1926.1086 - Mixed-gas diving.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Regulations Relating to Labor (Continued) OCCUPATIONAL SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF LABOR (CONTINUED) SAFETY AND HEALTH REGULATIONS FOR CONSTRUCTION Diving Specific Operations Procedures § 1926.1086 Mixed-gas diving. Note: The requirements applicable to construction work under this section are...

  10. Immunopathological and antimicrobial effect of black pepper, ginger and thyme extracts on experimental model of acute hematogenous pyelonephritis in albino rats.

    PubMed

    Nassan, M A; Mohamed, E H

    2014-01-01

    Recent studies showed prominent antimicrobial activity of various plant extracts on certain pathogenic microorganisms, therefore we prepared crude aqueous extracts of black pepper, ginger and thyme and carried out an in vitro study by measuring antimicrobial activity of these extracts using the agar well diffusion method. An in vivo study was carried out on 50 adult healthy male albino rats which were divided into 5 groups, 10 rats each. Group 1: negative control group which received saline solution intragastrically daily; Group 2: Positive control group, injected with mixed bacterial suspension of S.aureus and E.coli as a model of pyelonephritis, then received saline solution intragastrically daily; Group 3: injected with the same dose of mixed bacterial suspension, then received 100 mg/kg/day black pepper extract intragastrically; Group 4: injected with mixed bacterial suspension then received 500 mg/kg/day ginger extract intragastrically. Group 5: injected with mixed bacterial suspension then received 500 mg/kg/day thyme extract intragastrically. All groups were sacrificed after either 1 or 4 weeks. Serum and blood samples were collected for lysozyme activity estimation using agarose lysoplate, measurement of nitric oxide production, and lymphocyte transformation test as well as for counting both total and differential leukocytes and erythrocytes. Kidney samples were tested histopathologically. Both in vivo and in vitro results confirm the efficacy of these extracts as natural antimicrobials and suggest the possibility of using them in treatment procedures.

  11. LVC interaction within a mixed-reality training system

    NASA Astrophysics Data System (ADS)

    Pollock, Brice; Winer, Eliot; Gilbert, Stephen; de la Cruz, Julio

    2012-03-01

    The United States military is increasingly pursuing advanced live, virtual, and constructive (LVC) training systems for reduced cost, greater training flexibility, and decreased training times. Combining the advantages of realistic training environments and virtual worlds, mixed reality LVC training systems can enable live and virtual trainee interaction as if co-located. However, LVC interaction in these systems often requires constructing immersive environments, developing hardware for live-virtual interaction, tracking in occluded environments, and an architecture that supports real-time transfer of entity information across many systems. This paper discusses a system that overcomes these challenges to empower LVC interaction in a reconfigurable, mixed reality environment. This system was developed and tested in an immersive, reconfigurable, and mixed reality LVC training system for the dismounted warfighter at ISU, known as the Veldt, to overcome LVC interaction challenges and as a test bed for cutting-edge technology to meet future U.S. Army battlefield requirements. Trainees interact physically in the Veldt and virtually through commercial and developed game engines. Evaluation involving military trained personnel found this system to be effective, immersive, and useful for developing the critical decision-making skills necessary for the battlefield. Procedural terrain modeling, model-matching database techniques, and a central communication server process all live and virtual entity data from system components to create a cohesive virtual world across all distributed simulators and game engines in real-time. This system achieves rare LVC interaction within multiple physical and virtual immersive environments for training in real-time across many distributed systems.

  12. Application of optimization technique for flood damage modeling in river system

    NASA Astrophysics Data System (ADS)

    Barman, Sangita Deb; Choudhury, Parthasarathi

    2018-04-01

    A river system is defined as a network of channels that drains different parts of a basin, uniting downstream to form a common outflow. Applying the various models found in the literature to a river system with multiple upstream inflows is not always straightforward: it involves a lengthy procedure, and when data sets are unavailable, model calibration and application become difficult. In the case of a river system, the flow modeling can be simplified to a large extent if the channel network is replaced by an equivalent single channel. The present work covers optimization model formulations based on equivalent flow and the application of a mixed-integer-programming-based pre-emptive goal programming model to evaluate flood control alternatives for a real-life river system in India.

  13. Dynamic stresses in a Francis model turbine at deep part load

    NASA Astrophysics Data System (ADS)

    Weber, Wilhelm; von Locquenghien, Florian; Conrad, Philipp; Koutnik, Jiri

    2017-04-01

    A comparison of numerically obtained dynamic stresses in a Francis model turbine at deep part load with experimental ones is presented. Due to the growing share of new renewable energy sources in the electrical power mix, Francis turbines are forced to operate at deep part load in order to compensate for the stochastic nature of wind and solar power and to ensure grid stability. Extending the operating range towards deep part load requires an improved understanding of the harsh flow conditions and their impact on the material fatigue of hydraulic components in order to ensure a long lifetime of the power unit. In this paper, pressure loads on a model turbine runner from an unsteady two-phase computational fluid dynamics simulation at deep part load are used to calculate mechanical stresses by finite element analysis. Therewith, the stress distribution over time is determined. Since only a few runner rotations are simulated due to the enormous numerical cost, more effort has to be spent on the evaluation procedure in order to obtain objective results. By comparing the numerical results with measured strains, the accuracy of the whole simulation procedure is verified.

  14. Prevalence, Risk Factors and Consequent Effect of Dystocia in Holstein Dairy Cows in Iran

    PubMed Central

    Atashi, Hadi; Abdolmohammadi, Alireza; Dadpasand, Mohammad; Asaadi, Anise

    2012-01-01

    The objective of this research was to determine the prevalence, risk factors and consequent effect of dystocia on lactation performance in Holstein dairy cows in Iran. The data set consisted of 55,577 calving records on 30,879 Holstein cows in 30 dairy herds for the period March 2000 to April 2009. Factors affecting dystocia were analyzed using multivariable logistic regression models through the maximum likelihood method in the GENMOD procedure. The effect of dystocia on lactation performance and factors affecting calf birth weight were analyzed using a mixed linear model in the MIXED procedure. The average incidence of dystocia was 10.8% and the mean (SD) calf birth weight was 42.13 (5.42) kg. Primiparous cows had calves with lower body weight and were more likely to require assistance at parturition (p<0.05). Female calves had lower body weight, and had a lower odds ratio for dystocia than male calves (p<0.05). Twins had lower birth weight, and had a higher odds ratio for dystocia than singletons (p<0.05). Cows which gave birth to a calf with higher weight at birth experienced more calving difficulty (OR (95% CI) = 1.1 (1.08–1.11)). Total 305-d milk, fat and protein yield was 135 (23), 3.16 (0.80) and 6.52 (1.01) kg less in cows that experienced dystocia at calving compared with those that did not (p<0.05). PMID:25049584
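
    A rough Python analogue of the logistic part of this analysis (the SAS GENMOD step) is sketched below; the data are simulated and the predictor names are illustrative only, not the study's variables.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "primiparous":  rng.integers(0, 2, n),
    "male_calf":    rng.integers(0, 2, n),
    "birth_weight": rng.normal(42.0, 5.0, n),
})
# Simulate dystocia with odds increasing for primiparous cows, male calves, heavier calves.
logit_p = -6.0 + 0.8 * df["primiparous"] + 0.5 * df["male_calf"] + 0.1 * df["birth_weight"]
df["dystocia"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Multivariable logistic regression; exponentiated coefficients are odds ratios.
fit = smf.logit("dystocia ~ primiparous + male_calf + birth_weight", df).fit(disp=False)
print(np.exp(fit.params).round(2))
```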

  15. Regression analysis of mixed recurrent-event and panel-count data

    PubMed Central

    Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L.

    2014-01-01

    In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1–42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation. PMID:24648408

  16. APT-ALF testing at LTRC in Louisiana : summary report.

    DOT National Transportation Integrated Search

    2012-09-01

    As an integral part of Study 0-6132, the primary objectives of this task included the following: 1) To validate the new generation mix-design procedure (balanced) under APT testing using the ALF machine. 2) To compare the HMA mix performance desi...

  17. Testing and Evaluation of Large Stone Mixes Using Marshall Mix Design Procedures

    DOT National Transportation Integrated Search

    1989-11-01

    Premature rutting of heavy duty asphalt pavements has been increasingly experienced in recent years primarily due to high pressure truck tires and increased wheel loads. Many asphalt technologists believe that the use of large size stone (maximum siz...

  18. Development and application of a statistical quality assessment method for dense-graded mixes.

    DOT National Transportation Integrated Search

    2004-08-01

    This report describes the development of the statistical quality assessment method and the procedure for mapping the measures obtained from the quality assessment method to a composite pay factor. The application to dense-graded mixes is demonstrated...

  19. Mixed-effects varying-coefficient model with skewed distribution coupled with cause-specific varying-coefficient hazard model with random-effects for longitudinal-competing risks data analysis.

    PubMed

    Lu, Tao; Wang, Min; Liu, Guangying; Dong, Guang-Hui; Qian, Feng

    2016-01-01

    It is well known that there is a strong relationship between HIV viral load and CD4 cell counts in AIDS studies. However, the relationship between them changes during the course of treatment and may vary among individuals. During treatment, some individuals may experience terminal events such as death. Because the terminal event may be related to the individual's viral load measurements, the terminal mechanism is non-ignorable. Furthermore, there exist competing risks from multiple types of events, such as AIDS-related death and other death. Most joint models for the analysis of longitudinal-survival data developed in the literature have focused on constant coefficients and assume a symmetric distribution for the endpoints, which does not meet the need to investigate the varying relationship between HIV viral load and CD4 cell counts in practice. We develop a mixed-effects varying-coefficient model with a skewed distribution, coupled with a cause-specific varying-coefficient hazard model with random effects, to capture the varying relationship between the two endpoints in longitudinal-competing risks survival data. A fully Bayesian inference procedure is established to estimate the parameters of the joint model. The proposed method is applied to a multicenter AIDS cohort study. Various scenario-based candidate models that account for partial data features are compared. Some interesting findings are presented.

  20. Development of Efficient Real-Fluid Model in Simulating Liquid Rocket Injector Flows

    NASA Technical Reports Server (NTRS)

    Cheng, Gary; Farmer, Richard

    2003-01-01

    The characteristics of propellant mixing near the injector have a profound effect on the liquid rocket engine performance. However, the flow features near the injector of liquid rocket engines are extremely complicated; for example, supercritical-pressure spray, turbulent mixing, and chemical reactions are present. Previously, a homogeneous spray approach with a real-fluid property model was developed to account for the compressibility and evaporation effects such that thermodynamic properties of a mixture at a wide range of pressures and temperatures can be properly calculated, including liquid-phase, gas-phase, two-phase, and dense fluid regions. The developed homogeneous spray model demonstrated good success in simulating uni-element shear coaxial injector spray combustion flows. However, the real-fluid model suffered a computational deficiency when applied to a pressure-based computational fluid dynamics (CFD) code. The deficiency is caused by the pressure and enthalpy being the independent variables in the solution procedure of a pressure-based code, whereas the real-fluid model utilizes density and temperature as independent variables. The objective of the present research work is to improve the computational efficiency of the real-fluid property model in computing thermal properties. The proposed approach is called an efficient real-fluid model, and the improvement of computational efficiency is achieved by using a combination of a liquid species and a gaseous species to represent a real-fluid species.

  1. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of the mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge. The converged results can be substantially different from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for the results to converge. In either case, the converged results are the accurate descriptions of the interaction potentials of the hetero-nuclear dimers.

  2. Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.

    PubMed

    Deshwar, Amit G; Vembu, Shankar; Morris, Quaid

    2015-01-01

    Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
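
    The treeCRP construction itself is not reproduced here, but the plain Chinese restaurant process it extends can be sampled in a few lines, which may help fix ideas; the function below is a generic textbook sketch, not code from the paper.

```python
import random

def sample_crp(n_customers, alpha, seed=0):
    """Draw one seating arrangement from a Chinese restaurant process.

    Each customer joins an existing table with probability proportional to its
    occupancy, or opens a new table with probability proportional to alpha.
    Returns the list of table sizes (cluster sizes).
    """
    rng = random.Random(seed)
    tables = []
    for _ in range(n_customers):
        weights = tables + [alpha]          # existing tables plus the "new table" option
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)                # open a new table (new cluster/subclone)
        else:
            tables[k] += 1                  # join an existing table
    return tables

print(sample_crp(50, alpha=1.0))
```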

  3. 40 CFR 63.9520 - What procedures must I use to demonstrate initial compliance?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the emission limitations in § 63.9500(a) and (b). (1) Record the date and time of each mix batch. (2) Record the identity of each mix batch using a unique batch ID, as defined in § 63.9565. (3) Measure and record the weight of HAP solvent loaded into the solvent mixer for each mix batch. (4) Measure and record...

  4. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw.

    PubMed

    Baas, P; van de Wiel, B J H; van der Linden, S J A; Bosveld, F C

    2018-01-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably-stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s^{-1} bin-width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than -30 W m^{-2}. A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature and time series of turbulent fluxes showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly-stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that in principle the physical parametrizations of large-scale atmospheric models are sufficiently equipped for modelling stably-stratified conditions for a wide range of forcing conditions.

  5. From Near-Neutral to Strongly Stratified: Adequately Modelling the Clear-Sky Nocturnal Boundary Layer at Cabauw

    NASA Astrophysics Data System (ADS)

    Baas, P.; van de Wiel, B. J. H.; van der Linden, S. J. A.; Bosveld, F. C.

    2018-02-01

    The performance of an atmospheric single-column model (SCM) is studied systematically for stably-stratified conditions. To this end, 11 years (2005-2015) of daily SCM simulations were compared to observations from the Cabauw observatory, The Netherlands. Each individual clear-sky night was classified in terms of the ambient geostrophic wind speed with a 1 m s^{-1} bin-width. Nights with overcast conditions were filtered out by selecting only those nights with an average net radiation of less than - 30 W m^{-2}. A similar procedure was applied to the observational dataset. A comparison of observed and modelled ensemble-averaged profiles of wind speed and potential temperature and time series of turbulent fluxes showed that the model represents the dynamics of the nocturnal boundary layer (NBL) at Cabauw very well for a broad range of mechanical forcing conditions. No obvious difference in model performance was found between near-neutral and strongly-stratified conditions. Furthermore, observed NBL regime transitions are represented in a natural way. The reference model version performs much better than a model version that applies excessive vertical mixing as is done in several (global) operational models. Model sensitivity runs showed that for weak-wind conditions the inversion strength depends much more on details of the land-atmosphere coupling than on the turbulent mixing. The presented results indicate that in principle the physical parametrizations of large-scale atmospheric models are sufficiently equipped for modelling stably-stratified conditions for a wide range of forcing conditions.

  6. Estimation of Enthalpy of Formation of Liquid Transition Metal Alloys: A Modified Prescription Based on Macroscopic Atom Model of Cohesion

    NASA Astrophysics Data System (ADS)

    Raju, Subramanian; Saibaba, Saroja

    2016-09-01

    The enthalpy of formation Δ°H_f is an important thermodynamic quantity, which sheds significant light on fundamental cohesive and structural characteristics of an alloy. However, being a difficult one to determine accurately through experiments, simple estimation procedures are often desirable. In the present study, a modified prescription for estimating Δ°H_f^L of liquid transition metal alloys is outlined, based on the Macroscopic Atom Model of cohesion. This prescription relies on self-consistent estimation of liquid-specific model parameters, namely the electronegativity (ϕ^L) and the bonding electron density (n_b^L). Such unique identification is made through the use of well-established relationships connecting surface tension, compressibility, and molar volume of a metallic liquid with bonding charge density. The electronegativity is obtained through a consistent linear scaling procedure. The preliminary set of values for ϕ^L and n_b^L, together with other auxiliary model parameters, is subsequently optimized to obtain a good numerical agreement between calculated and experimental values of Δ°H_f^L for sixty liquid transition metal alloys. It is found that, with few exceptions, the use of liquid-specific model parameters in the Macroscopic Atom Model yields a physically consistent methodology for reliable estimation of mixing enthalpies of liquid alloys.

  7. An equivalent domain integral for analysis of two-dimensional mixed mode problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Shivakumar, K. N.

    1989-01-01

    An equivalent domain integral (EDI) method for calculating J-integrals for two-dimensional cracked elastic bodies subjected to mixed mode loading is presented. The total and product integrals consist of the sum of an area or domain integral and line integrals on the crack faces. The EDI method gave accurate values of the J-integrals for two mode I and two mixed mode problems. Numerical studies showed that domains consisting of one layer of elements are sufficient to obtain accurate J-integral values. Two procedures for separating the individual modes from the domain integrals are presented. The procedure that uses the symmetric and antisymmetric components of the stress and displacement fields to calculate the individual modes gave accurate values of the integrals for all the problems analyzed.
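
    For reference, the contour form of the J-integral that the equivalent domain integral recasts as an area integral is the standard Rice expression below (general fracture-mechanics notation, not copied from the paper), with W the strain energy density, T_i the traction, u_i the displacement, and Γ a path surrounding the crack tip of a crack lying along the x-axis:

```latex
J = \int_{\Gamma} \left( W \, \mathrm{d}y \;-\; T_i \,\frac{\partial u_i}{\partial x}\, \mathrm{d}s \right)
```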

  8. Rapid learning curve assessment in an ex vivo training system for microincisional glaucoma surgery.

    PubMed

    Dang, Yalong; Waxman, Susannah; Wang, Chao; Parikh, Hardik A; Bussel, Igor I; Loewen, Ralitsa T; Xia, Xiaobo; Lathrop, Kira L; Bilonick, Richard A; Loewen, Nils A

    2017-05-09

    Increasing prevalence and cost of glaucoma have increased the demand for surgeons well trained in newer, microincisional surgery. These procedures occur in a highly confined space, making them difficult to learn by observation or assistance alone as is currently done. We hypothesized that our ex vivo outflow model is sensitive enough to allow computing individual learning curves to quantify progress and refine techniques. Seven trainees performed nine trabectome-mediated ab interno trabeculectomies in pig eyes (n = 63). An expert surgeon rated the procedure using an Operating Room Score (ORS). The extent of outflow beds accessed was measured with canalograms. Data was fitted using mixed effect models. ORS reached a half-maximum on an asymptote after only 2.5 eyes. Surgical time decreased by 1.4 minutes per eye in a linear fashion. The ablation arc followed an asymptotic function with a half-maximum inflection point after 5.3 eyes. Canalograms revealed that this progress did not correlate well with improvement in outflow, suggesting instead that about 30 eyes are needed for true mastery. This inexpensive pig eye model provides a safe and effective microsurgical training model and allows objective quantification of outcomes for the first time.
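
    As an illustration of the kind of asymptotic learning curve described here (not the authors' mixed-effects fit), a saturating curve with an explicit half-maximum parameter can be fitted with scipy; the scores below are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def asymptotic(x, y_max, x_half):
    # Saturating learning curve: reaches half of y_max after x_half eyes.
    return y_max * x / (x_half + x)

eyes = np.arange(1, 10)                                    # 9 training eyes per trainee
score = np.array([14, 22, 27, 30, 31, 33, 33, 34, 35.0])   # invented operating-room scores

params, cov = curve_fit(asymptotic, eyes, score, p0=[35.0, 2.0])
print("estimated maximum score:", params[0], "half-maximum after", params[1], "eyes")
```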

  9. Basic Aspects of Deep Soil Mixing Technology Control

    NASA Astrophysics Data System (ADS)

    Egorova, Alexandra A.; Rybak, Jarosław; Stefaniuk, Damian; Zajączkowski, Przemysław

    2017-10-01

    Improving a soil is a process of increasing its physical and mechanical properties without changing its natural structure. Improvement of the soil subbase is achieved by means of binding materials or other methods that establish a strong connection between soil particles. The method of DSM (Deep Soil Mixing) columns was invented in Japan in the 1970s. The main reason for designing cement-soil columns is to improve the properties of local soils (such as strength and stiffness) by mixing them with various cementing materials. Cement and calcium are the most commonly used binders. However, new research undertaken worldwide shows that, apart from these materials, gypsum or fly ash can also be successfully used. As Deep Soil Mixing is still under development, predicting the mechanical properties of columns in particular soils and the amount of cementing material needed in the formed columns is very difficult. That is why research is carried out to find out which binders and mixing technology should be used. The paper presents several remarks on the testing procedures related to quality and capacity control of Deep Soil Mixing columns. Soil improvement methods, their advantages and limitations are briefly described. The authors analyse the suitability of selected testing methods at subsequent stages of the design and execution of special foundation works. Chosen examples from engineering practice form the basis for recommendations for the control procedures. The presented case studies, concerning field capacity testing and laboratory procedures on various categories of soil-cement samples, were drawn from R&D and consulting work carried out by Wroclaw University of Science and Technology. Special emphasis is paid to climate conditions, which may affect the feasibility of performing and controlling DSM techniques in polar zones, with special regard to sample curing.

  10. Development of improved pavement rehabilitation procedures based on FWD backcalculation.

    DOT National Transportation Integrated Search

    2015-01-01

    Hot Mix Asphalt (HMA) overlays are among the most effective maintenance and rehabilitation alternatives in improving the structural as well as functional performance of flexible pavements. HMA overlay design procedures can be based on: (1) engine...

  11. Including mixed methods research in systematic reviews: examples from qualitative syntheses in TB and malaria control.

    PubMed

    Atkins, Salla; Launiala, Annika; Kagaha, Alexander; Smith, Helen

    2012-04-30

    Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research.

  12. Computer program for calculating the flow field of supersonic ejector nozzles

    NASA Technical Reports Server (NTRS)

    Anderson, B. H.

    1974-01-01

    An analytical procedure for computing the performance of supersonic ejector nozzles is presented. This procedure includes real sonic line effects and an interaction analysis for the mixing process between the two streams. The procedure is programmed in FORTRAN 4 and has operated successfully on IBM 7094, IBM 360, CDC 6600, and Univac 1108.

  13. 40 CFR Appendix E to Part 63 - Monitoring Procedure for Nonthoroughly Mixed Open Biological Treatment Systems at Kraft Pulp...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Systems at Kraft Pulp Mills Under Unsafe Sampling Conditions I. Purpose This procedure is required to be... conditions. The purpose of this procedure is to estimate the concentration of HAP within the open biological... zones with internal recycling between the units and assuming that two Monod biological kinetic...

  14. Performing concurrent operations in academic vascular neurosurgery does not affect patient outcomes.

    PubMed

    Zygourakis, Corinna C; Lee, Janelle; Barba, Julio; Lobo, Errol; Lawton, Michael T

    2017-11-01

    OBJECTIVE Concurrent surgeries, also known as "running two rooms" or simultaneous/overlapping operations, have recently come under intense scrutiny. The goal of this study was to evaluate the operative time and outcomes of concurrent versus nonconcurrent vascular neurosurgical procedures. METHODS The authors retrospectively reviewed 1219 procedures performed by 1 vascular neurosurgeon from 2012 to 2015 at the University of California, San Francisco. Data were collected on patient age, sex, severity of illness, risk of mortality, American Society of Anesthesiologists (ASA) status, procedure type, admission type, insurance, transfer source, procedure time, presence of resident or fellow in operating room (OR), number of co-surgeons, estimated blood loss (EBL), concurrent vs nonconcurrent case, severe sepsis, acute respiratory failure, postoperative stroke causing neurological deficit, unplanned return to OR, 30-day mortality, and 30-day unplanned readmission. For aneurysm clipping cases, data were also obtained on intraoperative aneurysm rupture and postoperative residual aneurysm. Chi-square and t-tests were performed to compare concurrent versus nonconcurrent cases, and then mixed-effects models were created to adjust for different procedure types, patient demographics, and clinical indicators between the 2 groups. RESULTS There was a significant difference in procedure type for concurrent (n = 828) versus nonconcurrent (n = 391) cases. Concurrent cases were more likely to be routine/elective admissions (53% vs 35%, p < 0.001) and physician referrals (59% vs 38%, p < 0.001). This difference in patient/case type was also reflected in the lower severity of illness, risk of death, and ASA class in the concurrent versus nonconcurrent cases (p < 0.01). Concurrent cases had significantly longer procedural times (243 vs 213 minutes) and more unplanned 30-day readmissions (5.7% vs 3.1%), but shorter mean length of hospital stay (11.2 vs 13.7 days), higher rates of discharge to home (66% vs 51%), lower 30-day mortality rates (3.1% vs 6.1%), lower rates of acute respiratory failure (4.3% vs 8.2%), and decreased 30-day unplanned returns to the OR (3.3% vs 6.9%; all p < 0.05). Rates of severe sepsis, postoperative stroke, intraoperative aneurysm rupture, and postoperative aneurysm residual were equivalent between the concurrent and nonconcurrent groups (all p values nonsignificant). Mixed-effects models showed that after controlling for procedure type, patient demographics, and clinical indicators, there was no significant difference in acute respiratory failure, severe sepsis, 30-day readmission, postoperative stroke, EBL, length of stay, discharge status, or intraoperative aneurysm rupture between concurrent and nonconcurrent cases. Unplanned return to the OR and 30-day mortality were significantly lower in concurrent cases (odds ratio 0.55, 95% confidence interval 0.31-0.98, p = 0.0431, and odds ratio 0.81, p < 0.001, respectively), but concurrent cases had significantly longer procedure durations (odds ratio 21.73; p < 0.001). CONCLUSIONS Overall, there was a significant difference in the types of concurrent versus nonconcurrent cases, with more routine/elective cases for less sick patients scheduled in an overlapping fashion. After adjusting for patient demographics, procedure type, and clinical indicators, concurrent cases had longer procedure times, but equivalent patient outcomes, as compared with nonconcurrent vascular neurosurgical procedures.

  15. Assessing total fungal concentrations on commercial passenger aircraft using mixed-effects modeling.

    PubMed

    McKernan, Lauralynn Taylor; Hein, Misty J; Wallingford, Kenneth M; Burge, Harriet; Herrick, Robert

    2008-01-01

    The primary objective of this study was to compare airborne fungal concentrations onboard commercial passenger aircraft at various in-flight times with concentrations measured inside and outside airport terminals. A secondary objective was to investigate the use of mixed-effects modeling of repeat measures from multiple sampling intervals and locations. Sequential triplicate culturable and total spore samples were collected on wide-body commercial passenger aircraft (n = 12) in the front and rear of coach class during six sampling intervals: boarding, midclimb, early cruise, midcruise, late cruise, and deplaning. Comparison samples were collected inside and outside airport terminals at the origin and destination cities. The MIXED procedure in SAS was used to model the mean and the covariance matrix of the natural log transformed fungal concentrations. Five covariance structures were tested to determine the appropriate models for analysis. Fixed effects considered included the sampling interval and, for samples obtained onboard the aircraft, location (front/rear of coach section), occupancy rate, and carbon dioxide concentrations. Overall, both total culturable and total spore fungal concentrations were low while the aircraft were in flight. No statistical difference was observed between measurements made in the front and rear sections of the coach cabin for either culturable or total spore concentrations. Both culturable and total spore concentrations were significantly higher outside the airport terminal compared with inside the airport terminal (p-value < 0.0001) and inside the aircraft (p-value < 0.0001). On the aircraft, the majority of total fungal exposure occurred during the boarding and deplaning processes, when the aircraft utilized ancillary ventilation and passenger activity was at its peak.

  16. Evaluation of Transverse Thermal Stresses in Composite Plates Based on First-Order Shear Deformation Theory

    NASA Technical Reports Server (NTRS)

    Rolfes, R.; Noor, A. K.; Sparr, H.

    1998-01-01

    A postprocessing procedure is presented for the evaluation of the transverse thermal stresses in laminated plates. The analytical formulation is based on the first-order shear deformation theory and the plate is discretized by using a single-field displacement finite element model. The procedure is based on neglecting the derivatives of the in-plane forces and the twisting moments, as well as the mixed derivatives of the bending moments, with respect to the in-plane coordinates. The calculated transverse shear stiffnesses reflect the actual stacking sequence of the composite plate. The distributions of the transverse stresses through-the-thickness are evaluated by using only the transverse shear forces and the thermal effects resulting from the finite element analysis. The procedure is implemented into a postprocessing routine which can be easily incorporated into existing commercial finite element codes. Numerical results are presented for four- and ten-layer cross-ply laminates subjected to mechanical and thermal loads.

  17. Polar versus Cartesian velocity models for maneuvering target tracking with IMM

    NASA Astrophysics Data System (ADS)

    Laneuville, Dann

    This paper compares various model sets in different IMM filters for the maneuvering target tracking problem. The aim is to see whether we can improve the tracking performance of what is certainly the most widely used model set in the literature for the maneuvering target tracking problem: a Nearly Constant Velocity model and a Nearly Coordinated Turn model. Our new challenger set consists of a mixed Cartesian position and polar velocity state vector to describe the uniform motion segments and is augmented with the turn rate to obtain the second model for the maneuvering segments. This paper also gives a general procedure to discretize up to second order any non-linear continuous time model with linear diffusion. Comparative simulations on an air defence scenario with a 2D radar, show that this new approach improves significantly the tracking performance in this case.
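
    For orientation, a common discrete-time form of the Cartesian nearly-coordinated-turn model referred to above, with known turn rate omega and sampling period T, is sketched below; this is the textbook parameterization, not the mixed Cartesian-position/polar-velocity state vector proposed in the paper.

```python
import numpy as np

def coordinated_turn_F(omega, T):
    """Discrete state-transition matrix for state [x, vx, y, vy] with constant turn rate omega."""
    if abs(omega) < 1e-9:                      # falls back to nearly-constant velocity
        return np.array([[1, T, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 1, T],
                         [0, 0, 0, 1]], dtype=float)
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([[1, s / omega,       0, -(1 - c) / omega],
                     [0, c,               0, -s],
                     [0, (1 - c) / omega, 1, s / omega],
                     [0, s,               0, c]], dtype=float)

x = np.array([0.0, 100.0, 0.0, 0.0])           # target moving at 100 m/s along x
x_next = coordinated_turn_F(np.deg2rad(3.0), 1.0) @ x
print(x_next)
```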

  18. Evolution of the concentration PDF in random environments modeled by global random walk

    NASA Astrophysics Data System (ADS)

    Suciu, Nicolae; Vamos, Calin; Attinger, Sabine; Knabner, Peter

    2013-04-01

    The evolution of the probability density function (PDF) of concentrations of chemical species transported in random environments is often modeled by ensembles of notional particles. The particles move in physical space along stochastic-Lagrangian trajectories governed by Ito equations, with drift coefficients given by the local values of the resolved velocity field and diffusion coefficients obtained by stochastic or space-filtering upscaling procedures. A general model for the sub-grid mixing also can be formulated as a system of Ito equations solving for trajectories in the composition space. The PDF is finally estimated by the number of particles in space-concentration control volumes. In spite of their efficiency, Lagrangian approaches suffer from two severe limitations. Since the particle trajectories are constructed sequentially, the demanded computing resources increase linearly with the number of particles. Moreover, the need to gather particles at the center of computational cells to perform the mixing step and to estimate statistical parameters, as well as the interpolation of various terms to particle positions, inevitably produce numerical diffusion in either particle-mesh or grid-free particle methods. To overcome these limitations, we introduce a global random walk method to solve the system of Ito equations in physical and composition spaces, which models the evolution of the random concentration's PDF. The algorithm consists of a superposition on a regular lattice of many weak Euler schemes for the set of Ito equations. Since all particles starting from a site of the space-concentration lattice are spread in a single numerical procedure, one obtains PDF estimates at the lattice sites at computational costs comparable with those for solving the system of Ito equations associated to a single particle. The new method avoids the limitations concerning the number of particles in Lagrangian approaches, completely removes the numerical diffusion, and speeds up the computation by orders of magnitude. The approach is illustrated for the transport of passive scalars in heterogeneous aquifers, with hydraulic conductivity modeled as a random field.

  19. The semi-diurnal cycle of dissipation in a ROFI: model-measurement comparisons

    NASA Astrophysics Data System (ADS)

    Simpson, John H.; Burchard, Hans; Fisher, Neil R.; Rippeth, Tom P.

    2002-07-01

    The Liverpool Bay Region of Freshwater Influence in the Irish Sea exhibits strong horizontal gradients which interact with the dominant tidal flow. A 25 h series of measurements of the cycle of turbulent dissipation with the FLY dissipation profiler shows a strong asymmetry between ebb and flood which is associated with a cycle of increasing stratification on the ebb and progressive mixing on the flood which results in vertical homogeneity as high water is approached. At this time strong dissipation extends throughout the water column, in contrast to the ebb when there is a near shutdown of dissipation in the upper half of the column. The cycle of stratification and dissipation is closely consistent for the two semi-diurnal tidal cycles observed. We have attempted to simulate this situation, which involves a complex suite of processes including tidal straining and mixing, using a version of the k-ɛ closure scheme in a 1-d dynamical model which is forced by a combination of the observed tidal flow and horizontal temperature and salinity gradients. The latter were measured directly at the end of the observational series but, in order to focus on the cycle of dissipation, the correct reproduction of the temperature and salinity cycle can be assured by a nudging procedure which obliges the model temperature and salinity values to track the observations. With or without this procedure, the model gives a reasonable account of the dissipation and its asymmetric behaviour on ebb and flood, although nudging improves the timing of peak dissipation in the upper part of the water column near high water. The model has also been used to examine the ratio of shear production (P/ɛ) and buoyancy inputs to dissipation (B/ɛ). The variation of these quantities over the tidal cycle confirms the important role of convective motions forced by tidal straining near the end of the flood phase of the tide.
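
    The nudging step described above can be illustrated as a simple Newtonian relaxation of the modelled temperature and salinity profiles toward the observed profiles; the relaxation timescale, time step, and profile values below are assumptions for illustration, not values from the paper.

        import numpy as np

        def nudge(model_profile, obs_profile, dt, tau):
            """Relax a modelled profile (e.g. T or S on the vertical grid) toward the
            observed profile with timescale tau; smaller tau tracks the data more tightly."""
            return model_profile + (obs_profile - model_profile) * dt / tau

        T_model = np.linspace(12.0, 10.0, 40)   # assumed surface-to-bed temperature profile
        T_obs = np.linspace(12.5, 10.0, 40)     # assumed observed profile
        T_model = nudge(T_model, T_obs, dt=30.0, tau=3600.0)   # 30 s step, 1 h relaxation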

  20. A brief dataset on the model-based evaluation of the growth performance of Bacillus coagulans and l-lactic acid production in a lignin-supplemented medium.

    PubMed

    Glaser, Robert; Venus, Joachim

    2017-04-01

    The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and l-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium (R. Glaser and J. Venus, 2016) [1]". This data survey provides information on the characterization of three Bacillus coagulans strains. Information on cofermentation of lignocellulose-related sugars in lignin-containing media is given. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab-scale fermentation procedures are examined by adjustment of a Monod kinetics-based growth model. Lignin consumption is analyzed using the data on decolorization of a lignin-supplemented minimal medium.
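
    As an illustration of the kind of parameter adjustment to a logistic growth model mentioned above, the sketch below fits a three-parameter logistic curve to optical-density readings with scipy; the synthetic data, sampling grid, and parameter values are placeholders, not values from the dataset.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            """Logistic growth: carrying capacity K, growth rate r, inflection time t0."""
            return K / (1.0 + np.exp(-r * (t - t0)))

        t = np.linspace(0, 24, 25)          # hours (assumed sampling grid)
        od = logistic(t, 1.8, 0.6, 8.0) + 0.02 * np.random.default_rng(1).normal(size=t.size)
        (K, r, t0), _ = curve_fit(logistic, t, od, p0=[1.0, 0.5, 10.0])
        print(f"K={K:.2f}, r={r:.2f} 1/h, t0={t0:.1f} h")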

  1. Hot recycling of asphaltic concrete pavement : IR-15-3(8)121, Wildcat to Pine Creek

    DOT National Transportation Integrated Search

    1981-02-01

    There are various methods of pavement material recycling. This report is devoted to hot-mix plant recycling considerations and procedures. The several phases of the hot-mix recycling process are discussed separately, including removal and size reduct...

  2. Development of mix design procedures for gap-graded asphalt-rubber asphalt concrete

    DOT National Transportation Integrated Search

    2007-11-01

    A research project was conducted to identify and document current modifications to ARIZONA 815c (75-blow Marshall method) used to develop gap-graded asphalt rubber asphalt concrete (GG AR AC) mix designs, and to develop and test improvements to provi...

  3. 16 CFR 1209.5 - Test procedures for corrosiveness.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., combed, or otherwise mixed to reasonably assure homogeneity in the cellulose insulation test specimens... Evaluating Corrosion Test Specimens,” published by American Society for Testing and Materials, 1916 Race... acid (plus water). To avoid injury in this and subsequent techniques when mixing acid and water...

  4. 24 CFR 7.36 - Hearing.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race... and the time frames for executing the right to request an administrative hearing. Note: Where a mixed... unless the MSPB has dismissed the mixed case complaint or appeal for jurisdictional reasons. (See 29 CFR...

  5. 24 CFR 7.36 - Hearing.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... EMPLOYMENT OPPORTUNITY; POLICY, PROCEDURES AND PROGRAMS Equal Employment Opportunity Without Regard to Race... and the time frames for executing the right to request an administrative hearing. Note: Where a mixed... unless the MSPB has dismissed the mixed case complaint or appeal for jurisdictional reasons. (See 29 CFR...

  6. Application of FTA technology to extraction of sperm DNA from mixed body fluids containing semen.

    PubMed

    Fujita, Yoshihiko; Kubo, Shin-ichi

    2006-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. In this study, we report a rapid and simple method of extracting DNA from sperm when body fluids mixed with semen were collected using FTA cards. After proteinase K digestion of the sperm and body fluid mixture, the washed pellet suspension as the sperm fraction and the concentrated supernatant as the epithelial cell fraction were respectively applied to FTA cards containing DTT. The FTA cards were dried, then directly added to a polymerase chain reaction (PCR) mix and processed by PCR. The time required from separation of the mixed fluid to completion of the sperm- and epithelial-origin DNA extractions was only about 2.5-3 h. Furthermore, the procedure was extremely simple. Our FTA card-based DNA extraction procedure is therefore considered suitable for application to routine work.

  7. Excimer laser correction of hyperopia, hyperopic and mixed astigmatism: past, present, and future.

    PubMed

    Lukenda, Adrian; Martinović, Zeljka Karaman; Kalauz, Miro

    2012-06-01

    The broad acceptance of "spot scanning" or "flying spot" excimer lasers in the last decade has enabled the domination of corneal ablative laser surgery over other refractive surgical procedures for the correction of hyperopia, hyperopic and mixed astigmatism. This review outlines the most important reasons why the ablative laser correction of hyperopia, hyperopic and mixed astigmatism for many years lagged behind that of myopia. Most of today's scanning laser systems, used in the LASIK and PRK procedures, can safely and effectively perform low, moderate and high hyperopic and hyperopic astigmatic corrections. The introduction of these laser platforms has also significantly improved the long-term refractive stability of hyperopic treatments. In the future, further improvements in femtosecond and nanosecond technology, eye-tracker systems, and the development of new customized algorithms, such as the ray-tracing method, could additionally increase the upper limit for the safe and predictable corneal ablative laser correction of hyperopia, hyperopic and mixed astigmatism.

  8. Robotic partial nephrectomy - Evaluation of the impact of case mix on the procedural learning curve.

    PubMed

    Roman, A; Ahmed, K; Challacombe, B

    2016-05-01

    Although robotic partial nephrectomy (RPN) is an emerging technique for the management of small renal masses, this approach is technically demanding. To date, there are limited data on the nature and progression of the learning curve (LC) in RPN. The aims were to analyse the impact of case mix on the RPN LC and to model the learning curve. The records of the first 100 RPN carried out at our institution by a single surgeon (B.C.) between June 2010 and December 2013 were analysed. Cases were split based on their Preoperative Aspects and Dimensions Used for an Anatomical (PADUA) score into the following groups: 6-7, 8-9 and >10. Using a split-group (20 patients in each group) and incremental analysis, the mean, the curve of best fit and R² values were calculated for each group. Of 100 patients (F:28, M:72), the mean age was 56.4 ± 11.9 years. The numbers of patients in the PADUA score groups 6-7, 8-9 and >10 were 61, 32 and 7, respectively. An increase in the incidence of more complex cases throughout the cohort was evident within the 8-9 group (2010: 1 case, 2013: 16 cases). The learning process did not significantly affect the proxies used to assess surgical proficiency in this study (operative time and warm ischaemia time). Case difficulty is an important parameter that should be considered when evaluating procedural learning curves. There is no single well-fitting model that can be used to model the learning curve. With increasing experience, clinicians tend to operate on more difficult cases. Copyright © 2016 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
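
    The curve-of-best-fit and R² analysis mentioned above can be sketched as fitting a simple learning curve to operative time against case number; the power-law functional form, synthetic data, and parameter values below are illustrative assumptions, not the study's results.

        import numpy as np
        from scipy.optimize import curve_fit

        def learning_curve(n, a, b):
            """Power-law learning curve: expected operative time for the n-th case."""
            return a * n ** (-b)

        case_no = np.arange(1, 101)
        op_time = learning_curve(case_no, 240.0, 0.1) + np.random.default_rng(2).normal(0, 15, 100)
        (a, b), _ = curve_fit(learning_curve, case_no, op_time, p0=[200.0, 0.05])
        resid = op_time - learning_curve(case_no, a, b)
        r2 = 1 - resid.var() / op_time.var()            # coefficient of determination
        print(f"a={a:.0f} min, b={b:.3f}, R^2={r2:.2f}")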

  9. Novel Model of Somatosensory Nerve Transfer in the Rat.

    PubMed

    Paskal, Adriana M; Paskal, Wiktor; Pelka, Kacper; Podobinska, Martyna; Andrychowski, Jaroslaw; Wlodarski, Pawel K

    2018-05-09

    Nerve transfer (neurotization) is a reconstructive procedure in which the distal denervated nerve is joined with a proximal healthy nerve of a less significant function. Neurotization models described to date are limited to avulsed roots or pure motor nerve transfers, neglecting the clinically significant mixed nerve transfer. Our aim was to determine whether femoral-to-sciatic nerve transfer could be a feasible model of mixed nerve transfer. Three Sprague Dawley rats were subjected to unilateral femoral-to-sciatic nerve transfer. After 50 days, functional recovery was evaluated with a prick test. At the same time, axonal tracers were injected into each sciatic nerve distally to the lesion site, to determine nerve fibers' regeneration. In the prick test, the rats retracted their hind limbs after stimulation, although the reaction was moderately weaker on the operated side. Seven days after injection of axonal tracers, dyes were visualized by confocal microscopy in the spinal cord. Innervation of the recipient nerve originated from higher segments of the spinal cord than that on the untreated side. The results imply that the femoral nerve axons, ingrown into the damaged sciatic nerve, reinnervate distal targets with a functional outcome.

  10. Field samples of hot mix as an acceptance procedure : final report.

    DOT National Transportation Integrated Search

    1983-12-01

    Shifting the sampling site of asphalt concrete from the plant to the roadway necessitates a modification of the Marshall procedure. The effect of such a modification on the Marshall properties and resultant process levels in a Statistically Oriented ...

  11. Access disparities to Magnet hospitals for patients undergoing neurosurgical operations

    PubMed Central

    Missios, Symeon; Bekelis, Kimon

    2017-01-01

    Background Centers of excellence focusing on quality improvement have demonstrated superior outcomes for a variety of surgical interventions. We investigated the presence of access disparities to hospitals recognized by the Magnet Recognition Program of the American Nurses Credentialing Center (ANCC) for patients undergoing neurosurgical operations. Methods We performed a cohort study of all neurosurgery patients who were registered in the New York Statewide Planning and Research Cooperative System (SPARCS) database from 2009–2013. We examined the association of African-American race and lack of insurance with Magnet status hospitalization for neurosurgical procedures. A mixed effects propensity adjusted multivariable regression analysis was used to control for confounding. Results During the study period, 190,535 neurosurgical patients met the inclusion criteria. Using a multivariable logistic regression, we demonstrate that African-Americans had lower admission rates to Magnet institutions (OR 0.62; 95% CI, 0.58–0.67). This persisted in a mixed effects logistic regression model (OR 0.77; 95% CI, 0.70–0.83) to adjust for clustering at the patient county level, and a propensity score adjusted logistic regression model (OR 0.75; 95% CI, 0.69–0.82). Additionally, lack of insurance was associated with lower admission rates to Magnet institutions (OR 0.71; 95% CI, 0.68–0.73), in a multivariable logistic regression model. This persisted in a mixed effects logistic regression model (OR 0.72; 95% CI, 0.69–0.74), and a propensity score adjusted logistic regression model (OR 0.72; 95% CI, 0.69–0.75). Conclusions Using a comprehensive all-payer cohort of neurosurgery patients in New York State we identified an association of African-American race and lack of insurance with lower rates of admission to Magnet hospitals. PMID:28684152
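
    A minimal sketch of the kind of mixed effects logistic regression described above (a random intercept for patient county, fixed effects for race and insurance status), here using statsmodels' Bayesian mixed GLM fitted by variational Bayes. The synthetic data frame, column names, and fitting choices are assumptions for illustration, not the study's actual SPARCS model specification.

        import numpy as np
        import pandas as pd
        from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

        # synthetic stand-in data: one row per admission (the study used SPARCS records)
        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "african_american": rng.binomial(1, 0.2, n),
            "uninsured": rng.binomial(1, 0.1, n),
            "county": rng.integers(0, 20, n),
        })
        logit = 0.4 - 0.3 * df.african_american - 0.3 * df.uninsured
        df["magnet"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        vc = {"county": "0 + C(county)"}                 # random intercept per patient county
        model = BinomialBayesMixedGLM.from_formula(
            "magnet ~ african_american + uninsured", vc, df)
        result = model.fit_vb()                          # variational Bayes fit
        print(result.summary())                          # fixed effects on the log-odds scale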

  12. [Surgical treatment for morbid obesity].

    PubMed

    Pablo-Pantoja, Juan

    2004-01-01

    Obesity has become a serious public health problem in Mexico, and at present the best treatment for morbid obesity is surgery. Recently, laparoscopic techniques have become available for the treatment of this disease. Surgery is indicated in patients with a body mass index (BMI) >35 kg/m2 and with comorbidity. Restrictive procedures such as adjustable gastric banding and vertical banded gastroplasty have a lower incidence of postoperative complications; however, efficacy in terms of weight loss is not as good as with malabsorptive or mixed procedures. Patients who undergo these malabsorptive or mixed procedures (gastric bypass, biliopancreatic diversion) are at higher risk of postoperative complications. To date, gastric bypass is considered the standard of care for the treatment of morbid obesity; it achieves a loss of approximately 70% of excess body weight, with an acceptable rate of complications.

  13. An investigation of the uniform random number generator

    NASA Technical Reports Server (NTRS)

    Temple, E. C.

    1982-01-01

    Most random number generators that are in use today are of the congruential form X(i+1) = (A·X(i) + C) mod M, where A, C, and M are nonnegative integers. If C=0, the generator is called the multiplicative type, and those for which C≠0 are called mixed congruential generators. It is easy to see that congruential generators will repeat a sequence of numbers after a maximum of M values have been generated. The number of values that a procedure generates before restarting the sequence is called the length or the period of the generator. Generally, it is desirable to make the period as long as possible. A detailed discussion of congruential generators is given. Also, several promising procedures that differ from the multiplicative and mixed procedures are discussed.
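
    A minimal sketch of the mixed congruential recurrence described above; the constants are the classic Numerical Recipes parameters, chosen only to illustrate a full-period (period M = 2^32) mixed generator, not the specific parameter sets studied in the report.

        def mixed_congruential(seed, a=1664525, c=1013904223, m=2**32):
            """Yield pseudo-random integers via X(i+1) = (a*X(i) + c) mod m."""
            x = seed
            while True:
                x = (a * x + c) % m
                yield x

        gen = mixed_congruential(seed=12345)
        uniforms = [next(gen) / 2**32 for _ in range(5)]   # scale to [0, 1)
        print(uniforms)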

  14. A coupled chemo-thermo-hygro-mechanical model of concrete at high temperature and failure analysis

    NASA Astrophysics Data System (ADS)

    Li, Xikui; Li, Rongtao; Schrefler, B. A.

    2006-06-01

    A hierarchical mathematical model for analyses of coupled chemo-thermo-hygro-mechanical behaviour in concretes at high temperature is presented. The concretes are modelled as unsaturated deforming reactive porous media filled with two immiscible pore fluids, i.e. the gas mixture and the liquid mixture, in immiscible-miscible levels. The thermo-induced desalination process is particularly integrated into the model. The chemical effects of both the desalination and the dehydration processes on the material damage and the degradation of the material strength are taken into account. The mathematical model consists of a set of coupled partial differential equations governing the mass balance of the dry air, the mass balance of the water species, the mass balance of the matrix components dissolved in the liquid phases, the enthalpy (energy) balance and the momentum balance of the whole medium mixture. The governing equations, the state equations for the model and the constitutive laws used in the model are given. A mixed weak form for the finite element solution procedure is formulated for the numerical simulation of chemo-thermo-hygro-mechanical behaviours. Special consideration is given to the spatial discretization of the hyperbolic equation with its non-self-adjoint operator. Numerical results demonstrate the performance and the effectiveness of the proposed model and its numerical procedure in reproducing coupled chemo-thermo-hygro-mechanical behaviour in concretes subjected to fire and thermal radiation.

  15. Ocean Turbulence. Paper 2; One-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Howard, A.; Cheng, Y.; Dubovikov, M. S.

    1999-01-01

    We develop and test a 1-point closure turbulence model with the following features: 1) we include the salinity field and derive the expression for the vertical turbulent diffusivities of momentum K_m, heat K_h and salt K_s as a function of two stability parameters: the Richardson number R_i (stratification vs. shear) and the Turner number R_rho (salinity gradient vs. temperature gradient). 2) to describe turbulent mixing below the mixed layer (ML), all previous models have adopted three adjustable "background diffusivities" for momentum, heat and salt. We propose a model that avoids such adjustable diffusivities. We assume that below the ML, the three diffusivities have the same functional dependence on R_i and R_rho as derived from the turbulence model. However, in order to compute R_i below the ML, we use data of vertical shear due to wave-breaking measured by Gargett et al. The procedure frees the model from adjustable background diffusivities and indeed we employ the same model throughout the entire vertical extent of the ocean. 3) in the local model, the turbulent diffusivities K_m, K_h and K_s are given as analytical functions of R_i and R_rho. 5) the model is used in an O-GCM and several results are presented to exhibit the effect of double diffusion processes. 6) the code is available upon request.

  16. Development and Application of Benchmark Examples for Mixed-Mode I/II Quasi-Static Delamination Propagation Predictions

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2012-01-01

    The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.

  17. Finding fixed satellite service orbital allotments with a k-permutation algorithm

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1990-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the fixed satellite service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: the problem of ordering the satellites and the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, has been developed to find solutions to SLPs. Solutions to small sample problems are presented and analyzed on the basis of calculated interferences.

  18. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE PAGES

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.; ...

    2017-02-06

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.

  19. A mixing timescale model for TPDF simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuron, Michael; Ren, Zhuyin; Hawkes, Evatt R.

    Transported probability density function (TPDF) methods are an attractive modeling approach for turbulent flames as chemical reactions appear in closed form. However, molecular micro-mixing needs to be modeled and this modeling is considered a primary challenge for TPDF methods. In the present study, a new algebraic mixing rate model for TPDF simulations of turbulent premixed flames is proposed, which is a key ingredient in commonly used molecular mixing models. The new model aims to properly account for the transition in reactive scalar mixing rate behavior from the limit of turbulence-dominated mixing to molecular mixing behavior in flamelets. An a priori assessment of the new model is performed using direct numerical simulation (DNS) data of a lean premixed hydrogen–air jet flame. The new model accurately captures the mixing timescale behavior in the DNS and is found to be a significant improvement over the commonly used constant mechanical-to-scalar mixing timescale ratio model. An a posteriori TPDF study is then performed using the same DNS data as a numerical test bed. The DNS provides the initial conditions and time-varying input quantities, including the mean velocity, turbulent diffusion coefficient, and modeled scalar mixing rate for the TPDF simulations, thus allowing an exclusive focus on the mixing model. Here, the new mixing timescale model is compared with the constant mechanical-to-scalar mixing timescale ratio coupled with the Euclidean Minimum Spanning Tree (EMST) mixing model, as well as a laminar flamelet closure. It is found that the laminar flamelet closure is unable to properly capture the mixing behavior in the thin reaction zones regime while the constant mechanical-to-scalar mixing timescale model under-predicts the flame speed. Furthermore, the EMST model coupled with the new mixing timescale model provides the best prediction of the flame structure and flame propagation among the models tested, as the dynamics of reactive scalar mixing across different flame regimes are appropriately accounted for.

  20. Laboratory performance evaluation of CIR-emulsion and its comparison against CIR-foam test results from phase III.

    DOT National Transportation Integrated Search

    2009-12-01

    Currently, no standard mix design procedure is available for CIR-emulsion in Iowa. The CIR-foam mix : design process developed during the previous phase is applied for CIR-emulsion mixtures with varying : emulsified asphalt contents. Dynamic modulus ...

  1. The Complexities of Teachers' Commitment to Environmental Education: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Sosu, Edward M.; McWilliam, Angus; Gray, Donald S.

    2008-01-01

    This article argues that a mixed methods approach is useful in understanding the complexity that underlies teachers' commitment to environmental education. Using sequential and concurrent procedures, the authors demonstrate how different methodological approaches highlighted different aspects of teacher commitment. The quantitative survey examined…

  2. Dynamic trends in cardiac surgery: why the logistic EuroSCORE is no longer suitable for contemporary cardiac surgery and implications for future risk models

    PubMed Central

    Hickey, Graeme L.; Grant, Stuart W.; Murphy, Gavin J.; Bhabra, Moninder; Pagano, Domenico; McAllister, Katherine; Buchan, Iain; Bridgewater, Ben

    2013-01-01

    OBJECTIVES Progressive loss of calibration of the original EuroSCORE models has necessitated the introduction of the EuroSCORE II model. Poor model calibration has important implications for clinical decision-making and risk adjustment of governance analyses. The objective of this study was to explore the reasons for the calibration drift of the logistic EuroSCORE. METHODS Data from the Society for Cardiothoracic Surgery in Great Britain and Ireland database were analysed for procedures performed at all National Health Service and some private hospitals in England and Wales between April 2001 and March 2011. The primary outcome was in-hospital mortality. EuroSCORE risk factors, overall model calibration and discrimination were assessed over time. RESULTS A total of 317 292 procedures were included. Over the study period, mean age at surgery increased from 64.6 to 67.2 years. The proportion of procedures that were isolated coronary artery bypass grafts decreased from 67.5 to 51.2%. In-hospital mortality fell from 4.1 to 2.8%, but the mean logistic EuroSCORE increased from 5.6 to 7.6%. The logistic EuroSCORE remained a good discriminant throughout the study period (area under the receiver-operating characteristic curve between 0.79 and 0.85), but calibration (observed-to-expected mortality ratio) fell from 0.76 to 0.37. Inadequate adjustment for decreasing baseline risk affected calibration considerably. DISCUSSIONS Patient risk factors and case-mix in adult cardiac surgery change dynamically over time. Models like the EuroSCORE that are developed using a ‘snapshot’ of data in time do not account for this and can subsequently lose calibration. It is therefore important to regularly revalidate clinical prediction models. PMID:23152436
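
    For readers unfamiliar with the two metrics tracked above, the sketch below computes calibration as the observed-to-expected mortality ratio and discrimination as the area under the ROC curve for one batch of cases, using scikit-learn for the AUC; the predicted risks and outcomes are placeholder arrays, not registry data.

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)
        predicted_risk = rng.uniform(0.01, 0.20, size=5000)    # model-predicted mortality risk
        died = rng.binomial(1, 0.5 * predicted_risk)           # observed outcomes (risk has drifted)

        oe_ratio = died.mean() / predicted_risk.mean()         # calibration: observed / expected
        auc = roc_auc_score(died, predicted_risk)              # discrimination
        print(f"O/E = {oe_ratio:.2f}, AUC = {auc:.2f}")        # O/E well below 1 means over-prediction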

  3. Jointly modeling longitudinal proportional data and survival times with an application to the quality of life data in a breast cancer trial.

    PubMed

    Song, Hui; Peng, Yingwei; Tu, Dongsheng

    2017-04-01

    Motivated by the joint analysis of longitudinal quality of life data and recurrence-free survival times from a cancer clinical trial, we present in this paper two approaches to jointly model the longitudinal proportional measurements, which are confined to a finite interval, and the survival data. Both approaches assume a proportional hazards model for the survival times. For the longitudinal component, the first approach applies the classical linear mixed model to logit-transformed responses, while the second approach directly models the responses using a simplex distribution. A semiparametric method based on a penalized joint likelihood generated by the Laplace approximation is derived to fit the joint model defined by the second approach. The proposed procedures are evaluated in a simulation study and applied to the analysis of the breast cancer data that motivated this research.

  4. Mixing of thawed coagulation samples prior to testing: Is any technique better than another?

    PubMed

    Lima-Oliveira, Gabriel; Adcock, Dorothy M; Salvagno, Gian Luca; Favaloro, Emmanuel J; Lippi, Giuseppe

    2016-12-01

    This study aimed to investigate whether the mixing technique could influence the results of routine and specialized clotting tests on post-thawed specimens. The sample population consisted of 13 healthy volunteers. Venous blood was collected by evacuated system into three 3.5 mL tubes containing 0.109 mol/L buffered sodium citrate. The three blood tubes of each subject were pooled immediately after collection inside a Falcon 15 mL tube, then mixed by 6 gentle end-over-end inversions, and centrifuged at 1500 g for 15 min. The plasma pool of each subject was then divided into 4 identical aliquots. All aliquots were thawed after 2 days of freezing at -70°C. Immediately afterwards, the plasma of the four paired aliquots was treated using four different techniques: (a) reference procedure, entailing 6 gentle end-over-end inversions; (b) placing the sample on a blood tube rocker (i.e., rotor mixing) for 5 min to induce agitation and mixing; (c) use of a vortex mixer for 20 s to induce agitation and mixing; and (d) no mixing. The significance of differences against the reference technique for mixing thawed plasma specimens (i.e., 6 gentle end-over-end inversions) was assessed with a paired Student's t-test. The statistical significance was set at p<0.05. As compared to the reference 6-time gentle inversion technique, statistically significant differences were only observed for fibrinogen and factor VIII in plasma mixed on the tube rocker. Some trends were observed in the remaining cases, but the bias did not achieve statistical significance. We hence suggest that each laboratory should standardize the procedure for mixing thawed plasma according to a single technique. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
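
    A minimal sketch of the paired comparison described above, testing one alternative mixing technique against the reference inversion technique with a paired Student's t-test at p < 0.05; the fibrinogen values are invented placeholders, not the study's measurements.

        import numpy as np
        from scipy.stats import ttest_rel

        # fibrinogen (g/L) for the same 13 subjects under two mixing techniques (assumed values)
        reference_inversion = np.array([2.8, 3.1, 2.6, 3.4, 2.9, 3.0, 2.7,
                                        3.2, 2.8, 3.3, 3.0, 2.9, 3.1])
        rocker_mixing = reference_inversion + np.random.default_rng(4).normal(0.10, 0.05, 13)

        t_stat, p_value = ttest_rel(rocker_mixing, reference_inversion)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # p < 0.05 -> significant vs reference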

  5. Post Accident Procedures for Chemicals and Propellants.

    DTIC Science & Technology

    1982-09-01

    [Table-of-contents fragment] Topics covered include: Methods and Procedures; Overview of Emergency Response Procedures and Resources Available; Criteria for Twelve Critical Operations; On-Scene Methods for Identifying the Ingredients; and Establishing a Protocol for Selecting the Hazards Mitigation and Cleanup Methods for Single Material Spills and Multiple Materials Mixing.

  6. The application of mixed methods designs to trauma research.

    PubMed

    Creswell, John W; Zhang, Wanqing

    2009-12-01

    Despite the use of quantitative and qualitative data in trauma research and therapy, mixed methods studies in this field have not been analyzed to help researchers designing investigations. This discussion begins by reviewing four core characteristics of mixed methods research in the social and human sciences. Combining these characteristics, the authors focus on four select mixed methods designs that are applicable in trauma research. These designs are defined and their essential elements noted. Applying these designs to trauma research, a search was conducted to locate mixed methods trauma studies. From this search, one sample study was selected, and its characteristics of mixed methods procedures noted. Finally, drawing on other mixed methods designs available, several follow-up mixed methods studies were described for this sample study, enabling trauma researchers to view design options for applying mixed methods research in trauma investigations.

  7. Rural-urban differences in dental service use among children enrolled in a private dental insurance plan in Wisconsin: analysis of administrative data.

    PubMed

    Bhagavatula, Pradeep; Xiang, Qun; Szabo, Aniko; Eichmiller, Fredrick; Kuthy, Raymond A; Okunseri, Christopher E

    2012-12-21

    Studies on rural-urban differences in dental care have primarily focused on differences in utilization rates and preventive dental services. Little is known about rural-urban differences in the use of a wider range of dental procedures. This study examined patterns of preventive, restorative, endodontic, and extraction procedures provided to children enrolled in Delta Dental of Wisconsin (DDWI). We analyzed DDWI enrollment and claims data for children aged 0-18 years from 2002 to 2008. We modified and used a rural and urban classification based on ZIP codes developed by the Wisconsin Area Health Education Center (AHEC). We categorized the ZIP codes into 6 AHEC categories (3 rural and 3 urban). Descriptive and multivariable analyses using generalized linear mixed models (GLMM) were used to examine the patterns of dental procedures provided to children. The Tukey-Kramer adjustment was used to control for multiple comparisons. Approximately 50%, 67%, and 68% of enrollees in inner-city Milwaukee, Rural 1 (less than 2500 people), and suburban Milwaukee had at least one annual dental visit, respectively. Children in inner-city Milwaukee had the lowest utilization rates for all procedures examined, except for endodontic procedures. Compared to children from inner-city Milwaukee, children in other locations had significantly more preventive procedures. Children in Rural 1 ZIP codes had more restorative, endodontic and extraction procedures, compared to children from all other regions. We found significant geographic variation in dental procedures received by children enrolled in DDWI.

  8. Polymer modified asphalt in hot mix pavement : interim report, executive summary.

    DOT National Transportation Integrated Search

    1988-11-01

    This report presents a summary of a literature review to determine the most appropriate testing procedures for use with polymer modified asphalts. In examining testing procedures, it was necessary to study the effects of polymer modifiers on both bin...

  9. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1983-01-01

    Presents background information and procedures for producing sequential color reactions. With proper selection of pH indicator dyes, most school colors can be produced after mixing colorless solutions. Procedures for producing white/purple and red/white/blue colors are outlined. Also outlines preparation of 2, 4, 2', 4',…

  10. Models for nearly every occasion: Part I - One box models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-01-01

    The standard "well mixed room," "one box" model cannot be used to predict occupational exposures whenever the scenario involves the use of local controls. New "constant emission" one box models are proposed that permit either local exhaust or local exhaust with filtered return, coupled with general room ventilation or the recirculation of a portion of the general room exhaust. New "two box" models are presented in Part II of this series. Both steady state and transient models were developed. The steady state equation for each model, including the standard one box steady state model, is augmented with an additional factor reflecting the fraction of time the substance was generated during each task. This addition allows the easy calculation of the average exposure for cyclic and irregular emission patterns, provided the starting and ending concentrations are zero or near zero, or the cumulative time across all tasks is long (e.g., several tasks to a full shift). The new models introduce additional variables, such as the efficiency of the local exhaust to immediately capture freshly generated contaminant and the filtration efficiency whenever filtered exhaust is returned to the workspace. Many of the model variables are knowable (e.g., room volume and ventilation rate). A structured procedure for calibrating a model to a work scenario is introduced that can be applied to both continuous and cyclic processes. The "calibration" procedure generates estimates of the generation rate and all of the remaining unknown model variables.
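
    For context, the sketch below implements the textbook steady-state "well mixed room" one box model, C = G/Q, scaled by the fraction of the shift during which the contaminant is generated. It illustrates only the standard model that the paper starts from, not the new local-control variants, and all input values are assumptions.

        def one_box_average(G_mg_per_min, Q_m3_per_min, fraction_generating=1.0):
            """Average concentration (mg/m^3) from the standard steady-state one box,
            well mixed room model, scaled by the fraction of time the source is active."""
            return fraction_generating * G_mg_per_min / Q_m3_per_min

        # example: 50 mg/min source, 20 m^3/min general ventilation, source active 60% of the shift
        print(one_box_average(G_mg_per_min=50.0, Q_m3_per_min=20.0, fraction_generating=0.6))  # 1.5 mg/m^3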

  11. Remission of type 2 diabetes mellitus after bariatric surgery - comparison between procedures.

    PubMed

    Fernández-Soto, María L; Martín-Leyva, Ana; González-Jiménez, Amalia; García-Rubio, Jesús; Cózar-Ibáñez, Antonio; Zamora-Camacho, Francisco J; Leyva-Martínez, María S; Jiménez-Ríos, Jose A; Escobar-Jiménez, Fernándo

    2017-01-01

    We aimed to assess the mid-term type 2 diabetes mellitus recovery patterns in morbidly obese patients by comparing relevant physiological parameters of bariatric surgery patients between two types of surgical procedures: mixed (Roux-en-Y gastric bypass and biliopancreatic diversion) and restrictive (sleeve gastrectomy). This is a prospective and observational study of co-morbid type 2 diabetes mellitus evolution in 49 morbidly obese patients: 37 underwent mixed surgery procedures and 12 a restrictive surgery procedure. We recorded weight, height, body mass index, and glycaemic, lipid, and nutritional blood parameters prior to the procedure, as well as six and twelve months post-operatively. In addition, we tested for differences in patient recovery and investigated predictive factors in diabetes remission. Both glycaemic and lipid profiles diminished significantly to healthy levels by 6 and 12 months post intervention. Type 2 diabetes mellitus showed remission in more than 80% of patients of both types of surgical procedures, with no difference between them. Baseline body mass index, glycated haemoglobin, and insulin intake, among others, were shown to be valuable predictors of diabetes remission one year after the intervention. The choice of the type of surgical procedure did not significantly affect the remission rate of type 2 diabetes mellitus in morbidly obese patients. (Endokrynol Pol 2017; 68 (1): 18-25).

  12. Effect of splitting a mixed-model line on shortening the line length under open- and closed-boundary working area settings

    NASA Astrophysics Data System (ADS)

    Zhang, Donghao; Matsuura, Haruki; Asada, Akiko

    2017-04-01

    Some automobile factories have segmented mixed-model production lines into shorter sub-lines according to part group, such as engine, trim, and powertrain. The effects of splitting a line into sub-lines have been reported from the standpoints of worker motivation, productivity improvement, and autonomy based on risk spreading. There has been no mention of the possibility of shortening the line length by altering the product sequence using sub-lines. The purpose of the present paper is to determine the conditions under which sub-lines reduce the line length and the degree to which the line length may be shortened. The line lengths for a non-split line and a line that has been split into sub-lines are compared using three methods for determining the working area, the standard closed boundary, the optimized open boundary, and real-life constant-length stations. The results are discussed by analyzing the upper and lower bounds of the line length. Based on these results, a procedure for deciding whether or not to split a production line is proposed.

  13. Walking through the statistical black boxes of plant breeding.

    PubMed

    Xavier, Alencar; Muir, William M; Craig, Bruce; Rainey, Katy Martin

    2016-10-01

    The main statistical procedures in plant breeding are based on Gaussian processes and can be computed through mixed linear models. Intelligent decision making relies on our ability to extract useful information from data to help us achieve our goals more efficiently. Many plant breeders and geneticists perform statistical analyses without understanding the underlying assumptions of the methods or their strengths and pitfalls. In other words, they treat these statistical methods (software and programs) like black boxes. Black boxes represent complex pieces of machinery with contents that are not fully understood by the user. The user sees the inputs and outputs without knowing how the outputs are generated. By providing a general background on statistical methodologies, this review aims (1) to introduce basic concepts of machine learning and its applications to plant breeding; (2) to link classical selection theory to current statistical approaches; (3) to show how to solve mixed models and extend their application to pedigree-based and genomic-based prediction; and (4) to clarify how the algorithms of genome-wide association studies work, including their assumptions and limitations.
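
    To make one of these "black boxes" concrete, the sketch below solves Henderson's mixed model equations for a toy model y = Xb + Zu + e with var(u) = sigma2_u*I and var(e) = sigma2_e*I. The data and variance ratio are invented, and real breeding software would replace the identity with pedigree or genomic relationship matrices; this is a textbook illustration, not the review's own code.

        import numpy as np

        # toy data: 6 records, 2 fixed effects (intercept, treatment), 3 random genetic effects
        X = np.array([[1, 0], [1, 0], [1, 1], [1, 1], [1, 0], [1, 1]], dtype=float)
        Z = np.array([[1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1]], dtype=float)
        y = np.array([10.2, 9.8, 11.5, 11.9, 10.7, 12.1])
        lam = 2.0                                   # sigma2_e / sigma2_u (assumed known)

        # Henderson's mixed model equations:
        # [ X'X   X'Z         ] [b]   [X'y]
        # [ Z'X   Z'Z + lam*I ] [u] = [Z'y]
        top = np.hstack([X.T @ X, X.T @ Z])
        bot = np.hstack([Z.T @ X, Z.T @ Z + lam * np.eye(Z.shape[1])])
        rhs = np.concatenate([X.T @ y, Z.T @ y])
        sol = np.linalg.solve(np.vstack([top, bot]), rhs)
        b_hat, u_hat = sol[:2], sol[2:]             # BLUE of fixed effects, BLUP of random effects
        print(b_hat, u_hat)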

  14. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    PubMed Central

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
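
    A minimal sketch of an LMM with by-subject random intercepts and slopes like the one described above, fitted with statsmodels on synthetic trial-level data; the column names, effect sizes, and data are assumptions for illustration, and the design-inherent trial imbalance handled in the paper is not reproduced here.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # synthetic trial data: reaction time with coded spatial, object, and attraction predictors
        rng = np.random.default_rng(1)
        subjects = np.repeat(np.arange(30), 100)                 # 30 subjects x 100 trials
        spatial, obj, attraction = (rng.binomial(1, 0.5, subjects.size) for _ in range(3))
        rt = (400 + rng.normal(0, 30, 30)[subjects]              # subject-specific mean RT
              + 20 * spatial + 10 * obj - 5 * attraction
              + rng.normal(0, 50, subjects.size))
        df = pd.DataFrame(dict(rt=rt, spatial=spatial, obj=obj, attraction=attraction,
                               subject=subjects))

        model = smf.mixedlm("rt ~ spatial + obj + attraction", df, groups=df["subject"],
                            re_formula="~spatial + obj + attraction")  # random intercepts and slopes
        print(model.fit().summary())     # fixed effects plus subject-level variances/covariances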

  15. Pilot Subjective Assessments During an Investigation of Separation Function Allocation Using a Human-In-The-Loop Simulation

    NASA Technical Reports Server (NTRS)

    Burke, Kelly A.; Wing, David J.; Lewis, Timothy

    2013-01-01

    Two human-in-the-loop simulation experiments were conducted to investigate allocation of separation assurance functions between ground and air and between humans and automation. The experiments modeled a mixed-operations concept in which aircraft receiving ground-based separation services shared the airspace with aircraft providing their own separation service (i.e., self-separation). The two experiments, one pilot-focused and the other controller-focused, addressed selected key issues of mixed operations and modeling an emergence of NextGen technologies and procedures. This paper focuses on the results of the subjective assessments of pilots collected during the pilot-focused human-in-the-loop simulation, specifically workload and situation awareness. Generally the results revealed that across all conditions, pilots' perceived workload was low to medium, with the highest reported levels of workload occurring when the pilots experienced a loss of separation during the scenario. Furthermore, the results from the workload data and situation awareness data were complimentary such that when pilots reported lower levels of workload they also experienced higher levels of situation awareness.

  16. On the mixing and evaporation of secondary organic aerosol components.

    PubMed

    Loza, Christine L; Coggon, Matthew M; Nguyen, Tran B; Zuend, Andreas; Flagan, Richard C; Seinfeld, John H

    2013-06-18

    The physical state and chemical composition of an organic aerosol affect its degree of mixing and its interactions with condensing species. We present here a laboratory chamber procedure for studying the effect of the mixing of organic aerosol components on particle evaporation. The procedure is applied to the formation of secondary organic aerosol (SOA) from α-pinene and toluene photooxidation. SOA evaporation is induced by heating the chamber aerosol from room temperature (25 °C) to 42 °C over 7 h and detected by a shift in the peak diameter of the SOA size distribution. With this protocol, α-pinene SOA is found to be more volatile than toluene SOA. When SOA is formed from the two precursors sequentially, the evaporation behavior of the SOA most closely resembles that of SOA from the second parent hydrocarbon, suggesting that the structure of the mixed SOA resembles a core of SOA from the initial precursor coated by a layer of SOA from the second precursor. Such a core-and-shell configuration of the organic aerosol phases implies limited mixing of the SOA from the two precursors on the time scale of the experiments, consistent with a high viscosity of at least one of the phases.

  17. Comparison of Patient Outcomes in 3725 Overlapping vs 3633 Nonoverlapping Neurosurgical Procedures Using a Single Institution's Clinical and Administrative Database.

    PubMed

    Zygourakis, Corinna C; Keefe, Malla; Lee, Janelle; Barba, Julio; McDermott, Michael W; Mummaneni, Praveen V; Lawton, Michael T

    2017-02-01

    Overlapping surgery is a common practice to improve surgical efficiency, but there are limited data on its safety. To analyze the patient outcomes of overlapping vs nonoverlapping surgeries performed by multiple neurosurgeons. Retrospective review of 7358 neurosurgical procedures, 2012 to 2015, at an urban academic hospital. Collected variables: patient age, gender, insurance, American Society of Anesthesiologists score, severity of illness, mortality risk, admission type, transfer source, procedure type, surgery date, number of cosurgeons, presence of neurosurgery resident/fellow/another attending, and overlapping vs nonoverlapping surgery. Outcomes: procedure time, length of stay, estimated blood loss, discharge location, 30-day mortality, 30-day readmission, return to operating room, acute respiratory failure, and severe sepsis. Statistics: univariate, then multivariate mixed-effect models. Overlapping surgery patients (n = 3725) were younger and had lower American Society of Anesthesiologists scores, severity of illness, and mortality risk (P < .0001) than nonoverlapping surgery patients (n = 3633). Overlapping surgeries had longer procedure times (214 vs 172 min; P < .0001), but shorter length of stay (7.3 vs 7.9 d; P = .010) and lower estimated blood loss (312 vs 363 mL; P = .003). Overlapping surgery patients were more likely to be discharged home (73.6% vs 66.2%; P < .0001), and had lower rates of mortality (1.3% vs 2.5%; P = .0005) and acute respiratory failure (1.8% vs 2.6%; P = .021). In multivariate models, there was no significant difference between overlapping and nonoverlapping surgeries for any patient outcomes, except for procedure duration, which was longer in overlapping surgery (estimate = 23.03; P < .001). When planned appropriately, overlapping surgery can be performed safely within the infrastructure at our academic institution. Copyright © 2017 by the Congress of Neurological Surgeons

  18. A stochastic particle method for the investigation of turbulence/chemistry interactions in large-eddy simulations of turbulent reacting flows

    NASA Astrophysics Data System (ADS)

    Ferrero, Pietro

    The main objective of this work is to investigate the effects of the coupling between the turbulent fluctuations and the highly non-linear chemical source terms in the context of large-eddy simulations of turbulent reacting flows. To this aim we implement the filtered mass density function (FMDF) methodology on an existing finite volume (FV) fluid dynamics solver. The FMDF provides additional statistical sub-grid scale (SGS) information about the thermochemical state of the flow - species mass fractions and enthalpy - which would not be available otherwise. The core of the methodology involves solving a transport equation for the FMDF by means of a stochastic, grid-free, Lagrangian particle procedure. Any moments of the distribution can be obtained by taking ensemble averages of the particles. The main advantage of this strategy is that the chemical source terms appear in closed form so that the effects of turbulent fluctuations on these terms are already accounted for and do not need to be modeled. We first validate and demonstrate the consistency of our implementation by comparing the results of the hybrid FV/FMDF procedure against model-free LES for temporally developing, non-reacting mixing layers. Consistency requires that, for non-reacting cases, the two solvers should yield identical solutions. We investigate the sensitivity of the FMDF solution on the most relevant numerical parameters, such as the number of particles per cell and the size of the ensemble domain. Next, we apply the FMDF modeling strategy to the simulation of chemically reacting, two- and three-dimensional temporally developing mixing layers and compare the results against both DNS and model-free LES. We clearly show that, when the turbulence/chemistry interaction is accounted for with the FMDF methodology, the results are in much better agreement to the DNS data. Finally, we perform two- and three-dimensional simulations of high Reynolds number, spatially developing, chemically reacting mixing layers, with the intent of reproducing a set of experimental results obtained at the California Institute of Technology. The mean temperature rise calculated by the hybrid FV/FMDF solver, which is associated with the amount of product formed, lies very close to the experimental profile. Conversely, when the effects of turbulence/chemistry coupling are ignored, the simulations clearly over predict the amount of product that is formed.

  19. Priming can affect naming colours using the study-test procedure. Revealing the role of task conflict.

    PubMed

    Sharma, Dinkar

    2016-11-14

    The Stroop paradigm has been widely used to study attention, whilst its use to explore implicit memory has produced mixed results. Using the non-colour word Stroop task, we tested contrasting predictions from the proactive-control/task-conflict model (Kalanthroff, Avnit, Henik, Davelaar & Usher, 2015) that implicate response conflict and task conflict for the priming effects. Using the study-test procedure, 60 native English speakers were tested to determine whether priming effects from words that had previously been studied would cause interference when presented in a colour naming task. The results replicate a finding by MacLeod (1996), who showed no differences between the response latencies to studied and unstudied words. However, this pattern was predominantly in the first half of the study, where it was also found that both studied and unstudied words in a mixed block were slower to respond to than a block of pure unstudied words. The second half of the study showed stronger priming interference effects as well as a sequential modulation effect in which studied words slowed down the responses to studied words on the next trial. We discuss the role of proactive and reactive control processes and conclude that task conflict best explains the pattern of priming effects reported. Copyright © 2016. Published by Elsevier B.V.

  20. Regression analysis of mixed recurrent-event and panel-count data.

    PubMed

    Zhu, Liang; Tong, Xinwei; Sun, Jianguo; Chen, Manhua; Srivastava, Deo Kumar; Leisenring, Wendy; Robison, Leslie L

    2014-07-01

    In event history studies concerning recurrent events, two types of data have been extensively discussed. One is recurrent-event data (Cook and Lawless, 2007. The Analysis of Recurrent Event Data. New York: Springer), and the other is panel-count data (Zhao and others, 2010. Nonparametric inference based on panel-count data. Test 20, 1-42). In the former case, all study subjects are monitored continuously; thus, complete information is available for the underlying recurrent-event processes of interest. In the latter case, study subjects are monitored periodically; thus, only incomplete information is available for the processes of interest. In reality, however, a third type of data could occur in which some study subjects are monitored continuously, but others are monitored periodically. When this occurs, we have mixed recurrent-event and panel-count data. This paper discusses regression analysis of such mixed data and presents two estimation procedures for the problem. One is a maximum likelihood estimation procedure, and the other is an estimating equation procedure. The asymptotic properties of both resulting estimators of regression parameters are established. Also, the methods are applied to a set of mixed recurrent-event and panel-count data that arose from a Childhood Cancer Survivor Study and motivated this investigation. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. CFD-based optimization in plastics extrusion

    NASA Astrophysics Data System (ADS)

    Eusterholz, Sebastian; Elgeti, Stefanie

    2018-05-01

    This paper presents novel ideas in the numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient, initial design. Thereby automatic optimization can be incorporated and the design process is advanced beyond the simulation-supported, but still experience-based, approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the product's quality based on the forward simulation. This paper covers two aspects: (1) it reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.
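
    The black-box optimization procedure outlined above can be sketched as a derivative-free loop over a low-dimensional geometry parameterization: each candidate parameter vector drives a forward flow simulation whose result is condensed into a scalar quality objective. The forward_simulation and objective functions below are hypothetical stand-ins for the CFD solve and quality measure described in the paper, and the synthetic response surface is purely illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def forward_simulation(params):
            """Hypothetical stand-in for the quasi-steady melt-flow solve on the
            mixing-element geometry defined by the low-order parameters."""
            # synthetic smooth response with an optimum at params = (0.3, 1.2, 0.7)
            return np.sum((params - np.array([0.3, 1.2, 0.7])) ** 2)

        def objective(params):
            """Scalar product-quality measure to be minimized (smaller = better mixing)."""
            return forward_simulation(params)

        x0 = np.array([0.5, 1.0, 0.5])                      # functional but inefficient initial design
        res = minimize(objective, x0, method="Nelder-Mead") # derivative-free, black-box search
        print(res.x, res.fun)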

  2. Lagrangian particle statistics of numerically simulated shear waves

    NASA Astrophysics Data System (ADS)

    Kirby, J.; Briganti, R.; Brocchini, M.; Chen, Q. J.

    2006-12-01

    The properties of numerical solutions of various circulation models (Boussinesq-type and wave-averaged NLSWE) have been investigated on the basis of the induced horizontal flow mixing, for the case of shear waves. The mixing properties of the flow have been investigated using particle statistics, following the approach of LaCasce (2001) and Piattella et al. (2006). Both an idealized barred beach bathymetry and a test case taken from SANDYDUCK '97 have been considered. Random seeding patterns of passive tracer particles are used. The flow exhibits features similar to those discussed in the literature. Differences are also evident due both to the physics (intense longshore shear shoreward of the bar) and the procedure used to obtain the statistics (lateral conditions limit the time/space window for the longshore flow). Within the Boussinesq framework, different formulations of Boussinesq-type equations have been used and the results compared (Wei et al., 1995; Chen et al., 2003; Chen et al., 2006). Analysis based on the Eulerian velocity fields suggests a close similarity between Wei et al. (1995) and Chen et al. (2006), while examination of particle displacements and implied mixing suggests a closer behaviour between Chen et al. (2003) and Chen et al. (2006). Two distinct stages of mixing are evident in all simulations: i) the first stage ends at t

  3. Study of Bs0 Mixing at the D-Zero Detector at Fermilab Using the Semileptonic Decay Bs0 → Ds-μ+νX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anzelc, Meghan

    2008-06-01

    Bs0 mixing studies provide a precision test of Charge-Parity violation in the Standard Model. A measurement of Δms constrains elements of the CKM quark rotation matrix [1], providing a probe of Standard Model Charge-Parity violation. This thesis describes a study of Bs0 mixing in the semileptonic decay Bs0 → Ds-μ+νX, where Ds- → φπ-, using data collected at the D-Zero detector at Fermi National Accelerator Laboratory in Batavia, Illinois. Approximately 2.8 fb-1 of data collected between April 2002 and August 2007 was used, covering the entirety of the Tevatron's Run IIa (April 2002 to March 2006) and part of Run IIb (March 2006 to August 2007). Taggers using both opposite-side and same-side information were used to obtain the flavor of the Bs0 meson at production. The charge of the muon in the decay Bs0 → Ds-μ+νX was used to determine the flavor of the Bs0 at decay. The Bd0 mixing frequency, Δmd, was measured to verify the analysis procedure. A log-likelihood calculation was performed, and a measurement of Δms was obtained. The final result was Δms = 18.86 ± 0.80 (stat.) ± 0.37 (sys.) with a significance of 2.6σ.

  4. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  5. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  6. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  7. 40 CFR 60.93 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 6 2011-07-01 2011-07-01 false Test methods and procedures. 60.93 Section 60.93 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) STANDARDS OF PERFORMANCE FOR NEW STATIONARY SOURCES Standards of Performance for Hot Mix Asphalt...

  8. Simulation of time-control procedures for terminal area flow management

    NASA Technical Reports Server (NTRS)

    Alcabin, M.; Erzberger, H.; Tobias, L.; Obrien, P. J.

    1985-01-01

    Simulations of a terminal area traffic-management system incorporating automated scheduling and time-control (four-dimensional) techniques, conducted at NASA Ames Research Center jointly with the Federal Aviation Administration, have shown that efficient procedures can be developed for handling a mix of 4D-equipped and conventionally equipped aircraft. A crucial role in this system is played by an ATC host computer algorithm, referred to as a speed advisory, that allows controllers to maintain accurate time schedules for the conventionally equipped aircraft in the traffic mix. Results are presented from the most recent simulations, in which two important special cases were investigated: first, the effects of a speed advisory on touchdown-time scheduling are examined when unequipped aircraft are constrained to follow fuel-optimized profiles in the near-terminal area; second, rescheduling procedures are developed to handle missed approaches of 4D-equipped aircraft. Various performance measures, including controller opinion, are used to evaluate the effectiveness of the procedures.

  9. Modelling of subsonic COIL with an arbitrary magnetic modulation

    NASA Astrophysics Data System (ADS)

    Beránek, Jaroslav; Rohlena, Karel

    2007-05-01

    The concept of a 1D subsonic COIL model with a mixing length was generalized to include the influence of a variable magnetic field on the stimulated emission cross-section. The equations describing the chemical kinetics were solved taking into account both the gas temperature and a simplified mixing model of oxygen and iodine molecules. With an external, time-variable magnetic field the model is no longer stationary. A transformation to the frame moving with the mixture reduces the partial differential equations to ordinary differential equations in time, with initial conditions given by the stationary flow at the moment the magnetic field is switched on, combined with the boundary conditions at the injector. The advantage of this procedure is the possibility of considering an arbitrary temporal dependence of the imposed magnetic field and of calculating directly the response of the laser output. The method was applied to model the experimental data measured with the subsonic version of the COIL device at the Institute of Physics, Prague, where the applied magnetic field had a saw-tooth dependence. We found that various quantities characterizing the laser performance, such as the power density distribution over the active-zone cross-section, may have a fairly complicated structure resulting from the combined effects of the delayed reaction to the magnetic switching and the flow velocity. This necessarily translates into a time-dependent spatial inhomogeneity of the output beam intensity profile.

  10. Heterotrophs are key contributors to nitrous oxide production in activated sludge under low C-to-N ratios during nitrification-Batch experiments and modeling.

    PubMed

    Domingo-Félez, Carlos; Pellicer-Nàcher, Carles; Petersen, Morten S; Jensen, Marlene M; Plósz, Benedek G; Smets, Barth F

    2017-01-01

    Nitrous oxide (N2O), a by-product of biological nitrogen removal during wastewater treatment, is produced by ammonia-oxidizing bacteria (AOB) and heterotrophic denitrifying bacteria (HB). Mathematical models are used to predict N2O emissions, often including AOB as the main N2O producer. Several model structures have been proposed without consensus calibration procedures. Here, we present a new experimental design that was used to calibrate AOB-driven N2O dynamics of a mixed culture. Even though AOB activity was favoured with respect to HB, oxygen uptake rates indicated HB activity. Hence, rigorous experimental design for calibration of autotrophic N2O production from mixed cultures is essential. The proposed N2O production pathways were examined using five alternative process models confronted with the experimental data. Individually, either the autotrophic or the heterotrophic denitrification pathway could describe the observed data. In the best-fit model, which combined the two denitrification pathways, the heterotrophic contribution to N2O production was stronger than the autotrophic one. Importantly, the individual contributions of the autotrophic and heterotrophic pathways to the total N2O pool could not be unambiguously elucidated solely based on bulk N2O measurements. Data on NO would increase the practical identifiability of N2O production pathways. Biotechnol. Bioeng. 2017;114: 132-140. © 2016 Wiley Periodicals, Inc.

  11. Modular Engine Noise Component Prediction System (MCP) Technical Description and Assessment Document

    NASA Technical Reports Server (NTRS)

    Herkes, William H.; Reed, David H.

    2005-01-01

    This report describes an empirical prediction procedure for turbofan engine noise. The procedure generates predicted noise levels for several noise components, including inlet- and aft-radiated fan noise, and jet-mixing noise. This report discusses the noise source mechanisms, the development of the prediction procedures, and the assessment of the accuracy of these predictions. Finally, some recommendations for future work are presented.

  12. Markov chain sampling of the O(n) loop models on the infinite plane

    NASA Astrophysics Data System (ADS)

    Herdeiro, Victor

    2017-07-01

    A numerical method was recently proposed in Herdeiro and Doyon [Phys. Rev. E 94, 043322 (2016), 10.1103/PhysRevE.94.043322] showing a precise sampling of the infinite plane two-dimensional critical Ising model for finite lattice subsections. The present note extends the method to a larger class of models, namely the O(n) loop gas models for n ∈ (1, 2]. We argue that even though the Gibbs measure is nonlocal, it is factorizable on finite subsections when sufficient information on the loops touching the boundaries is stored. Our results attempt to show that, provided an efficient Markov chain mixing algorithm and an improved discrete lattice dilation procedure, the planar limit of the O(n) models can be numerically studied with efficiency similar to the Ising case. This confirms that scale invariance is the only requirement for the present numerical method to work.

  13. Medical and biohazardous waste generator's guide: Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1994-09-01

    This Guide describes the procedures required to comply with all federal and state laws and regulations and Lawrence Berkeley Laboratory (LBL) policy applicable to medical and biohazardous waste. The members of the LBL Biological Safety Subcommittee participated in writing these policies and procedures. The procedures and policies in this Guide apply to LBL personnel who work with infectious agents or potentially infectious agents, publicly perceived infectious items or materials (e.g., medical gloves, culture dishes), and sharps (e.g., needles, syringes, razor blades). If medical or biohazardous waste is contaminated or mixed with a hazardous chemical or material, with a radioactive material, or with both, the waste will be handled in accordance with the applicable federal and State of California laws and regulations for hazardous, radioactive, or mixed waste.

  14. Development of partial life-cycle experiments to assess the effects of endocrine disruptors on the freshwater gastropod Lymnaea stagnalis: a case-study with vinclozolin.

    PubMed

    Ducrot, Virginie; Teixeira-Alves, Mickaël; Lopes, Christelle; Delignette-Muller, Marie-Laure; Charles, Sandrine; Lagadic, Laurent

    2010-10-01

    Long-term effects of endocrine disruptors (EDs) on aquatic invertebrates remain difficult to assess, mainly due to the lack of appropriate sensitive toxicity test methods and relevant data analysis procedures. This study aimed at identifying windows of sensitivity to EDs along the life-cycle of the freshwater snail Lymnaea stagnalis, a candidate species for the development of forthcoming test guidelines. Juveniles, sub-adults, young adults and adults were exposed for 21 days to the fungicide vinclozolin (VZ). Survival, growth, onset of reproduction, fertility and fecundity were monitored weekly. Data were analyzed using standard statistical analysis procedures and mixed-effect models. No deleterious effect on survival and growth occurred in snails exposed to VZ at environmentally relevant concentrations. A significant impairment of the male function occurred in young adults, leading to infertility at concentrations exceeding 0.025 μg/L. Furthermore, fecundity was impaired in adults exposed to concentrations exceeding 25 μg/L. Biological responses depended on VZ concentration, exposure duration and on their interaction, leading to complex response patterns. The use of a standard statistical approach to analyze those data led to underestimation of VZ effects on reproduction, whereas effects could reliably be analyzed by mixed-effect models. L. stagnalis may be among the most sensitive invertebrate species to VZ, a 21-day reproduction test allowing the detection of deleterious effects at environmentally relevant concentrations of the fungicide. These results thus reinforce the relevance of L. stagnalis as a good candidate species for the development of guidelines devoted to the risk assessment of EDs.

  15. Evaluation of the Efficacy of Different Mixing Techniques and Disinfection on Microbial Colonization of Polyether Impression Materials: A Comparative Study.

    PubMed

    Singla, Youginder; Pachar, Renu B; Poriya, Sangeeta; Mishra, Aalok; Sharma, Rajni; Garg, Anshu

    2018-03-01

    This study aims to determine the role of mixing techniques for polyether impression materials and the efficacy of disinfection on microbial colonization of these impression materials. Polyether impression material was mixed using two methods: first by hand mixing (group I) and second using an automixer (group II), with a total of 100 samples. Four microbial strains were studied: Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, and Candida albicans. After incubation, the bacterial colonies were counted, and a disinfectant solution was then applied. The effect of the disinfectant solution was evaluated for each specimen. The surface of polyether impression material mixed with an automixer had fewer voids and was overall smoother compared with the hand-mixed specimens. On comparing the disinfection procedures, i.e., specimens without any disinfection and specimens after disinfection, a statistically highly significant difference was seen between all the groups. We conclude that impression mixing procedures are important in determining the surface characteristics of the impression and ultimately the colonization of bacteria, and that disinfection plays an important role in limiting microbial colonization. This study emphasises the deleterious role of nosocomial infections and the specific measures that should be taken to prevent such diseases. Dental impressions have been shown to be a source of such infections and may lead to disease transmission. Thus, proper measures should be taken from the first step of impression making to minimize and prevent such contamination in clinical practice.

  16. Large eddy simulation model for wind-driven sea circulation in coastal areas

    NASA Astrophysics Data System (ADS)

    Petronio, A.; Roman, F.; Nasello, C.; Armenio, V.

    2013-12-01

    In the present paper a state-of-the-art large eddy simulation model (LES-COAST), suited for the analysis of water circulation and mixing in closed or semi-closed areas, is presented and applied to the study of the hydrodynamic characteristics of the Muggia bay, the industrial harbor of the city of Trieste, Italy. The model solves the non-hydrostatic, unsteady Navier-Stokes equations, under the Boussinesq approximation for temperature and salinity buoyancy effects, using a novel, two-eddy viscosity Smagorinsky model for the closure of the subgrid-scale momentum fluxes. The model employs: a simple and effective technique to take into account wind-stress inhomogeneity related to the blocking effect of emerged structures, which, in turn, can drive local-scale, short-term pollutant dispersion; a new nesting procedure to reconstruct instantaneous, turbulent velocity components, temperature and salinity at the open boundaries of the domain using data coming from large-scale circulation models (LCM). Validation tests have shown that the model reproduces field measurement satisfactorily. The analysis of water circulation and mixing in the Muggia bay has been carried out under three typical breeze conditions. Water circulation has been shown to behave as in typical semi-closed basins, with an upper layer moving along the wind direction (apart from the anti-cyclonic veering associated with the Coriolis force) and a bottom layer, thicker and slower than the upper one, moving along the opposite direction. The study has shown that water vertical mixing in the bay is inhibited by a large level of stable stratification, mainly associated with vertical variation in salinity and, to a minor extent, with temperature variation along the water column. More intense mixing, quantified by sub-critical values of the gradient Richardson number, is present in near-coastal regions where upwelling/downwelling phenomena occur. The analysis of instantaneous fields has detected the presence of large cross-sectional eddies spanning the whole water column and contributing to vertical mixing, associated with the presence of sub-surface horizontal turbulent structures. Analysis of water renewal within the bay shows that, under the typical breeze regimes considered in the study, the residence time of water in the bay is of the order of a few days. Finally, vertical eddy viscosity has been calculated and shown to vary by a couple of orders of magnitude along the water column, with larger values near the bottom surface where density stratification is smaller.
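
    As a small illustration of the stratification diagnostic mentioned above, the gradient Richardson number can be computed from vertical profiles of velocity and density. The sketch below is generic (not the LES-COAST implementation) and assumes profiles sampled on a common vertical grid; the toy profile values are placeholders.

    ```python
    import numpy as np

    G = 9.81        # gravitational acceleration (m/s^2)
    RHO0 = 1025.0   # reference sea-water density (kg/m^3)

    def gradient_richardson(z, u, v, rho):
        """Gradient Richardson number Ri = N^2 / S^2 on a vertical profile.

        z increases upward; u, v are horizontal velocity components and rho is
        density, all sampled at the same z levels.
        """
        n2 = -(G / RHO0) * np.gradient(rho, z)            # squared buoyancy frequency
        s2 = np.gradient(u, z)**2 + np.gradient(v, z)**2  # squared vertical shear
        return n2 / np.where(s2 > 0, s2, np.nan)          # avoid division by zero

    # toy profile: stable stratification with shear concentrated near the surface
    z = np.linspace(-20.0, 0.0, 41)
    u = 0.1 * np.exp(z / 5.0)
    v = np.zeros_like(z)
    rho = 1026.0 - 0.05 * z          # density decreases upward => stable column
    print(gradient_richardson(z, u, v, rho)[-5:])
    ```

    Sub-critical values (commonly Ri below about 0.25) would flag the near-surface shear layer as prone to mixing, which is the sense in which the abstract uses the diagnostic.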

  17. Productivity growth in outpatient child and adolescent mental health services: the impact of case-mix adjustment.

    PubMed

    Halsteinli, Vidar; Kittelsen, Sverre A; Magnussen, Jon

    2010-02-01

    The performance of health service providers may be monitored by measuring productivity. However, the policy value of such measures may depend crucially on the accuracy of input and output measures. In particular, an important question is how to adjust adequately for case-mix in the production of health care. In this study, we assess productivity growth in Norwegian outpatient child and adolescent mental health service units (CAMHS) over a period characterized by governmental utilization of simple productivity indices, a substantial increase in capacity and a concurrent change in case-mix. We analyze the sensitivity of the productivity growth estimates using different specifications of output to adjust for case-mix differences. Case-mix adjustment is achieved by distributing patients into eight groups depending on reason for referral, age and gender, as well as correcting for the number of consultations. We utilize the nonparametric Data Envelopment Analysis (DEA) method to implicitly calculate weights that maximize each unit's efficiency. Malmquist indices of technical productivity growth are estimated and bootstrap procedures are performed to calculate confidence intervals and to test alternative specifications of outputs. The dataset consists of an unbalanced panel of 48-60 CAMHS in the period 1998-2006. The mean productivity growth estimate from a simple unadjusted patient model (one single output) is 35%; adjusting for case-mix (eight outputs) reduces the growth estimate to 15%. Adding consultations increases the estimate to 28%. The latter reflects an increase in the number of consultations per patient. We find that the governmental productivity indices strongly tend to overestimate productivity growth. Case-mix adjustment is of major importance, and governmental utilization of performance indicators necessitates careful consideration of output specifications. Copyright 2009 Elsevier Ltd. All rights reserved.

  18. An Assessment of Three Procedures to Teach Echoic Responding

    ERIC Educational Resources Information Center

    Cividini-Motta, Catia; Scharrer, Nicole; Ahearn, William H.

    2017-01-01

    The research literature has revealed mixed outcomes on various procedures for increasing vocalizations and echoic responding in persons with disabilities (Miguel, Carr, & Michael "The Analysis of Verbal Behavior," 18, 3-13, 2002; Stock, Schulze, & Mirenda "The Analysis of Verbal Behavior," 24, 123-133, 2008). We…

  19. Spatial scan statistics for detection of multiple clusters with arbitrary shapes.

    PubMed

    Lin, Pei-Sheng; Kung, Yi-Hung; Clayton, Murray

    2016-12-01

    In applying scan statistics for public health research, it would be valuable to develop a detection method for multiple clusters that accommodates spatial correlation and covariate effects in an integrated model. In this article, we connect the concepts of the likelihood ratio (LR) scan statistic and the quasi-likelihood (QL) scan statistic to provide a series of detection procedures sufficiently flexible to apply to clusters of arbitrary shape. First, we use an independent scan model for detection of clusters and then a variogram tool to examine the existence of spatial correlation and regional variation based on residuals of the independent scan model. When the estimate of regional variation is significantly different from zero, a mixed QL estimating equation is developed to estimate coefficients of geographic clusters and covariates. We use the Benjamini-Hochberg procedure (1995) to find a threshold for p-values to address the multiple testing problem. A quasi-deviance criterion is used to regroup the estimated clusters to find geographic clusters with arbitrary shapes. We conduct simulations to compare the performance of the proposed method with other scan statistics. For illustration, the method is applied to enterovirus data from Taiwan. © 2016, The International Biometric Society.
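
    The Benjamini-Hochberg step referenced above is straightforward to implement. The sketch below is a generic version of that procedure (not the authors' code) for thresholding a vector of cluster p-values at a chosen false discovery rate; the example p-values are placeholders.

    ```python
    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of hypotheses rejected at FDR level q."""
        p = np.asarray(pvals, dtype=float)
        m = p.size
        order = np.argsort(p)
        ranked = p[order]
        # largest k such that p_(k) <= (k / m) * q
        thresholds = (np.arange(1, m + 1) / m) * q
        below = ranked <= thresholds
        reject = np.zeros(m, dtype=bool)
        if below.any():
            k = np.max(np.nonzero(below)[0])
            reject[order[:k + 1]] = True
        return reject

    # toy p-values standing in for candidate-cluster tests
    print(benjamini_hochberg([0.001, 0.008, 0.04, 0.20, 0.51], q=0.05))
    ```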

  20. Statistical inference methods for sparse biological time series data.

    PubMed

    Ndukum, Juliet; Fonseca, Luís L; Santos, Helena; Voit, Eberhard O; Datta, Susmita

    2011-04-25

    Comparing metabolic profiles under different biological perturbations has become a powerful approach to investigating the functioning of cells. The profiles can be taken as single snapshots of a system, but more information is gained if they are measured longitudinally over time. The results are short time series consisting of relatively sparse data that cannot be analyzed effectively with standard time series techniques, such as autocorrelation and frequency domain methods. In this work, we study longitudinal time series profiles of glucose consumption in the yeast Saccharomyces cerevisiae under different temperatures and preconditioning regimens, which we obtained with methods of in vivo nuclear magnetic resonance (NMR) spectroscopy. For the statistical analysis we first fit several nonlinear mixed effect regression models to the longitudinal profiles and then used an ANOVA likelihood ratio method in order to test for significant differences between the profiles. The proposed methods are capable of distinguishing metabolic time trends resulting from different treatments and associate significance levels to these differences. Among several nonlinear mixed-effects regression models tested, a three-parameter logistic function represents the data with highest accuracy. ANOVA and likelihood ratio tests suggest that there are significant differences between the glucose consumption rate profiles for cells that had been--or had not been--preconditioned by heat during growth. Furthermore, pair-wise t-tests reveal significant differences in the longitudinal profiles for glucose consumption rates between optimal conditions and heat stress, optimal and recovery conditions, and heat stress and recovery conditions (p-values <0.0001). We have developed a nonlinear mixed effects model that is appropriate for the analysis of sparse metabolic and physiological time profiles. The model permits sound statistical inference procedures, based on ANOVA likelihood ratio tests, for testing the significance of differences between short time course data under different biological perturbations.
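
    For illustration, a three-parameter logistic trend like the one found to represent the glucose profiles can be fitted to a single synthetic time course with nonlinear least squares. This is a minimal sketch, not the authors' mixed-effects model (which pools subjects with random effects), and the parameter names and data are generic placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic3(t, a, b, t0):
        """Three-parameter logistic: asymptote a, rate b, inflection time t0."""
        return a / (1.0 + np.exp(-b * (t - t0)))

    # synthetic sparse time course standing in for one NMR glucose-consumption profile
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 30.0, 12)                        # sparse sampling times
    y = logistic3(t, 10.0, 0.4, 12.0) + rng.normal(0.0, 0.3, t.size)

    params, cov = curve_fit(logistic3, t, y, p0=[8.0, 0.3, 10.0])
    print("fitted (a, b, t0):", np.round(params, 3))
    ```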

  1. The effects of the sequential addition of synthesis parameters on the performance of alkali activated fly ash mortar

    NASA Astrophysics Data System (ADS)

    Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian

    Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As it is a new material, a specific mix design method is essential, and efforts have been made to develop a mix design procedure focused on achieving better compressive strength and economy. In this paper, the sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of four mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar, and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3) and plasticizer (PL), followed by adding water (WA), considerably increases the compressive strength of the geopolymer-based mortar. These results clearly demonstrate the highly significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.

  2. Synergy and sustainability in rural procedural medicine: views from the coalface.

    PubMed

    Swayne, Andrew; Eley, Diann S

    2010-02-01

    The practice of rural and remote medicine in Australia entails many challenges, including a broad casemix and the remoteness of specialist support. Many rural practitioners employ advanced procedural skills in anaesthetics, surgery, obstetrics and emergency medicine, but the use of these skills has been declining over the last 20 years. This study explored the perceptions of rural general practitioners (GPs) on the current and future situation of procedural medicine. The qualitative results of data from a mixed-method design are reported. Free-response survey comments and semistructured interview transcripts were analysed by a framework analysis for major themes. General practices in rural and remote Queensland. Rural GPs in Rural and Remote Metropolitan Classification 4-7 areas of Queensland. The perceptions of rural GPs on the current and future situation of rural procedural medicine. Major concerns from the survey focused on closure of facilities and downgrading of services, cost and time to keep up skills, increasing litigation issues and changing attitudes of the public. Interviews designed to draw out solutions to help rectify the perceived circumstances highlighted two major themes: 'synergy' between the support from medical teams and community in ensuring 'sustainability' of services. This article presents a model of rural procedural practice where synergy between staff, resources and support networks represents the optimal way to deliver a non-metropolitan procedural service. The findings serve to remind educators and policy-makers that future planning for sustainability of rural procedural services must be broad-based and comprehensive.

  3. State space approach to mixed boundary value problems.

    NASA Technical Reports Server (NTRS)

    Chen, C. F.; Chen, M. M.

    1973-01-01

    A state-space procedure for the formulation and solution of mixed boundary value problems is established. This procedure is a natural extension of the method used in initial value problems; however, certain special theorems and rules must be developed. The scope of the applications of the approach includes beam, arch, and axisymmetric shell problems in structural analysis, boundary layer problems in fluid mechanics, and eigenvalue problems for deformable bodies. Many classical methods in these fields developed by Holzer, Prohl, Myklestad, Thomson, Love-Meissner, and others can be either simplified or unified under new light shed by the state-variable approach. A beam problem is included as an illustration.

  4. The mixed-mode bending method for delamination testing

    NASA Technical Reports Server (NTRS)

    Reeder, James R.; Crews, John H., Jr.

    1989-01-01

    A mixed-mode bending (MMB) test procedure is presented which combines double cantilever beam mode-I loading and end-notch flexure mode II loading on a split, unidirectional laminate. The MMB test has been analyzed by FEM and by beam theory in order to ascertain the mode I and mode II components' respective strain energy release rates, G(I) and G(II); these analyses indicate that a wide range of G(I)/G(II) ratios can be generated by varying the applied load's position on the loading lever. The MMB specimen analysis and test procedures are demonstrated for the case of AS4/PEEK unidirectional laminates.

  5. MixSIAR: advanced stable isotope mixing models in R

    EPA Science Inventory

    Background/Question/Methods The development of stable isotope mixing models has coincided with modeling products (e.g. IsoSource, MixSIR, SIAR), where methodological advances are published in parity with software packages. However, while mixing model theory has recently been ex...

  6. Including mixed methods research in systematic reviews: Examples from qualitative syntheses in TB and malaria control

    PubMed Central

    2012-01-01

    Background Health policy makers now have access to a greater number and variety of systematic reviews to inform different stages in the policy making process, including reviews of qualitative research. The inclusion of mixed methods studies in systematic reviews is increasing, but these studies pose particular challenges to methods of review. This article examines the quality of the reporting of mixed methods and qualitative-only studies. Methods We used two completed systematic reviews to generate a sample of qualitative studies and mixed method studies in order to make an assessment of how the quality of reporting and rigor of qualitative-only studies compares with that of mixed-methods studies. Results Overall, the reporting of qualitative studies in our sample was consistently better when compared with the reporting of mixed methods studies. We found that mixed methods studies are less likely to provide a description of the research conduct or qualitative data analysis procedures and less likely to be judged credible or provide rich data and thick description compared with standalone qualitative studies. Our time-related analysis shows that for both types of study, papers published since 2003 are more likely to report on the study context, describe analysis procedures, and be judged credible and provide rich data. However, the reporting of other aspects of research conduct (i.e. descriptions of the research question, the sampling strategy, and data collection methods) in mixed methods studies does not appear to have improved over time. Conclusions Mixed methods research makes an important contribution to health research in general, and could make a more substantial contribution to systematic reviews. Through our careful analysis of the quality of reporting of mixed methods and qualitative-only research, we have identified areas that deserve more attention in the conduct and reporting of mixed methods research. PMID:22545681

  7. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual-energy X-ray luggage inspection systems used for security at airports and strategic places. In this paper, a novel material detection algorithm based on trainable statistical models, using the 2-dimensional power density function (PDF) of three material categories in dual-energy X-ray images, is proposed. In this algorithm, the PDF of each material category is estimated as a statistical model from the transmission measurement values of the low- and high-energy X-ray images by Gaussian Mixture Models (GMM). The material label of each pixel of an object is determined from the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum-voting scheme over an image neighborhood as a post-processing stage. As a pre-processing procedure, the high- and low-energy X-ray images are enhanced using two background-removal and denoising stages. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to the constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual-energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
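
    A minimal sketch of the per-class density idea (not the published algorithm) is shown below: one Gaussian mixture is fitted to the (low-energy, high-energy) transmission pairs of each material category, and a pixel is labeled with the category whose model gives it the highest log-density. The training arrays here are random placeholders for real labelled measurements, and the component count is an assumption.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    categories = ["organic", "metallic", "mixed"]

    # placeholder training data: rows are (low-energy, high-energy) transmission values
    train = {
        "organic":  rng.normal([0.70, 0.80], 0.05, size=(500, 2)),
        "metallic": rng.normal([0.20, 0.50], 0.05, size=(500, 2)),
        "mixed":    rng.normal([0.45, 0.65], 0.05, size=(500, 2)),
    }

    # one mixture model per material category, acting as its 2-D density estimate
    models = {c: GaussianMixture(n_components=3, covariance_type="full",
                                 random_state=0).fit(X) for c, X in train.items()}

    def classify(pixels):
        """Label each (low, high) pixel with the category of highest log-density."""
        scores = np.column_stack([models[c].score_samples(pixels) for c in categories])
        return np.asarray(categories)[np.argmax(scores, axis=1)]

    print(classify(np.array([[0.68, 0.79], [0.21, 0.52]])))
    ```

    A neighborhood majority vote over these labels, as described in the abstract, could then smooth isolated misclassifications.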

  8. In silico mapping of quantitative trait loci in maize.

    PubMed

    Parisseaux, B; Bernardo, R

    2004-08-01

    Quantitative trait loci (QTL) are most often detected through designed mapping experiments. An alternative approach is in silico mapping, whereby genes are detected using existing phenotypic and genomic databases. We explored the usefulness of in silico mapping via a mixed-model approach in maize (Zea mays L.). Specifically, our objective was to determine if the procedure gave results that were repeatable across populations. Multilocation data were obtained from the 1995-2002 hybrid testing program of Limagrain Genetics in Europe. Nine heterotic patterns comprised 22,774 single crosses. These single crosses were made from 1,266 inbreds that had data for 96 simple sequence repeat (SSR) markers. By a mixed-model approach, we estimated the general combining ability effects associated with marker alleles in each heterotic pattern. The numbers of marker loci with significant effects--37 for plant height, 24 for smut [Ustilago maydis (DC.) Cda.] resistance, and 44 for grain moisture--were consistent with previous results from designed mapping experiments. Each trait had many loci with small effects and few loci with large effects. For smut resistance, a marker in bin 8.05 on chromosome 8 had a significant effect in seven (out of a maximum of 18) instances. For this major QTL, the maximum effect of an allele substitution ranged from 5.4% to 41.9%, with an average of 22.0%. We conclude that in silico mapping via a mixed-model approach can detect associations that are repeatable across different populations. We speculate that in silico mapping will be more useful for gene discovery than for selection in plant breeding programs. Copyright 2004 Springer-Verlag

  9. Masonry Procedures. Building Maintenance. Module V. Instructor's Guide.

    ERIC Educational Resources Information Center

    Eck, Francis

    This curriculum guide, one of six modules keyed to the building maintenance competency profile developed by industry and education professionals, provides materials for a masonry procedures unit containing eight lessons. Lesson topics are masonry safety practices; set forms; mix concrete; patch and/or repair concrete; pour and finish concrete; mix…

  10. Effects of vigorous mixing of blood vacuum tubes on laboratory test results.

    PubMed

    Lima-Oliveira, Gabriel; Lippi, Giuseppe; Salvagno, Gian Luca; Montagnana, Martina; Gelati, Matteo; Volanski, Waldemar; Boritiza, Katia Cristina; Picheth, Geraldo; Guidi, Gian Cesare

    2013-02-01

    The aim was to evaluate the effect of tube mixing (gentle vs. vigorous) on diagnostic blood specimens collected in vacuum tube systems by venipuncture. Blood was collected for routine coagulation, immunochemistry and hematological testing from one hundred volunteers into six vacuum tubes: two 3.6 mL vacuum tubes containing 0.4 mL of buffered sodium citrate (9NC) 0.109 mol/L, 3.2 W/V%; two 3.5 mL vacuum tubes with clot activator and gel separator; and two 3.0 mL vacuum tubes containing 5.9 mg K(2)EDTA (Terumo Europe, Belgium). Immediately after venipuncture, all vacuum tubes (each of one additive type) were processed through two different procedures: i) Standard: blood specimens in K(2)EDTA or sodium citrate vacuum tubes were gently inverted five times, whereas the specimens in tubes with clot activator and gel separator were gently inverted ten times, as recommended by the manufacturer; ii) Vigorous mix: all blood specimens were shaken vigorously for 3-5 s regardless of the additive type inside the tubes. The significance of the differences between samples was assessed by Student's t-test or the Wilcoxon ranked-pairs test after checking for normality. The level of statistical significance was set at P<0.05. No significant difference was detected between the procedures for any of the tested parameters. Surprisingly, the only alteration shown by the tubes mixed vigorously before centrifugation was visual (presence of foam on the top) (Fig. 1 A, B and C). Moreover, the serum tubes from the vigorous mixing procedure showed a "blood ring" on the tube top after stopper removal (Fig. 1 D). Our results challenge the paradigm that incorrect mixing of primary blood tubes promotes laboratory variability. We suggest that a similar evaluation should be performed by each laboratory manager using other brands of vacuum tubes. Copyright © 2012 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
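
    The statistical comparison described above (paired t-test or Wilcoxon ranked-pairs test, chosen after a normality check) can be sketched generically as below; the data are synthetic stand-ins, not the study's measurements, and the normality check via a Shapiro-Wilk test is an assumed choice.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    gentle = rng.normal(140.0, 5.0, size=100)            # analyte results, gentle inversion
    vigorous = gentle + rng.normal(0.0, 1.0, size=100)   # paired results, vigorous shaking

    diff = vigorous - gentle
    # pick the paired test according to the normality of the paired differences
    if stats.shapiro(diff).pvalue > 0.05:
        test_name, result = "paired t-test", stats.ttest_rel(vigorous, gentle)
    else:
        test_name, result = "Wilcoxon signed-rank", stats.wilcoxon(vigorous, gentle)

    print(f"{test_name}: p = {result.pvalue:.3f} (significance threshold P < 0.05)")
    ```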

  11. Modeling of correlated data with informative cluster sizes: An evaluation of joint modeling and within-cluster resampling approaches.

    PubMed

    Zhang, Bo; Liu, Wei; Zhang, Zhiwei; Qu, Yanping; Chen, Zhen; Albert, Paul S

    2017-08-01

    Joint modeling and within-cluster resampling are two approaches that are used for analyzing correlated data with informative cluster sizes. Motivated by a developmental toxicity study, we examined the performances and validity of these two approaches in testing covariate effects in generalized linear mixed-effects models. We show that the joint modeling approach is robust to the misspecification of cluster size models in terms of Type I and Type II errors when the corresponding covariates are not included in the random effects structure; otherwise, statistical tests may be affected. We also evaluate the performance of the within-cluster resampling procedure and thoroughly investigate the validity of it in modeling correlated data with informative cluster sizes. We show that within-cluster resampling is a valid alternative to joint modeling for cluster-specific covariates, but it is invalid for time-dependent covariates. The two methods are applied to a developmental toxicity study that investigated the effect of exposure to diethylene glycol dimethyl ether.

  12. Computer program for solving laminar, transitional, or turbulent compressible boundary-layer equations for two-dimensional and axisymmetric flow

    NASA Technical Reports Server (NTRS)

    Harris, J. E.; Blanchard, D. K.

    1982-01-01

    A numerical algorithm and computer program are presented for solving the laminar, transitional, or turbulent two dimensional or axisymmetric compressible boundary-layer equations for perfect-gas flows. The governing equations are solved by an iterative three-point implicit finite-difference procedure. The software, program VGBLP, is a modification of the approach presented in NASA TR R-368 and NASA TM X-2458, respectively. The major modifications are: (1) replacement of the fourth-order Runge-Kutta integration technique with a finite-difference procedure for numerically solving the equations required to initiate the parabolic marching procedure; (2) introduction of the Blottner variable-grid scheme; (3) implementation of an iteration scheme allowing the coupled system of equations to be converged to a specified accuracy level; and (4) inclusion of an iteration scheme for variable-entropy calculations. These modifications to the approach presented in NASA TR R-368 and NASA TM X-2458 yield a software package with high computational efficiency and flexibility. Turbulence-closure options include either two-layer eddy-viscosity or mixing-length models. Eddy conductivity is modeled as a function of eddy viscosity through a static turbulent Prandtl number formulation. Several options are provided for specifying the static turbulent Prandtl number. The transitional boundary layer is treated through a streamwise intermittency function which modifies the turbulence-closure model. This model is based on the probability distribution of turbulent spots and ranges from zero to unity for laminar and turbulent flow, respectively. Several test cases are presented as guides for potential users of the software.

  13. On the Development of an Efficient Parallel Hybrid Solver with Application to Acoustically Treated Aero-Engine Nacelles

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj

    2006-01-01

    A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.

  14. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy-satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure of primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristics domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  15. Analysis of free turbulent shear flows by numerical methods

    NASA Technical Reports Server (NTRS)

    Korst, H. H.; Chow, W. L.; Hurt, R. F.; White, R. A.; Addy, A. L.

    1973-01-01

    Studies are described in which the effort was essentially directed to classes of problems where the phenomenologically interpreted effective transport coefficients could be absorbed by, and subsequently extracted from (by comparison with experimental data), appropriate coordinate transformations. The transformed system of differential equations could then be solved without further specifications or assumptions by numerical integration procedures. An attempt was made to delineate different regimes for which specific eddy viscosity models could be formulated. In particular, this would account for the carryover of turbulence from attached boundary layers, the transitory adjustment, and the asymptotic behavior of initially disturbed mixing regions. Such models were subsequently used in seeking solutions for the prescribed two-dimensional test cases, yielding a better insight into overall aspects of the exchange mechanisms.

  16. A laboratory investigation of mixing dynamics between biofuels and surface waters

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxiang; Cotel, Aline

    2017-11-01

    Recently, production and usage of ethanol-blend fuels or biofuels have increased dramatically along with increasing risk of spilling into surface waters. Lack of understanding of the environmental impacts and absence of standard clean-up procedures make it crucial to study the mixing behavior between biofuels and water. Biofuels are represented by a solution of ethanol and glycol. A Plexiglas tank in conjunction with a wave generator is used to simulate the mixing of surface waters and biofuels under different natural conditions. In our previous experiments, two distinct mixing regimes were observed. One regime was driven by turbulence and the other by interfacial instabilities. However, under more realistic situations, without wind driven waves, only the first mixing regime was found. After one minute of rapid turbulent mixing, biofuels and water were fully mixed and no interface was formed. During the mixing process, chemical reactions happened simultaneously and influenced mixing dynamics. Current experiments are investigating the effect of waves on the mixing dynamics. Support from NSF CBET 1335878.

  17. Minnesota dentists׳ attitudes toward the dental therapist workforce model.

    PubMed

    Blue, Christine M; Rockwood, Todd; Riggs, Sheila

    2015-06-01

    The purpose of this study was to evaluate dentists' attitudes and perceptions toward dental therapists, a new licensed dental provider in Minnesota. This study employed mixed modes to administer a survey using a stratified random sample of 1000 dentists in Minnesota. The response rate was 55% (AAPOR RR1: n=551/999). Results showed a majority of dentists were opposed to dental therapists performing irreversible procedures. In addition, results identified perceived barriers to hiring a dental therapist and found dentists do not believe dental therapists will alleviate oral health disparity in the State. Published by Elsevier Inc.

  18. A k-permutation algorithm for Fixed Satellite Service orbital allotments

    NASA Technical Reports Server (NTRS)

    Reilly, Charles H.; Mount-Campbell, Clark A.; Gonsalvez, David J. A.

    1988-01-01

    A satellite system synthesis problem, the satellite location problem (SLP), is addressed in this paper. In SLP, orbital locations (longitudes) are allotted to geostationary satellites in the Fixed Satellite Service. A linear mixed-integer programming model is presented that views SLP as a combination of two problems: (1) the problem of ordering the satellites and (2) the problem of locating the satellites given some ordering. A special-purpose heuristic procedure, a k-permutation algorithm, that has been developed to find solutions to SLPs formulated in the manner suggested is described. Solutions to small example problems are presented and analyzed.

  19. Advantages from Mixed Storage of Ammunition

    DTIC Science & Technology

    1983-07-01

    A^dN^MM ADy^->6y 7<^ TECHNICAL REPORT ARBRL-TR-02506 ADVANTAGES FROM MIXED STORAGE OF AMMUNITION Ona R. Lyman July 1983 US ARMY ARMAMENT...that "weigh out." Benefits can be derived from mixing an ammunition that "weighs out" with ammunition that "cubes out." In principle, it is...accessible. Below is listed a step-by-step procedure for determining benefits to be derived. Step 1. Select a munition that "weighs out" and note if more

  20. A unified procedure for meta-analytic evaluation of surrogate end points in randomized clinical trials

    PubMed Central

    Dai, James Y.; Hughes, James P.

    2012-01-01

    The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
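
    The final step of the three-step procedure can be illustrated with a generic parametric bootstrap for a trial-level association metric. The trial-level effect estimates below are simulated placeholders, and the metric (squared correlation of treatment effects) is one common surrogacy measure rather than necessarily the one used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical estimated treatment effects per trial:
    # column 0 = effect on the surrogate, column 1 = effect on the clinical end point
    effects = rng.multivariate_normal([0.5, 0.3], [[0.04, 0.03], [0.03, 0.05]], size=20)

    def trial_level_r2(e):
        """Squared correlation between surrogate and clinical treatment effects."""
        return np.corrcoef(e[:, 0], e[:, 1])[0, 1] ** 2

    # fit a simple parametric model (multivariate normal) to the estimated effects
    mu, cov = effects.mean(axis=0), np.cov(effects, rowvar=False)

    # parametric bootstrap: resample trials from the fitted model, recompute the metric
    boot = np.array([trial_level_r2(rng.multivariate_normal(mu, cov, size=len(effects)))
                     for _ in range(2000)])
    print("R^2 estimate:", round(trial_level_r2(effects), 3),
          " 95% CI:", np.round(np.percentile(boot, [2.5, 97.5]), 3))
    ```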

  1. Fuzzy Classification of Ocean Color Satellite Data for Bio-optical Algorithm Constituent Retrievals

    NASA Technical Reports Server (NTRS)

    Campbell, Janet W.

    1998-01-01

    The ocean has traditionally been viewed as a two-class system. Morel and Prieur (1977) classified ocean water according to the dominant absorbent particle suspended in the water column. Case 1 is described as having a high concentration of phytoplankton (and detritus) relative to other particles. Conversely, case 2 is described as having inorganic particles such as suspended sediments in high concentrations. Little work has gone into the problem of mixing bio-optical models for these different water types. An approach is put forth here to blend bio-optical algorithms based on a fuzzy classification scheme. This scheme involves two procedures. First, a clustering procedure identifies classes and builds class statistics from in-situ optical measurements. Next, a classification procedure assigns satellite pixels partial memberships to these classes based on their ocean color reflectance signature. These membership assignments can be used as the basis for weighting retrievals from class-specific bio-optical algorithms. This technique is demonstrated with in-situ optical measurements and an image from the SeaWiFS ocean color satellite.
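
    A minimal sketch of the blending idea is shown below, assuming two reflectance bands and two water classes with made-up class statistics rather than the actual SeaWiFS training set: memberships come from normalized Gaussian class densities and weight class-specific retrievals. The toy "case1"/"case2" algorithms are placeholders.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    # hypothetical class statistics from the in-situ clustering step (2 bands, 2 classes)
    class_means = [np.array([0.010, 0.004]), np.array([0.020, 0.015])]
    class_covs  = [np.diag([1.0e-5, 1.0e-5]), np.diag([4.0e-5, 4.0e-5])]

    def memberships(rrs):
        """Fuzzy membership of one reflectance vector in each class (sums to 1)."""
        dens = np.array([multivariate_normal(m, c).pdf(rrs)
                         for m, c in zip(class_means, class_covs)])
        return dens / dens.sum()

    def blended_retrieval(rrs, class_algorithms):
        """Weight class-specific retrievals (e.g. chlorophyll) by fuzzy memberships."""
        weights = memberships(rrs)
        return float(np.dot(weights, [alg(rrs) for alg in class_algorithms]))

    # toy stand-ins for case-1 and case-2 bio-optical algorithms
    case1 = lambda r: 0.5 * r[0] / r[1]
    case2 = lambda r: 1.2 * r[0] / r[1]
    print(blended_retrieval(np.array([0.012, 0.006]), [case1, case2]))
    ```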

  2. Numerical modeling of thermal regime in inland water bodies with field measurement data

    NASA Astrophysics Data System (ADS)

    Gladskikh, D.; Sergeev, D.; Baydakov, G.; Soustova, I.; Troitskaya, Yu.

    2018-01-01

    The modification of the LAKE program complex, which is intended to compute the thermal regimes of inland water bodies, and the results of its validation against the lake part of the Gorky water reservoir are reviewed in this study. The modification changed the procedure for assigning the input temperature profile and the parameterization of surface stress at the air-water boundary to account for the influence of wind on the mixing process. It also combines meteorological parameters taken from global meteorological reanalysis files with data from a hydrometeorological station. Temperature profiles measured with a CTD probe during expeditions in the period 2014-2017 were used to validate the model. A comparison between the measured data and the numerical results is provided, assessed through the time and temperature dependences at control points, the correspondence of the profile shapes, and the standard deviation over all performed realizations. It is demonstrated that the model reproduces the field measurements for all observed conditions and seasons. The numerical results for regimes with strong mixing are in the best quantitative and qualitative agreement with the measured profiles. The accuracy of the forecast for regimes with strong stratification near the surface is lower, but all features of the profile shapes are correctly reproduced.

  3. Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald

    2011-01-01

    An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementations in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark. The load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis results and the benchmark results were compared. Good agreements could be achieved by selecting the appropriate input parameters, which were determined in an iterative procedure.

  4. Numerical techniques for the solution of the compressible Navier-Stokes equations and implementation of turbulence models. [separated turbulent boundary layer flow problems

    NASA Technical Reports Server (NTRS)

    Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.

    1975-01-01

    The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.

  5. Bayesian function-on-function regression for multilevel functional data.

    PubMed

    Meyer, Mark J; Coull, Brent A; Versace, Francesco; Cinciripini, Paul; Morris, Jeffrey S

    2015-09-01

    Medical and public health research increasingly involves the collection of complex and high dimensional data. In particular, functional data, where the unit of observation is a curve or set of curves that are finely sampled over a grid, is frequently obtained. Moreover, researchers often sample multiple curves per person resulting in repeated functional measures. A common question is how to analyze the relationship between two functional variables. We propose a general function-on-function regression model for repeatedly sampled functional data on a fine grid, presenting a simple model as well as a more extensive mixed model framework, and introducing various functional Bayesian inferential procedures that account for multiple testing. We examine these models via simulation and a data analysis with data from a study that used event-related potentials to examine how the brain processes various types of images. © 2015, The International Biometric Society.

  6. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (the BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The development of a computational model (BOAT) for calculating nearfield jet entrainment, and its incorporation in an existing methodology for the prediction of nozzle boattail pressures, is discussed. The model accounts for the detailed turbulence and thermochemical processes occurring in the mixing layer formed between a jet exhaust and surrounding external stream while interfacing with the inviscid exhaust and external flowfield regions in an overlaid, interactive manner. The ability of the BOAT model to analyze simple free shear flows is assessed by comparisons with fundamental laboratory data. The overlaid procedure for incorporating variable pressures into BOAT and the entrainment correction employed to yield an effective plume boundary for the inviscid external flow are demonstrated. This is accomplished via application of BOAT in conjunction with the codes comprising the NASA/LRC patched viscous/inviscid methodology for determining nozzle boattail drag for subsonic/transonic external flows.

  7. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    PubMed

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™ (3M ESPE) and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the differences in distances between the master model and the stone models. One-way analysis of variance (ANOVA) was used for multiple group comparison followed by the Bonferroni test for pairwise comparisons. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly significant difference for each dimension measured (except for the inter-abutment distance between the first and the second die) between any two groups of stone models obtained from the four impression techniques. Pairwise comparison for each measurement did not reveal any significant difference (except for the faciolingual distance of the third die) between the casts produced using the two step double mix impression technique and the matrix impression system. The two step double mix impression technique produced stone dies that showed the least dimensional variation. During fabrication of a cast restoration, laboratory procedures should not only compensate for the cement thickness, but also for the increase or decrease in die dimensions.
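    A minimal sketch of the statistical comparison described above (one-way ANOVA followed by pairwise t tests with a Bonferroni-adjusted alpha); the group names and measurement values are made up, not the study's data.

```python
import numpy as np
from itertools import combinations
from scipy import stats

# Illustrative die measurements (mm) for one dimension under three techniques.
groups = {
    "monophase": np.array([7.02, 7.01, 7.03, 7.00, 7.02]),
    "one_step": np.array([7.06, 7.05, 7.07, 7.06, 7.05]),
    "two_step": np.array([7.01, 7.00, 7.01, 7.02, 7.01]),
}

# One-way ANOVA across techniques.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Pairwise two-tailed t tests with a Bonferroni-adjusted significance level.
pairs = list(combinations(groups, 2))
alpha_adj = 0.05 / len(pairs)
for a, b in pairs:
    t_stat, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_adj else "ns"
    print(f"{a} vs {b}: p={p:.4f} ({verdict} at Bonferroni alpha={alpha_adj:.4f})")
```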

  8. Discharges with surgical procedures performed less often than once per month per hospital account for two-thirds of hospital costs of inpatient surgery.

    PubMed

    O'Neill, Liam; Dexter, Franklin; Park, Sae-Hwan; Epstein, Richard H

    2017-09-01

    Most surgical discharges (54%) at the average hospital are for procedures performed no more often than once per month at that hospital. We hypothesized that such uncommon procedures would be associated with an even greater percentage of the total cost of performing all surgical procedures at that hospital. Observational study. State of Texas hospital discharge abstract data: 4th quarter of 2015 and 1st quarter of 2016. Inpatients discharged with a major therapeutic ("operative") procedure. For each of N=343 hospitals, counts of discharges, sums of lengths of stay (LOS), sums of diagnosis related group (DRG) case-mix weights, and sums of charges were obtained for each procedure or combination of procedures, classified by International Classification of Diseases version 10 Procedure Coding System (ICD-10-PCS). Each discharge was classified into two categories, uncommon versus not, defined as a procedure performed at most once per month versus those performed more often than once per month. Major procedures performed at most once per month per hospital accounted for an average among hospitals of 68% of the total inpatient costs associated with all major therapeutic procedures. On average, the percentage of total costs associated with uncommon procedures was 26% greater than expected based on their share of total discharges (P<0.00001). Average percentage differences were insensitive to the endpoint, with similar results for the percentage of patient days and percentage of DRG case-mix weights. Approximately two-thirds (mean 68%) of inpatient costs among surgical patients can be attributed to procedures performed at most once per month per hospital. The finding that such uncommon procedures account for a large percentage of costs is important because methods of cost accounting by procedure are generally unsuitable for them. Copyright © 2017 Elsevier Inc. All rights reserved.
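    A rough sketch of the classification described above, assuming a discharge-level table with hospital, procedure, and charge columns (all column names and values are illustrative, not the Texas data). In the study the "uncommon" threshold is at most once per month, i.e. at most six discharges over the two-quarter window; the toy threshold below is scaled down to fit the tiny example.

```python
import pandas as pd

# Illustrative discharge-level data; column names and values are made up.
df = pd.DataFrame({
    "hospital": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "procedure": ["p1", "p1", "p1", "p2", "p3", "p3", "p3", "p1"],
    "charges": [10e3, 12e3, 11e3, 55e3, 20e3, 22e3, 18e3, 9e3],
})

# Count discharges per hospital-procedure pair over the study window and flag
# "uncommon" procedures (real study: <= 6 in two quarters; toy threshold here: <= 2).
UNCOMMON_MAX = 2
counts = df.groupby(["hospital", "procedure"])["procedure"].transform("size")
df["uncommon"] = counts <= UNCOMMON_MAX

# Share of discharges and of total charges attributable to uncommon procedures, by hospital.
summary = df.groupby("hospital").apply(
    lambda g: pd.Series({
        "discharge_share": g["uncommon"].mean(),
        "cost_share": g.loc[g["uncommon"], "charges"].sum() / g["charges"].sum(),
    })
)
print(summary)
```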

  9. Differentiated Financing of Schools in French-Speaking Belgium: Prospectives for Regulating a School Quasi-Market

    ERIC Educational Resources Information Center

    Demeuse, Marc; Derobertmasure, Antoine; Friant, Nathanael

    2010-01-01

    The school quasi-market in French-speaking Belgium is characterised by segregation. Efforts to apply measures that encourage greater social mixing have met with stiff resistance. In 2008 and 2009, turbulence was caused by the application of the "social mixing" law influencing the registration procedures. The purpose of this article is to…

  10. A New Item Selection Procedure for Mixed Item Type in Computerized Classification Testing.

    ERIC Educational Resources Information Center

    Lau, C. Allen; Wang, Tianyou

    This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…

  11. The Virginia method of determining the cement content of freshly mixed cement-soil mixtures : a manual prepared for the use of the Virginia Dept. of Highways.

    DOT National Transportation Integrated Search

    1971-01-01

    This manual describes a new method developed by the author, based on ASTM Method D2901, for determining the cement content of freshly mixed soil cement. The manual contains information on apparatus, reagents, procedures, source of equipment and reage...

  12. Quantitative Determination of Citric and Ascorbic Acid in Powdered Drink Mixes

    ERIC Educational Resources Information Center

    Sigmann, Samuella B.; Wheeler, Dale E.

    2004-01-01

    A procedure by which the reactions are used to quantitatively determine the amount of total acid, the amount of total ascorbic acid, and the amount of citric acid in a given sample of powdered drink mix is described. A safe, reliable, and low-cost quantitative method to analyze a consumer product for acid content is provided.

  13. Influence of Finite Element Software on Energy Release Rates Computed Using the Virtual Crack Closure Technique

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)

    2006-01-01

    Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.

  14. Ocean Turbulence. Paper 3; Two-Point Closure Model Momentum, Heat and Salt Vertical Diffusivities in the Presence of Shear

    NASA Technical Reports Server (NTRS)

    Canuto, V. M.; Dubovikov, M. S.; Howard, A.; Cheng, Y.

    1999-01-01

    In papers 1 and 2 we have presented the results of the most updated 1-point closure model for the turbulent vertical diffusivities of momentum, heat and salt, K(sub m,h,s). In this paper, we derive the analytic expressions for K(sub m,h,s) using a new 2-point closure model that has recently been developed and successfully tested against some approx. 80 turbulence statistics for different flows. The new model has no free parameters. The expressions for K(sub m,h,s) are analytical functions of two stability parameters: the Turner number R(sub rho) (salinity gradient/temperature gradient) and the Richardson number R(sub i) (temperature gradient/shear). The turbulent kinetic energy K and its rate of dissipation may be taken local or non-local (K-epsilon model). Contrary to all previous models, which describe turbulent mixing below the mixed layer (ML) by adopting three adjustable "background diffusivities" for momentum, heat and salt, we propose a model that avoids such adjustable diffusivities. We assume that below the ML, K(sub m,h,s) have the same functional dependence on R(sub i) and R(sub rho) derived from the turbulence model. However, in order to compute R(sub i) below the ML, we use data of vertical shear due to wave-breaking measured by Gargett et al. (1981). The procedure frees the model from adjustable background diffusivities, and indeed we use the same model throughout the entire vertical extent of the ocean. Using the new K(sub m,h,s), we run an O-GCM and present a variety of results that we compare with Levitus and the KPP model. Since the traditional 1-point (used in papers 1 and 2) and the new 2-point closure models used here represent different modeling philosophies and procedures, testing them in an O-GCM is indispensable. The basic motivation is to show that the new 2-point closure model gives results that are overall superior to the 1-point closure, in spite of the fact that the latter relies on several adjustable parameters while the new 2-point closure has none. After the extensive comparisons presented in papers 1 and 2, we conclude that the new model presented here is overall superior, not only because it is parameter free but also because it is part of a more general turbulence model that has been previously and successfully tested on a wide variety of other types of turbulent flows.

  15. Quantifying the effect of mixing on the mean age of air in CCMVal-2 and CCMI-1 models

    NASA Astrophysics Data System (ADS)

    Dietmüller, Simone; Eichinger, Roland; Garny, Hella; Birner, Thomas; Boenisch, Harald; Pitari, Giovanni; Mancini, Eva; Visioni, Daniele; Stenke, Andrea; Revell, Laura; Rozanov, Eugene; Plummer, David A.; Scinocca, John; Jöckel, Patrick; Oman, Luke; Deushi, Makoto; Kiyotaka, Shibata; Kinnison, Douglas E.; Garcia, Rolando; Morgenstern, Olaf; Zeng, Guang; Stone, Kane Adam; Schofield, Robyn

    2018-05-01

    The stratospheric age of air (AoA) is a useful measure of the overall capabilities of a general circulation model (GCM) to simulate stratospheric transport. Previous studies have reported a large spread in the simulation of AoA by GCMs and coupled chemistry-climate models (CCMs). Compared to observational estimates, simulated AoA is mostly too low. Here we attempt to untangle the processes that lead to the AoA differences between the models and between models and observations. AoA is influenced by both mean transport by the residual circulation and two-way mixing; we quantify the effects of these processes using data from the CCM inter-comparison projects CCMVal-2 (Chemistry-Climate Model Validation Activity 2) and CCMI-1 (Chemistry-Climate Model Initiative, phase 1). Transport along the residual circulation is measured by the residual circulation transit time (RCTT). We interpret the difference between AoA and RCTT as additional aging by mixing. Aging by mixing thus includes mixing on both the resolved and subgrid scale. We find that the spread in AoA between the models is primarily caused by differences in the effects of mixing and only to some extent by differences in residual circulation strength. These effects are quantified by the mixing efficiency, a measure of the relative increase in AoA by mixing. The mixing efficiency varies strongly between the models from 0.24 to 1.02. We show that the mixing efficiency is not only controlled by horizontal mixing, but by vertical mixing and vertical diffusion as well. Possible causes for the differences in the models' mixing efficiencies are discussed. Differences in subgrid-scale mixing (including differences in advection schemes and model resolutions) likely contribute to the differences in mixing efficiency. However, differences in the relative contribution of resolved versus parameterized wave forcing do not appear to be related to differences in mixing efficiency or AoA.
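    Reading the decomposition described above as AoA = RCTT + aging by mixing, and the mixing efficiency as the relative increase in AoA due to mixing, a minimal sketch (with made-up transit times; the paper's exact definition may involve integrals along the circulation) is:

```python
import numpy as np

# Illustrative annual-mean values (years) for one model; numbers are made up.
aoa = np.array([4.6, 4.8, 5.1])    # stratospheric age of air at a few latitudes
rctt = np.array([2.5, 2.6, 2.8])   # residual circulation transit time

aging_by_mixing = aoa - rctt                 # additional aging attributed to two-way mixing
mixing_efficiency = aging_by_mixing / rctt   # relative increase in AoA by mixing

print("aging by mixing (yr):", aging_by_mixing)
print("mixing efficiency:", mixing_efficiency.round(2))
```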

  16. BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime

    2015-08-10

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.

  17. Beyond Mixing-length Theory: A Step Toward 321D

    NASA Astrophysics Data System (ADS)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav

    2015-08-01

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier-Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier-Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.

  18. Safety and efficacy of hysteroscopic sterilization compared with laparoscopic sterilization: an observational cohort study.

    PubMed

    Mao, Jialin; Pfeifer, Samantha; Schlegel, Peter; Sedrakyan, Art

    2015-10-13

    To compare the safety and efficacy of hysteroscopic sterilization with the "Essure" device with laparoscopic sterilization in a large, all-inclusive, state cohort. Population based cohort study. Outpatient interventional setting in New York State. Women undergoing interval sterilization procedures, including hysteroscopic sterilization with the Essure device and laparoscopic surgery, between 2005 and 2013. Safety events within 30 days of procedures; unintended pregnancies and reoperations within one year of procedures. A mixed model accounting for hospital clustering was used to compare 30 day and 1 year outcomes, adjusting for patient characteristics and other confounders. Time to reoperation was evaluated using a frailty model for time to event analysis. We identified 8048 patients undergoing hysteroscopic sterilization and 44,278 undergoing laparoscopic sterilization between 2005 and 2013 in New York State. There was a significant increase in the use of hysteroscopic procedures during this period, while use of laparoscopic sterilization decreased. Patients undergoing hysteroscopic sterilization were older than those undergoing laparoscopic sterilization and were more likely to have a history of pelvic inflammatory disease (10.3% v 7.2%, P<0.01), major abdominal surgery (9.4% v 7.9%, P<0.01), and cesarean section (23.2% v 15.4%, P<0.01). At one year after surgery, hysteroscopic sterilization was not associated with a higher risk of unintended pregnancy (odds ratio 0.84 (95% CI 0.63 to 1.12)) but was associated with a substantially increased risk of reoperation (odds ratio 10.16 (7.47 to 13.81)) compared with laparoscopic sterilization. Patients undergoing hysteroscopic sterilization have a similar risk of unintended pregnancy but a more than 10-fold higher risk of undergoing reoperation compared with patients undergoing laparoscopic sterilization. Benefits and risks of both procedures should be discussed with patients for informed decision making. © Mao et al 2015.

  19. Analyses and simulations of the upper ocean's response to Hurricane Felix at the Bermuda Testbed Mooring site: 13-23 August 1995

    NASA Astrophysics Data System (ADS)

    Zedler, S. E.; Dickey, T. D.; Doney, S. C.; Price, J. F.; Yu, X.; Mellor, G. L.

    2002-12-01

    The center of Hurricane Felix passed 85 km to the southwest of the Bermuda Testbed Mooring (BTM; 31°44'N, 64°10'W) site on 15 August 1995. Data collected in the upper ocean from the BTM during this encounter provide a rare opportunity to investigate the physical processes that occur in a hurricane's wake. Data analyses indicate that the storm caused a large increase in kinetic energy at near-inertial frequencies, internal gravity waves in the thermocline, and inertial pumping, mixed layer deepening, and significant vertical redistribution of heat, with cooling of the upper 30 m and warming at depths of 30-70 m. The temperature evolution was simulated using four one-dimensional mixed layer models: Price-Weller-Pinkel (PWP), K Profile Parameterization (KPP), Mellor-Yamada 2.5 (MY), and a modified version of MY2.5 (MY2). The primary differences in the model results were in their simulations of temperature evolution. In particular, when forced using a drag coefficient that had a linear dependence on wind speed, the KPP model predicted sea surface cooling, mixed layer currents, and the maximum depth of cooling closer to the observations than any of the other models. This was shown to be partly because of a special parameterization for gradient Richardson number (RgKPP) shear instability mixing in response to resolved shear in the interior. The MY2 model predicted more sea surface cooling and greater depth penetration of kinetic energy than the MY model. In the MY2 model the dissipation rate of turbulent kinetic energy is parameterized as a function of a locally defined Richardson number (RgMY2) allowing for a reduction in dissipation rate for stable Richardson numbers (RgMY2) when internal gravity waves are likely to be present. Sensitivity simulations with the PWP model, which has specifically defined mixing procedures, show that most of the heat lost from the upper layer was due to entrainment (parameterized as a function of bulk Richardson number RbPWP), with the remainder due to local Richardson number (RgPWP) instabilities. With the exception of the MY model the models predicted reasonable estimates of the north and east current components during and after the hurricane passage at 25 and 45 m. Although the results emphasize differences between the modeled responses to a given wind stress, current controversy over the formulation of wind stress from wind speed measurements (including possible sea state and wave age and sheltering effects) cautions against using our results for assessing model skill. In particular, sensitivity studies show that MY2 simulations of the temperature evolution are excellent when the wind stress is increased, albeit with currents that are larger than observed. Sensitivity experiments also indicate that preexisting inertial motion modulated the amplitude of poststorm currents, but that there was probably not a significant resonant response because of clockwise wind rotation for our study site.
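    The PWP-type entrainment referred to above is commonly implemented as a bulk Richardson number criterion: the mixed layer deepens until R_b = g Δρ h / (ρ0 ΔU^2) exceeds a critical value (0.65 in the standard PWP scheme). The sketch below illustrates that idea with made-up profile values; it is not the configuration used in the study.

```python
import numpy as np

G = 9.81         # gravity (m s^-2)
RHO0 = 1025.0    # reference density (kg m^-3)
RB_CRIT = 0.65   # critical bulk Richardson number in the standard PWP scheme

def bulk_richardson(h, drho, du):
    """Bulk Richardson number for a mixed layer of depth h (m) with density
    jump drho (kg m^-3) and velocity jump du (m s^-1) at its base."""
    return G * drho * h / (RHO0 * du ** 2 + 1e-12)

# Entrain (deepen the mixed layer) while the bulk Richardson number is subcritical.
h, drho, du = 20.0, 0.15, 0.45   # illustrative post-storm values (made up)
while bulk_richardson(h, drho, du) < RB_CRIT:
    h += 1.0       # deepen by one model level
    drho += 0.02   # density jump grows as denser water is reached (made up)
    du *= 0.95     # shear is diluted over a thicker layer (made up)
print(f"equilibrium mixed layer depth ~ {h:.0f} m")
```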

  20. [Primary branch size of Pinus koraiensis plantation: a prediction based on linear mixed effect model].

    PubMed

    Dong, Ling-Bo; Liu, Zhao-Gang; Li, Feng-Ri; Jiang, Li-Chun

    2013-09-01

    By using the branch analysis data of 955 standard branches from 60 sampled trees in 12 sampling plots of Pinus koraiensis plantation in Mengjiagang Forest Farm in Heilongjiang Province of Northeast China, and based on linear mixed-effect model theory and methods, models for predicting branch variables, including primary branch diameter, length, and angle, were developed. Considering the tree effect, the MIXED module of SAS software was used to fit the prediction models. The results indicated that the fitting precision of the models could be improved by choosing appropriate random-effect parameters and variance-covariance structure. Then, correlation structures including the compound symmetry structure (CS), first-order autoregressive structure [AR(1)], and first-order autoregressive and moving average structure [ARMA(1,1)] were added to the optimal branch size mixed-effect model. The AR(1) structure significantly improved the fitting precision of the branch diameter and length mixed-effect models, but none of the three structures improved the precision of the branch angle mixed-effect model. To describe heteroscedasticity when building the mixed-effect models, the CF1 and CF2 functions were added to the branch mixed-effect models. The CF1 function significantly improved the fit of the branch angle mixed model, whereas the CF2 function significantly improved the fit of the branch diameter and length mixed models. Model validation confirmed that the mixed-effect model could improve the precision of prediction compared to the traditional regression model for branch size prediction in Pinus koraiensis plantations.
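    A minimal sketch of this kind of branch-size mixed-effects model, using a random intercept for each sampled tree (statsmodels MixedLM). The variable names and simulated data are illustrative, and the AR(1) residual correlation and variance functions used in the study are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated branch data: diameter driven by relative height within the crown
# and tree size, plus a per-tree random effect and residual noise.
n_trees, n_branch = 12, 20
tree = np.repeat(np.arange(n_trees), n_branch)
rel_height = rng.uniform(0, 1, n_trees * n_branch)
dbh = np.repeat(rng.uniform(15, 35, n_trees), n_branch)
tree_effect = np.repeat(rng.normal(0, 0.4, n_trees), n_branch)
branch_diam = (0.8 + 0.05 * dbh - 1.5 * rel_height
               + tree_effect + rng.normal(0, 0.3, n_trees * n_branch))

df = pd.DataFrame({"tree": tree, "rel_height": rel_height,
                   "dbh": dbh, "branch_diam": branch_diam})

# Linear mixed-effects model: fixed effects for relative height and tree size,
# random intercept for tree (the "tree effect" in the abstract).
model = smf.mixedlm("branch_diam ~ rel_height + dbh", df, groups=df["tree"])
result = model.fit()
print(result.summary())
```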

  1. Variability in Non-Cardiac Surgical Procedures in Children with Congenital Heart Disease

    PubMed Central

    Sulkowski, Jason P.; Cooper, Jennifer N.; McConnell, Patrick I.; Pasquali, Sara K.; Shah, Samir S.; Minneci, Peter C.; Deans, Katherine J.

    2014-01-01

    Background The purpose of this study was to examine the volume and variability of non-cardiac surgeries performed in children with congenital heart disease (CHD) requiring cardiac surgery in the first year of life. Methods Patients who underwent cardiac surgery by 1 year of age and had a minimum 5-year follow-up at 22 of the hospitals contributing to the Pediatric Health Information System database between 2004–2012 were included. Frequencies of non-cardiac surgical procedures by age 5 years were determined and categorized by subspecialty. Patients were stratified according to their maximum RACHS-1 (Risk Adjustment in Congenital Heart Surgery) category. The proportions of patients across hospitals who had a non-cardiac surgical procedure for each subspecialty were compared using logistic mixed effects models. Results 8,857 patients underwent congenital heart surgery during the first year of life, 3,621 (41%) of whom had 13,894 non-cardiac surgical procedures by 5 years. Over half of all procedures were in general surgery (4,432; 31.9%) or otolaryngology (4,002; 28.8%). There was significant variation among hospitals in the proportion of CHD patients having non-cardiac surgical procedures. Compared to children in the low risk group (RACHS-1 categories 1–3), children in the high-risk group (categories 4–6) were more likely to have general, dental, orthopedic, and thoracic procedures. Conclusions Children with CHD requiring cardiac surgery frequently also undergo non-cardiac surgical procedures; however, considerable variability in the frequency of these procedures exists across hospitals. This suggests a lack of uniformity in indications used for surgical intervention. Further research should aim to better standardize care for this complex patient population. PMID:25475794

  2. Implementation of a Self-Management System for Students with Disabilities in General Education Settings

    ERIC Educational Resources Information Center

    Schulze, Margaret A.

    2016-01-01

    Despite the fact that self-management procedures have a robust literature base attesting to their efficacy with students with disabilities, the use of these strategies in general education settings remains limited. This mixed methods study examined the implementation of self-management procedures using both quantitative and qualitative methods.…

  3. The Misconceptions of Abuse by School Personnel: A Public School Perspective

    ERIC Educational Resources Information Center

    Starling, Kathleen

    2011-01-01

    The purpose of this study was to determine the effectiveness of school personnel's perceptions, their need to examine their sexual abuse policies and procedures, and the provision of appropriate training to disseminate those policies and procedures to all stakeholders in their educational setting. A mixed methods study was used to explore the hypothesis and…

  4. A mixed multiscale model better accounting for the cross term of the subgrid-scale stress and for backscatter

    NASA Astrophysics Data System (ADS)

    Thiry, Olivier; Winckelmans, Grégoire

    2016-02-01

    In the large-eddy simulation (LES) of turbulent flows, models are used to account for the subgrid-scale (SGS) stress. We here consider LES with "truncation filtering only" (i.e., that due to the LES grid), thus without regular explicit filtering added. The SGS stress tensor is then composed of two terms: the cross term that accounts for interactions between resolved scales and unresolved scales, and the Reynolds term that accounts for interactions between unresolved scales. Both terms provide forward- (dissipation) and backward (production, also called backscatter) energy transfer. Purely dissipative, eddy-viscosity type, SGS models are widely used: Smagorinsky-type models, or more advanced multiscale-type models. Dynamic versions have also been developed, where the model coefficient is determined using a dynamic procedure. Being dissipative by nature, those models do not provide backscatter. Even when using the dynamic version with local averaging, one typically uses clipping to forbid negative values of the model coefficient and hence ensure the stability of the simulation; this removes the backscatter produced by the dynamic procedure. More advanced SGS models, which better conform to the physics of the true SGS stress while remaining stable, are thus desirable. We here investigate, in decaying homogeneous isotropic turbulence, and using a de-aliased pseudo-spectral method, the behavior of the cross term and of the Reynolds term: in terms of dissipation spectra, and in terms of probability density function (pdf) of dissipation in physical space: positive and negative (backscatter). We then develop a new mixed model that better accounts for the physics of the SGS stress and for the backscatter. It has a cross term part which is built using a scale-similarity argument, further combined with a correction for Galilean invariance using a pseudo-Leonard term: this is the term that also provides backscatter. It also has an eddy-viscosity multiscale model part that accounts for all the remaining phenomena (also for the incompleteness of the cross term model), that is dynamic and that adjusts the overall dissipation. The model is tested, both a priori and a posteriori, and is compared to the direct numerical simulation and to the exact SGS terms, also in time. The model is seen to provide accurate energy spectra, also in comparison to the dynamic Smagorinsky model. It also provides significant backscatter (although four times less than the real SGS stress), while remaining stable.
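    For intuition on the scale-similarity idea used for the cross-term part, the sketch below computes a standard Bardina-type scale-similarity estimate of an SGS stress component on a 1-D periodic signal. This is only an illustration of the generic scale-similarity construction; it is not the paper's specific model, which adds a Galilean-invariance (pseudo-Leonard) correction and a dynamic multiscale eddy-viscosity part.

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def test_filter(u, width=9):
    """Top-hat (box) test filter on a periodic 1-D field."""
    return uniform_filter1d(u, size=width, mode="wrap")

# Illustrative 1-D periodic "resolved" velocity field (made up).
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
rng = np.random.default_rng(1)
u = np.sin(x) + 0.3 * np.sin(7 * x + 1.0) + 0.05 * rng.standard_normal(n)

# Bardina-type scale-similarity estimate of the SGS stress component tau_11:
# tau_ss = filter(u*u) - filter(u)*filter(u). Unlike an eddy-viscosity model,
# this term can be locally negative, i.e. it allows backscatter.
tau_ss = test_filter(u * u) - test_filter(u) ** 2
print("mean:", tau_ss.mean(), "min (can go negative):", tau_ss.min())
```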

  5. Development of an in-house mixed-mode solid-phase extraction for the determination of 16 basic drugs in urine by High Performance Liquid Chromatography-Ion Trap Mass Spectrometry.

    PubMed

    Musile, Giacomo; Cenci, Lucia; Piletska, Elena; Gottardo, Rossella; Bossi, Alessandra M; Bortolotti, Federica

    2018-07-27

    The aim of the present work was to develop a novel in-house mixed-mode SPE sorbent to be used for the HPLC-Ion Trap MS determination of 16 basic drugs in urine. Using computational modelling, a virtual monomer library was screened, identifying three suitable functional monomers: methacrylic acid (MAA), itaconic acid (IA) and 2-acrylamide-2-methylpropane sulfonic acid (AMPSA). Three different sorbents were then synthesized from these monomers, using trimethylolpropane trimethacrylate (TMPTMA) as the cross-linker. The sorbent characterization analyses led to the selection of the AMPSA-based phase. Using this novel in-house sorbent, an SPE-HPLC-Ion Trap MS method for drug analysis in urine was validated, proving to be selective and accurate and showing sensitivity adequate for toxicological urine analysis. The comparison of the in-house mixed-mode SPE sorbent with two analogous commercial mixed-mode SPE phases showed that the former was better not only in terms of process efficiency, but also in terms of quality-price ratio. To the best of our knowledge, this is the first time that an in-house SPE procedure has been applied to the toxicological analysis of a complex matrix, such as urine. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. A continuous mixing model for pdf simulations and its applications to combusting shear flows

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Chen, J.-Y.

    1991-01-01

    The problem of time discontinuity (or jump condition) in the coalescence/dispersion (C/D) mixing model is addressed in this work. A C/D mixing model continuous in time is introduced. With the continuous mixing model, the process of chemical reaction can be fully coupled with mixing. In the case of homogeneous turbulence decay, the new model predicts a pdf very close to a Gaussian distribution, with finite higher moments also close to those of a Gaussian distribution. Results from the continuous mixing model are compared with both experimental data and numerical results from conventional C/D models.
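    For reference, the classical discrete-in-time coalescence/dispersion step that the continuous model generalizes can be sketched as follows: randomly selected particle pairs relax toward their pair mean. This is a generic Curl-type step for illustration only, not the authors' continuous formulation.

```python
import numpy as np

def cd_mixing_step(phi, n_pairs, rng, extent=1.0):
    """One Curl-type coalescence/dispersion step on an ensemble of particle
    scalar values phi: n_pairs randomly chosen pairs move toward their pair
    mean by the given extent (extent=1 is the classical Curl model)."""
    phi = phi.copy()
    idx = rng.permutation(phi.size)[: 2 * n_pairs].reshape(n_pairs, 2)
    pair_mean = phi[idx].mean(axis=1, keepdims=True)
    phi[idx] += extent * (pair_mean - phi[idx])
    return phi

# Illustrative: an initially unmixed (bimodal) scalar relaxes toward a single
# peak as mixing proceeds; the variance decays while the mean is conserved.
rng = np.random.default_rng(2)
phi = np.concatenate([np.zeros(500), np.ones(500)])
for _ in range(50):
    phi = cd_mixing_step(phi, n_pairs=100, rng=rng)
print("mean:", round(phi.mean(), 3), "variance:", round(phi.var(), 4))
```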

  7. Heterogeneous effects of health insurance on out-of-pocket expenditure on medicines in Mexico.

    PubMed

    Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Servan-Mori, Edson; Avila-Burgos, Leticia

    2012-01-01

    Given the importance of health insurance for financing medicines and recent policy changes designed to reduce health-related out-of-pocket expenditure (OOPE) in Mexico, our study examined and analyzed the effect of health insurance on the probability and amount of OOPE for medicines and the proportion spent from household available expenditure (AE) funds. We conducted a cross-sectional analysis by using the Mexican National Household Survey of Income and Expenditures for 2008. Households were grouped according to household medical insurance type (Social Security, Seguro Popular, mixed, or no affiliation). OOPE for medicines and health costs, and the probability of occurrence, were estimated with linear regression models; subsequently, the proportion of health expenditures from AE was calculated. The Heckman selection procedure was used to correct for self-selection of health expenditure; a propensity score matching procedure and an alternative procedure using instrumental variables were used to correct for heterogeneity between households with and without Seguro Popular. OOPE on medicines accounts for 66% of the total health expenditures and 5% of the AE. Households with health insurance had a lower probability of OOPE for medicines than their comparison groups. There was heterogeneity in the health insurance effect on the proportion of OOPE for medicines out of the AE, with a reduction of 1.7% for households with Social Security, 1.4% for mixed affiliation, but no difference between Seguro Popular and matched households without insurance. Medicines were the most prevalent component of health expenditures in Mexico. We recommend improving access to health services and strengthening access to medicines to reduce high OOPE. Copyright © 2012 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
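    A minimal sketch of the propensity-score matching step mentioned above (logistic propensity model plus 1-nearest-neighbour matching on the score). The covariates, outcome, and effect sizes are simulated for illustration, and the Heckman selection and instrumental-variable corrections also used in the study are omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)

# Simulated household covariates and insurance indicator (1 = insured).
n = 2000
X = rng.normal(size=(n, 3))   # e.g. income, household size, rurality (made up)
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1]))))

# 1. Propensity scores from a logistic regression of treatment on covariates.
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. Match each insured household to the uninsured household with the closest score.
treated_idx = np.where(treat == 1)[0]
control_idx = np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control_idx].reshape(-1, 1))
_, matched = nn.kneighbors(ps[treated_idx].reshape(-1, 1))
matched_controls = control_idx[matched.ravel()]

# 3. Compare an outcome (e.g. OOPE share of AE) between matched groups.
oope_share = 0.05 - 0.01 * treat + rng.normal(0, 0.02, n)   # made-up outcome
effect = oope_share[treated_idx].mean() - oope_share[matched_controls].mean()
print(f"matched difference in OOPE share: {effect:.4f}")
```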

  8. Incorporating Comorbidity Within Risk Adjustment for UK Pediatric Cardiac Surgery.

    PubMed

    Brown, Katherine L; Rogers, Libby; Barron, David J; Tsang, Victor; Anderson, David; Tibby, Shane; Witter, Thomas; Stickley, John; Crowe, Sonya; English, Kate; Franklin, Rodney C; Pagel, Christina

    2017-07-01

    When considering early survival rates after pediatric cardiac surgery it is essential to adjust for risk linked to case complexity. An important but previously less well understood component of case mix complexity is comorbidity. The National Congenital Heart Disease Audit data representing all pediatric cardiac surgery procedures undertaken in the United Kingdom and Ireland between 2009 and 2014 was used to develop and test groupings for comorbidity and additional non-procedure-based risk factors within a risk adjustment model for 30-day mortality. A mixture of expert consensus based opinion and empiric statistical analyses were used to define and test the new comorbidity groups. The study dataset consisted of 21,838 pediatric cardiac surgical procedure episodes in 18,834 patients with 539 deaths (raw 30-day mortality rate, 2.5%). In addition to surgical procedure type, primary cardiac diagnosis, univentricular status, age, weight, procedure type (bypass, nonbypass, or hybrid), and era, the new risk factor groups of non-Down congenital anomalies, acquired comorbidities, increased severity of illness indicators (eg, preoperative mechanical ventilation or circulatory support) and additional cardiac risk factors (eg, heart muscle conditions and raised pulmonary arterial pressure) all independently increased the risk of operative mortality. In an era of low mortality rates across a wide range of operations, non-procedure-based risk factors form a vital element of risk adjustment and their presence leads to wide variations in the predicted risk of a given operation. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Numerical modeling of turbulent swirling flow in a multi-inlet vortex nanoprecipitation reactor using dynamic DDES

    NASA Astrophysics Data System (ADS)

    Hill, James C.; Liu, Zhenping; Fox, Rodney O.; Passalacqua, Alberto; Olsen, Michael G.

    2015-11-01

    The multi-inlet vortex reactor (MIVR) has been developed to provide a platform for rapid mixing in the application of flash nanoprecipitation (FNP) for manufacturing functional nanoparticles. Unfortunately, commonly used RANS methods are unable to accurately model this complex swirling flow. Large eddy simulations have also been problematic, as expensive fine grids to accurately model the flow are required. These dilemmas led to the strategy of applying a Delayed Detached Eddy Simulation (DDES) method to the vortex reactor. In the current work, the turbulent swirling flow inside a scaled-up MIVR has been investigated by using a dynamic DDES model. In the DDES model, the eddy viscosity has a form similar to the Smagorinsky sub-grid viscosity in LES and allows the implementation of a dynamic procedure to determine its coefficient. The complex recirculating back flow near the reactor center has been successfully captured by using this dynamic DDES model. Moreover, the simulation results are found to agree with experimental data for mean velocity and Reynolds stresses.

  10. Manganese L-edge X-ray absorption spectroscopy of manganese catalase from Lactobacillus plantarum and mixed valence manganese complexes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grush, M.M.; Chen, J.; George, S.J.

    1996-01-10

    The first Mn L-edge absorption spectra of a Mn metalloprotein are presented in this paper. Both reduced and superoxidized Mn catalase have been examined by fluorescence-detected soft X-ray absorption spectroscopy, and their Mn L-edge spectra are dramatically different. The spectrum of reduced Mn(II)Mn(II) catalase has been interpreted by ligand field atomic multiplet calculations and by comparison to model compound spectra. The analysis finds a 10 Dq value of nearly 1.1 eV, consistent with coordination by predominately nitrogen and oxygen donor ligands. For interpretation of mixed valence Mn spectra, an empirical simulation procedure based on the addition of homovalent model compound spectra has been developed and was tested on a variety of Mn complexes and superoxidized Mn catalase. This routine was also used to determine the oxidation state composition of the Mn in [Ba8Na2ClMn16(OH)8(CO3)4L8]·53 H2O (L = 1,3-diamino-2-hydroxypropane-N,N,N',N'-tetraacetic acid). 27 refs., 6 figs.

  11. Hydro-Economic Modeling with Minimum Data Requirements: An Application to the São Francisco River Basin, Brazil

    NASA Astrophysics Data System (ADS)

    Torres, M.; Maneta, M.; Vosti, S.; Wallender, W.; Howitt, R.

    2008-12-01

    Policymakers have been charged with the efficient, equitable, and sustainable use of water resources of the São Francisco River Basin (SFRB), Brazil, and also with the promotion of economic growth and the reduction of poverty within the basin. To date, policymakers lack scientific evidence on the potential consequences for growth, poverty alleviation or environmental sustainability of alternative uses of water resources. To address these key knowledge gaps, we have linked a hydrologic and an economic model of agriculture to investigate how economic decisions affect available water, and vice versa. More specifically, the models are used to predict the effects of the application of Brazilian federal surface water use policies on farmer's net revenues and on the hydrologic system. The Economic Model of Agriculture. A spatially explicit, farm-level model capable of accommodating a broad array of farm sizes and farm/farmer characteristics is developed and used to predict the effects of alternative water policies and neighbors' water use patterns on crop mix choice. A production function comprised of seven categories of non-water-related inputs used in agriculture (land, fertilizers, pesticides, seeds, hired labor, family labor and machinery) and four water-related inputs used in agriculture (applied water, irrigation labor, irrigation capital and energy) is estimated. The parameters emerging from this estimated production function are then introduced into a non-linear, net revenue maximization positive mathematical programming algorithm that is used for simulations. The Hydrological Model. MIKE Basin, a semi-distributed hydrology model, is used to calculate water budgets for the SFRB. MIKE Basin calculates discharge at selected nodes by accumulating runoff down the river network; it simulates reservoirs using stage-area-storage and downstream release rule curves. The data used to run the model are discharge to calculate local runoff, precipitation, reference ET, crop coefficients to calculate adjusted transpiration, and reservoir operating rules. Linking the Hydro and Economic Models. Based on the crop mix and area under plow of a reference year the economic model of agriculture was calibrated. Following a Monte Carlo procedure, the statistical distribution of water flows was estimated at each of the 16 selected nodes in the SFRB. The 5th and 95th percentiles of that distribution were used as benchmarks of water availability for drought and wet years, respectively. After subtracting 2000 m3 s-1 reserved for downstream uses estimates of water availability are used as constraints in the net revenue maximization algorithm included in the model of agriculture economics. If water is binding, it will influence crop mix, area under plow and product mix choices. Results. The application of the Brazilian federal water use policies will have immediate and substantial effects on agricultural area and product mix. Agricultural incomes will fall, especially in downstream areas located near major river channels.
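    A toy version of the constrained net-revenue maximization described above: choose crop areas to maximize revenue minus quadratic (PMP-style) costs subject to seasonal water and land availability. All crop counts, prices, coefficients, and caps below are invented for illustration and are not the SFRB calibration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative crop economics: per-ha revenue, quadratic cost coefficient, water use.
revenue = np.array([1200.0, 900.0, 600.0])   # $/ha
cost_q = np.array([2.0, 1.0, 0.5])           # $/ha^2 (PMP-style rising marginal cost)
water = np.array([9000.0, 6000.0, 3000.0])   # m^3/ha
water_cap = 2.0e6                             # water available in a dry year (m^3)
land_cap = 400.0                              # irrigable area (ha)

def neg_net_revenue(a):
    # Negative of net revenue, since scipy minimizes.
    return -(revenue @ a - cost_q @ (a ** 2))

constraints = [
    {"type": "ineq", "fun": lambda a: water_cap - water @ a},  # water availability
    {"type": "ineq", "fun": lambda a: land_cap - a.sum()},     # land availability
]
res = minimize(neg_net_revenue, x0=np.full(3, 50.0), bounds=[(0, None)] * 3,
               method="SLSQP", constraints=constraints)
print("areas (ha):", res.x.round(1), " net revenue ($):", round(-res.fun))
```

    When the water constraint binds (as in the simulated drought years), the optimal crop mix shifts toward the less water-intensive activities, which is the qualitative behavior the linked models are designed to capture.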

  12. Application of zero-inflated poisson mixed models in prognostic factors of hepatitis C.

    PubMed

    Akbarzadeh Baghban, Alireza; Pourhoseingholi, Asma; Zayeri, Farid; Jafari, Ali Akbar; Alavian, Seyed Moayed

    2013-01-01

    In recent years, hepatitis C virus (HCV) infection has represented a major public health problem. Evaluation of risk factors is one of the approaches that help protect people from the infection. This study aims to employ zero-inflated Poisson mixed models to evaluate prognostic factors of hepatitis C. The data were collected from a longitudinal study during 2005-2010. First, a mixed Poisson regression (PR) model was fitted to the data. Then, a mixed zero-inflated Poisson model was fitted with compound Poisson random effects. For evaluating the performance of the proposed mixed model, standard errors of the estimators were compared. The results obtained from the mixed PR model showed that genotype 3 and the treatment protocol were statistically significant. Results of the zero-inflated Poisson mixed model showed that age, sex, genotypes 2 and 3, the treatment protocol, and having risk factors had significant effects on the viral load of HCV patients. Of these two models, the estimators of the zero-inflated Poisson mixed model had the smaller standard errors. The results showed that the mixed zero-inflated Poisson model provided the better fit. The proposed model can capture serial dependence, additional overdispersion, and excess zeros in the longitudinal count data.
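    To illustrate the model family discussed above, the sketch below fits a plain (non-mixed) zero-inflated Poisson model with statsmodels on simulated data; the random-effects (mixed) extension and compound Poisson random effects used in the study are not shown.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(4)

# Simulated counts: a covariate x drives the Poisson mean, and extra zeros are
# injected with probability 0.3 (the "inflation" component).
n = 1000
x = rng.normal(size=n)
mu = np.exp(0.2 + 0.5 * x)
y = rng.poisson(mu)
y[rng.random(n) < 0.3] = 0

X = sm.add_constant(x)
model = ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1)), inflation="logit")
result = model.fit(maxiter=200, disp=False)
print(result.summary())
```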

  13. pLARmEB: integration of least angle regression with empirical Bayes for multilocus genome-wide association studies.

    PubMed

    Zhang, J; Feng, J-Y; Ni, Y-L; Wen, Y-J; Niu, Y; Tamba, C L; Yue, C; Song, Q; Zhang, Y-M

    2017-06-01

    Multilocus genome-wide association studies (GWAS) have become the state-of-the-art procedure to identify quantitative trait nucleotides (QTNs) associated with complex traits. However, implementation of multilocus model in GWAS is still difficult. In this study, we integrated least angle regression with empirical Bayes to perform multilocus GWAS under polygenic background control. We used an algorithm of model transformation that whitened the covariance matrix of the polygenic matrix K and environmental noise. Markers on one chromosome were included simultaneously in a multilocus model and least angle regression was used to select the most potentially associated single-nucleotide polymorphisms (SNPs), whereas the markers on the other chromosomes were used to calculate kinship matrix as polygenic background control. The selected SNPs in multilocus model were further detected for their association with the trait by empirical Bayes and likelihood ratio test. We herein refer to this method as the pLARmEB (polygenic-background-control-based least angle regression plus empirical Bayes). Results from simulation studies showed that pLARmEB was more powerful in QTN detection and more accurate in QTN effect estimation, had less false positive rate and required less computing time than Bayesian hierarchical generalized linear model, efficient mixed model association (EMMA) and least angle regression plus empirical Bayes. pLARmEB, multilocus random-SNP-effect mixed linear model and fast multilocus random-SNP-effect EMMA methods had almost equal power of QTN detection in simulation experiments. However, only pLARmEB identified 48 previously reported genes for 7 flowering time-related traits in Arabidopsis thaliana.
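    A minimal sketch of the least-angle-regression screening step described above, using scikit-learn's Lars on one chromosome's simulated SNPs; the polygenic-background whitening, empirical Bayes shrinkage, and likelihood-ratio testing stages of pLARmEB are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(5)

# Simulated genotypes for one chromosome (0/1/2 minor-allele counts) and a trait
# driven by two of the SNPs plus noise.
n_ind, n_snp = 200, 500
G = rng.integers(0, 3, size=(n_ind, n_snp)).astype(float)
y = 0.8 * G[:, 10] - 0.6 * G[:, 123] + rng.normal(0, 1.0, n_ind)

# Least angle regression selects a small set of candidate SNPs to carry into
# the multilocus model; the cap on active predictors plays the role of the
# per-chromosome screening step.
lars = Lars(n_nonzero_coefs=10).fit(G, y)
candidates = np.flatnonzero(lars.coef_)
print("candidate SNP indices:", candidates)
```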

  14. Regional health workforce monitoring as governance innovation: a German model to coordinate sectoral demand, skill mix and mobility.

    PubMed

    Kuhlmann, E; Lauxen, O; Larsen, C

    2016-11-28

    As health workforce policy is gaining momentum, data sources and monitoring systems have significantly improved in the European Union and internationally. Yet data remain poorly connected to policy-making and implementation and often do not adequately support integrated approaches. This brings the importance of governance and the need for innovation into play. The present case study introduces a regional health workforce monitor in the German Federal State of Rhineland-Palatinate and seeks to explore the capacity of monitoring to innovate health workforce governance. The monitor applies an approach from the European Network on Regional Labour Market Monitoring to the health workforce. The novel aspect of this model is an integrated, procedural approach that promotes a 'learning system' of governance based on three interconnected pillars: mixed methods and bottom-up data collection, strong stakeholder involvement with complex communication tools and shared decision- and policy-making. Selected empirical examples illustrate the approach and the tools focusing on two aspects: the connection between sectoral, occupational and mobility data to analyse skill/qualification mixes and the supply-demand matches and the connection between monitoring and stakeholder-driven policy. Regional health workforce monitoring can promote effective governance in high-income countries like Germany with overall high density of health workers but maldistribution of staff and skills. The regional stakeholder networks are cost-effective and easily accessible and might therefore be appealing also to low- and middle-income countries.

  15. The effect of stunning methods and season on muscle texture hardness in Atlantic salmon (Salmo salar L.).

    PubMed

    Merkin, Grigory V; Stien, Lars Helge; Pittman, Karin; Nortvedt, Ragnar

    2014-06-01

    Commercially collected records of Atlantic salmon (Salmo salar L.) muscle texture hardness were used to evaluate the effect of slaughter procedures and seasonality on texture quality. A database collected by Marine Harvest® contained flesh hardness records of Atlantic salmon slaughtered at processing plants in Norway from summer 2010 to summer 2011. The fish were slaughtered either by (1) percussion followed by automated bleeding ("Percussive"), (2) live chilling with exposure to carbon dioxide (CO2) followed by manual severing of the gill arches and bleeding ("CO2"), or (3) live chilling with exposure to CO2 followed by percussive stunning and automated bleeding ("CO2·percussive"). Hardness of salmon muscle cutlets was measured in Newtons (N) with a Zwick 500N materials testing machine. Hardness varied significantly over the study period (P < 0.05, mixed effect model), with the softest value of 21.2 (±0.7) N in summer 2011 and the hardest of 24.1 (±0.2) N in autumn 2010. Slaughter procedure had a significant effect on salmon muscle hardness (P < 0.05, mixed effect model): percussion followed by automated bleeding resulted in the hardest flesh (24.0 ± 0.4 N) compared with CO2 stunning (21.8 ± 0.2 N) and the combination of CO2 and percussive stunning (23.1 ± 0.15 N). CO2 is suspected as a causal factor in accelerated postmortem softening of the salmon muscle. Commercial use of CO2 in combination with live chilling results in accelerated postmortem softening of the muscle tissue in salmon and should be avoided. © 2014 Institute of Food Technologists®

  16. A time dependent mixing model to close PDF equations for transport in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Schüler, L.; Suciu, N.; Knabner, P.; Attinger, S.

    2016-10-01

    Probability density function (PDF) methods are a promising alternative to predicting the transport of solutes in groundwater under uncertainty. They make it possible to derive the evolution equations of the mean concentration and the concentration variance, used in moment methods. The mixing model, describing the transport of the PDF in concentration space, is essential for both methods. Finding a satisfactory mixing model is still an open question and due to the rather elaborate PDF methods, a difficult undertaking. Both the PDF equation and the concentration variance equation depend on the same mixing model. This connection is used to find and test an improved mixing model for the much easier to handle concentration variance. Subsequently, this mixing model is transferred to the PDF equation and tested. The newly proposed mixing model yields significantly improved results for both variance modelling and PDF modelling.

  17. [Complications after procedure for prolapse and hemorrhoids for circular hemorrhoids].

    PubMed

    Zhu, Jun; Ding, Jian-hua; Zhao, Ke; Zhang, Bin; Zhao, Yong; Tang, Hai-yan; Zhao, Yu-juan

    2012-12-01

    To investigate the perioperative and postoperative long-term complications of the procedure for prolapse and hemorrhoids (PPH) for the treatment of circular internal hemorrhoids and circular mixed hemorrhoids. A retrospective study was performed in 2152 patients with circular internal hemorrhoids and circular mixed hemorrhoids eligible for PPH from January 2002 to December 2011. The perioperative and postoperative long-term complications were recorded and assessed. The median length of follow-up was 73 months. Perioperative complications and adverse events included acute urinary retention (n=360, 16.7%) managed by temporary catheter indwelling, anastomotic bleeding (n=45, 2.1%) managed by surgical or endoscopic procedures, chronic sustained anoperineal pain (n=30, 1.4%) managed by local treatment or stapler removal, and thrombosed external hemorrhoid (n=28, 1.2%) managed by conservative treatment or resection. Long-term postoperative complications included mild fecal incontinence (n=112, 6.3%), postoperative recurrence (n=82, 4.6%), anal distention and defecatory urgency (n=50, 2.8%), and anastomotic stenosis (n=4, 0.2%). Postoperative recurrence developed in 82 patients (4.6%), 28 of whom were managed by repeat PPH and 54 by conservative treatment. PPH appears to be a safe technique for patients with circular internal hemorrhoids and circular mixed hemorrhoids.

  18. Mixed reality ventriculostomy simulation: experience in neurosurgical residency.

    PubMed

    Hooten, Kristopher G; Lister, J Richard; Lombard, Gwen; Lizdas, David E; Lampotang, Samsun; Rajon, Didier A; Bova, Frank; Murad, Gregory J A

    2014-12-01

    Medicine and surgery are turning toward simulation to improve on limited patient interaction during residency training. Many simulators today use virtual reality with augmented haptic feedback with little to no physical elements. In a collaborative effort, the University of Florida Department of Neurosurgery and the Center for Safety, Simulation & Advanced Learning Technologies created a novel "mixed" physical and virtual simulator to mimic the ventriculostomy procedure. The simulator contains all the physical components encountered for the procedure with superimposed 3-D virtual elements for the neuroanatomical structures. To introduce the ventriculostomy simulator and its validation as a necessary training tool in neurosurgical residency. We tested the simulator in more than 260 residents. An algorithm combining time and accuracy was used to grade performance. Voluntary postperformance surveys were used to evaluate the experience. Results demonstrate that more experienced residents have statistically significant better scores and completed the procedure in less time than inexperienced residents. Survey results revealed that most residents agreed that practice on the simulator would help with future ventriculostomies. This mixed reality simulator provides a real-life experience, and will be an instrumental tool in training the next generation of neurosurgeons. We have now implemented a standard where incoming residents must prove efficiency and skill on the simulator before their first interaction with a patient.

  19. Unifying error structures in commonly used biotracer mixing models.

    PubMed

    Stock, Brian C; Semmens, Brice X

    2016-10-01

    Mixing models are statistical tools that use biotracers to probabilistically estimate the contribution of multiple sources to a mixture. These biotracers may include contaminants, fatty acids, or stable isotopes, the latter of which are widely used in trophic ecology to estimate the mixed diet of consumers. Bayesian implementations of mixing models using stable isotopes (e.g., MixSIR, SIAR) are regularly used by ecologists for this purpose, but basic questions remain about when each is most appropriate. In this study, we describe the structural differences between common mixing model error formulations in terms of their assumptions about the predation process. We then introduce a new parameterization that unifies these mixing model error structures, as well as implicitly estimates the rate at which consumers sample from source populations (i.e., consumption rate). Using simulations and previously published mixing model datasets, we demonstrate that the new error parameterization outperforms existing models and provides an estimate of consumption. Our results suggest that the error structure introduced here will improve future mixing model estimates of animal diet. © 2016 by the Ecological Society of America.
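
    The following sketch is not the unified error parameterization introduced in the paper; it only illustrates the generic structure of a biotracer mixing model, using a hypothetical two-source, single-isotope setup with residual-only Gaussian error and a grid posterior for the source proportion.

    ```python
    import numpy as np

    # Minimal sketch of a two-source, single-tracer mixing model with
    # residual-only Gaussian error; all numbers are illustrative assumptions,
    # not data or the error structure from the paper.
    rng = np.random.default_rng(1)

    delta_a, delta_b = -24.0, -14.0   # assumed source mean tracer values
    sigma_resid = 1.0                 # assumed residual standard deviation
    mix_obs = rng.normal(0.7 * delta_a + 0.3 * delta_b, sigma_resid, size=20)

    p_grid = np.linspace(0.0, 1.0, 1001)              # proportion of source A
    pred = p_grid[:, None] * delta_a + (1.0 - p_grid[:, None]) * delta_b
    log_lik = -0.5 * ((mix_obs[None, :] - pred) / sigma_resid) ** 2
    log_post = log_lik.sum(axis=1)                     # flat prior on p
    post = np.exp(log_post - log_post.max())
    post /= post.sum()

    p_hat = (p_grid * post).sum()
    print(f"posterior mean proportion of source A: {p_hat:.2f}")
    ```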

  20. Lagrangian mixed layer modeling of the western equatorial Pacific

    NASA Technical Reports Server (NTRS)

    Shinoda, Toshiaki; Lukas, Roger

    1995-01-01

    Processes that control the upper ocean thermohaline structure in the western equatorial Pacific are examined using a Lagrangian mixed layer model. The one-dimensional bulk mixed layer model of Garwood (1977) is integrated along the trajectories derived from a nonlinear 1 1/2 layer reduced gravity model forced with actual wind fields. The Global Precipitation Climatology Project (GPCP) data are used to estimate surface freshwater fluxes for the mixed layer model. The wind stress data which forced the 1 1/2 layer model are used for the mixed layer model. The model was run for the period 1987-1988. This simple model is able to simulate the isothermal layer below the mixed layer in the western Pacific warm pool and its variation. The subduction mechanism hypothesized by Lukas and Lindstrom (1991) is evident in the model results. During periods of strong South Equatorial Current, the warm and salty mixed layer waters in the central Pacific are subducted below the fresh shallow mixed layer in the western Pacific. However, this subduction mechanism is not evident when upwelling Rossby waves reach the western equatorial Pacific, or when a prominent deepening of the mixed layer occurs there due to episodes of strong wind and light precipitation associated with the El Nino-Southern Oscillation. Comparison of the results between the Lagrangian mixed layer model and a locally forced Eulerian mixed layer model indicated that horizontal advection of salty waters from the central Pacific strongly affects the upper ocean salinity variation in the western Pacific, and that this advection is necessary to maintain the upper ocean thermohaline structure in this region.
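
    As a rough illustration only, the sketch below integrates a drastically simplified fixed-depth bulk heat and salt budget along a trajectory; it is not the Garwood (1977) TKE closure used in the study, and the forcing values (heat flux, evaporation minus precipitation, mixed-layer depth) are assumed.

    ```python
    # Minimal sketch of a fixed-depth bulk mixed-layer heat/salt budget along a
    # trajectory. This is a strong simplification of the bulk model used in the
    # study; all forcing values are illustrative assumptions.
    rho, cp = 1025.0, 3990.0   # seawater density [kg/m3] and heat capacity [J/kg/K]
    h = 20.0                   # mixed-layer depth [m], held fixed in this sketch
    dt = 3600.0                # time step [s]
    n_steps = 24 * 30          # one month of hourly steps

    T, S = 29.0, 34.0          # initial mixed-layer temperature [C] and salinity [psu]
    for _ in range(n_steps):
        # In a real Lagrangian run these fluxes would vary along the trajectory.
        q_net = 50.0           # net surface heat flux into the ocean [W/m2] (assumed)
        emp = -2.0e-7          # evaporation minus precipitation [m/s]; negative = net rain
        T += q_net / (rho * cp * h) * dt
        S += S * emp / h * dt  # freshening when precipitation exceeds evaporation

    print(f"mixed-layer T = {T:.2f} C, S = {S:.3f} psu after {n_steps} hourly steps")
    ```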

  1. Photosynthetically active radiation measurements in pure pine and mixed pine forests in Poland

    Treesearch

    Jaroslaw Smialkowski

    1998-01-01

    Photosynthetically active radiation (PAR) has been measured in pure pine and mixed pine forests on 15 sites in two transects in Poland: the "climatic" (from the western to the eastern border), and the "Silesian" (from the most to the least polluted part of the country). PAR was measured by using the standard procedure developed by the USDA Forest...

  2. Students' General Knowledge of the Learning Process: A Mixed Methods Study Illustrating Integrated Data Collection and Data Consolidation

    ERIC Educational Resources Information Center

    van Velzen, Joke H.

    2018-01-01

    There were two purposes for this mixed methods study: to investigate (a) the realistic meaning of awareness and understanding as the underlying constructs of general knowledge of the learning process and (b) a procedure for data consolidation. The participants were 11th-grade high school and first-year university students. Integrated data…

  3. RCRA Refresher Self-Study, Course 28582

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Lewis Edward

    Federal and state regulations require hazardous and mixed waste facility workers at treatment and storage facilities (TSFs) and <90-day accumulation areas to be trained in hazardous and mixed waste management. This course will refamiliarize and update <90-day accumulation area workers, TSF workers, and supervisors of TSF workers regarding waste identification, pollution prevention, storage area requirements, emergency response procedures, and record-keeping requirements.

  4. RCRA Personnel Training, Course 7488

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, Lewis Edward

    Federal and state regulations require hazardous and mixed waste facility workers at treatment and storage facilities (TSFs) and <90-day accumulation areas to be trained in hazardous and mixed waste management. This course will refamiliarize and update <90-day accumulation area workers, TSF workers, and supervisors of TSF workers regarding waste identification, pollution prevention, storage area requirements, emergency response procedures, and record-keeping requirements.

  5. Mixed-Mode Bending Method for Delamination Testing

    NASA Technical Reports Server (NTRS)

    Reeder, James R.; Crews, John R., Jr.

    1990-01-01

    A mixed-mode delamination test procedure was developed combining double cantilever beam (DCB) mode I loading and end-notched flexure (ENF) mode II loading on a split unidirectional laminate. By loading with a lever, a single applied load simultaneously produces mode I and mode II bending loads on the specimen. This mixed-mode bending (MMB) test was analyzed using both finite-element procedures and beam theory to calculate the mode I and mode II components of strain-energy release rate, G(sub I) and G(sub II), respectively. A wide range of G(sub I)/G(sub II) ratios can be produced by varying the load position on the lever. As the delamination extended, the G(sub I)/G(sub II) ratios varied by less than 5%. Beam theory equations agreed closely with the finite-element results and provide a basis for selecting G(sub I)/G(sub II) test ratios and for computing the mode I and mode II components of measured delamination toughness. The MMB test was demonstrated using AS4/PEEK (APC2) unidirectional laminates. The MMB test introduced in this paper is rather simple and is believed to offer several advantages over most current mixed-mode tests.
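
    A minimal sketch of the simple beam-theory partition described in the abstract is given below, neglecting the crack-length and shear corrections used in practice; the load-partition relations, geometry, and modulus values are stated as assumptions, not values taken from the paper.

    ```python
    # Minimal sketch of a simple beam-theory partition of the MMB lever load
    # into DCB-like (mode I) and ENF-like (mode II) components, without the
    # crack-length/shear corrections used in practice. Geometry and material
    # values are illustrative assumptions.
    def mmb_beam_theory(P, c, L, a, b, h, E11):
        """P: lever load [N]; c: lever length [mm]; L: half-span [mm];
        a: delamination length [mm]; b: width [mm]; 2h: specimen thickness [mm];
        E11: axial modulus [N/mm^2]. Assumed valid only for c >= L/3
        (c = L/3 gives pure mode II). G is returned in N/mm (= kJ/m^2)."""
        P_I = (3.0 * c - L) / (4.0 * L) * P      # opening (DCB-like) load pair
        P_II = (c + L) / L * P                   # bending (ENF-like) midspan load
        G_I = 12.0 * P_I**2 * a**2 / (b**2 * h**3 * E11)
        G_II = 9.0 * P_II**2 * a**2 / (16.0 * b**2 * h**3 * E11)
        return G_I, G_II

    G_I, G_II = mmb_beam_theory(P=100.0, c=60.0, L=50.0, a=25.0, b=25.0,
                                h=1.5, E11=129_000.0)
    print(f"G_I = {G_I:.4f} N/mm, G_II = {G_II:.4f} N/mm, "
          f"G_I/G_II = {G_I / G_II:.2f}")
    ```

    Changing the lever length c shifts the G_I/G_II ratio, which is the mechanism the abstract describes for covering a range of mode mixities with a single fixture.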

  6. Manure sampling procedures and nutrient estimation by the hydrometer method for gestation pigs.

    PubMed

    Zhu, Jun; Ndegwa, Pius M; Zhang, Zhijian

    2004-05-01

    Three manure agitation procedures (vertical mixing, horizontal mixing, and no mixing) were examined in this study to determine their efficacy in producing a representative manure sample. The total solids content of manure from gestation pigs was found to be well correlated with the total nitrogen (TN) and total phosphorus (TP) concentrations in the manure, with highly significant correlation coefficients of 0.988 and 0.994, respectively. Linear correlations were also observed between the TN and TP contents and the manure specific gravity (correlation coefficients: 0.991 and 0.987, respectively). Therefore, it may be inferred that the nutrients in pig manure can be estimated with reasonable accuracy by measuring the specific gravity of the liquid manure. A rapid testing method for manure nutrient contents (TN and TP) using a soil hydrometer was also evaluated. The results showed that the estimation error increased from +/-10% to +/-30% as TN decreased from 1000 to 100 ppm and TP from 700 to 50 ppm. Data also showed that the hydrometer readings had to be taken within 10 s after mixing to avoid reading drift in specific gravity due to the settling of manure solids.
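
    The sketch below illustrates the idea behind the hydrometer method, estimating TN and TP from a linear fit against specific gravity; the calibration pairs and the field reading are hypothetical placeholders, not data from the study.

    ```python
    import numpy as np

    # Minimal sketch of the hydrometer idea: calibrate linear relations between
    # specific gravity and nutrient content, then estimate TN and TP from a
    # field reading. All numbers are hypothetical placeholders.
    sg = np.array([1.002, 1.006, 1.010, 1.014, 1.018])   # specific gravity
    tn = np.array([150.0, 320.0, 510.0, 700.0, 890.0])   # TN [ppm] (hypothetical)
    tp = np.array([60.0, 190.0, 330.0, 460.0, 600.0])    # TP [ppm] (hypothetical)

    tn_fit = np.polyfit(sg, tn, 1)   # slope and intercept of TN vs. specific gravity
    tp_fit = np.polyfit(sg, tp, 1)

    sg_reading = 1.012               # hydrometer reading taken within ~10 s of mixing
    print(f"estimated TN: {np.polyval(tn_fit, sg_reading):.0f} ppm")
    print(f"estimated TP: {np.polyval(tp_fit, sg_reading):.0f} ppm")
    ```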

  7. Deficits in complex motor functions, despite no evidence of procedural learning deficits, among HIV+ individuals with history of substance dependence

    PubMed Central

    Gonzalez, Raul; Jacobus, Joanna; Amatya, Anup K.; Quartana, Phillip J.; Vassileva, Jasmin; Martin, Eileen M.

    2008-01-01

    HIV and drugs of abuse affect common neural systems underlying procedural memory, including the striatum. We compared performance of 48 HIV seropositive (HIV+) and 48 HIV seronegative (HIV−) participants with history of cocaine and/or heroin dependence across multiple Trial Blocks of three procedural learning (PL) tasks: Rotary Pursuit (RPT), Mirror Star Tracing (MST), and Weather Prediction (WPT). Groups were well matched on demographic, psychiatric, and substance use parameters, and all participants were verified abstinent from drugs. Mixed model ANOVAs revealed that the HIV+ group performed more poorly across all tasks, with a significant main effect of HIV serostatus observed on the MST and a trend toward significance obtained for the RPT. No significant differences were observed on the WPT. Both groups demonstrated significant improvements in performance across all three PL tasks. Importantly, no significant Serostatus X Trial Block interactions were observed on any task. Thus, the HIV+ group tended to perform worse than the HIV− group across all trial blocks of PL tasks with motor demands, but showed no differences in their rate of improvement across all tasks. These findings are consistent with HIV-associated deficits in complex motor skills, but not in procedural learning. PMID:18999351
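
    For readers unfamiliar with this kind of analysis, the sketch below fits a mixed model with a random subject intercept and a serostatus-by-trial-block interaction to simulated data; the column names, effect sizes, and the use of statsmodels are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Minimal sketch of a repeated-measures mixed model (random subject
    # intercept, serostatus x trial-block fixed effects) on simulated data.
    rng = np.random.default_rng(2)

    n_subj, n_blocks = 96, 3
    subject = np.repeat(np.arange(n_subj), n_blocks)
    block = np.tile(np.arange(1, n_blocks + 1), n_subj)
    serostatus = np.repeat(rng.permutation([0] * 48 + [1] * 48), n_blocks)

    score = (50.0 - 3.0 * serostatus                      # assumed group effect
             + 4.0 * block                                # assumed practice effect
             + np.repeat(rng.normal(0, 5, n_subj), n_blocks)  # subject intercepts
             + rng.normal(0, 3, n_subj * n_blocks))       # residual noise
    df = pd.DataFrame({"score": score, "block": block,
                       "serostatus": serostatus, "subject": subject})

    model = smf.mixedlm("score ~ C(serostatus) * block", df, groups=df["subject"])
    print(model.fit().summary())
    ```

    In this layout, a significant main effect of serostatus with a non-significant serostatus-by-block interaction corresponds to the pattern reported in the abstract: lower performance overall, but a comparable rate of improvement.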

  8. Ambulatory Surgery Centers and Their Intended Effects on Outpatient Surgery.

    PubMed

    Hollenbeck, Brent K; Dunn, Rodney L; Suskind, Anne M; Strope, Seth A; Zhang, Yun; Hollingsworth, John M

    2015-10-01

    To assess the impact of ambulatory surgery centers (ASCs) on rates of hospital-based outpatient procedures and adverse events. Twenty percent national sample of Medicare beneficiaries. A retrospective study of beneficiaries undergoing outpatient surgery between 2001 and 2010. Health care markets were sorted into three groups: those with ASCs, those without ASCs, and those where one opened for the first time. Generalized linear mixed models were used to assess the impact of ASC opening on rates of hospital-based outpatient surgery, perioperative mortality, and hospital admission. Adjusted hospital-based outpatient surgery rates declined by 7 percent, or from 2,333 to 2,163 procedures per 10,000 beneficiaries, in markets where an ASC opened for the first time (p < .001 for test between slopes). Within these markets, procedure use at ASCs outpaced the decline observed in the hospital setting. Perioperative mortality and admission rates remained flat after ASC opening (both p > .4 for test between slopes). The opening of an ASC in a Hospital Service Area resulted in a decline in hospital-based outpatient surgery without increasing mortality or admission. In markets where facilities opened, procedure growth at ASCs was greater than the decline in outpatient surgery use at their respective hospitals. © Health Research and Educational Trust.
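
    The sketch below shows one way such a "test between slopes" can be framed: a mixed model for market-level rates with a year-by-ASC interaction and a random market intercept. The simulated data, column names, and use of statsmodels are assumptions for illustration only, not the study's model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Minimal sketch of a slope-comparison mixed model on simulated
    # market-level outpatient-surgery rates; all values are assumptions.
    rng = np.random.default_rng(3)

    n_markets = 200
    rows = []
    for m in range(n_markets):
        asc_opened = int(m < 80)                    # markets where an ASC opened
        base = rng.normal(2300, 150)                # baseline rate per 10,000
        for t in range(10):                         # years 2001-2010
            slope = -17.0 if asc_opened else -2.0   # assumed steeper decline
            rate = base + slope * t + rng.normal(0, 40)
            rows.append((m, t, asc_opened, rate))
    df = pd.DataFrame(rows, columns=["market", "year", "asc", "rate"])

    fit = smf.mixedlm("rate ~ year * asc", df, groups=df["market"]).fit()
    # The year:asc coefficient is the difference in slopes between market types.
    print(fit.params[["year", "year:asc"]])
    ```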

  9. Miscibility and Thermodynamics of Mixing of Different Models of Formamide and Water in Computer Simulation.

    PubMed

    Kiss, Bálint; Fábián, Balázs; Idrissi, Abdenacer; Szőri, Milán; Jedlovszky, Pál

    2017-07-27

    The thermodynamic changes that occur upon mixing five models of formamide and three models of water, including the miscibility of these model combinations itself, are studied by performing Monte Carlo computer simulations using an appropriately chosen thermodynamic cycle and the method of thermodynamic integration. The results show that the mixing of these two components is close to ideal, as both the energy and entropy of mixing turn out to be rather close to the ideal terms in the entire composition range. Concerning the energy of mixing, the OPLS/AA_mod model of formamide behaves in a qualitatively different way than the other models considered: it yields negative energy of mixing values, while the other models yield positive values, in combination with all three water models considered. Experimental data support the latter behavior. Although the Helmholtz free energy of mixing always turns out to be negative in the entire composition range, the majority of the model combinations tested either show limited miscibility or, at least, approach the miscibility limit very closely at certain compositions. Concerning both the miscibility and the energy of mixing of these model combinations, we recommend the use of the combination of the CHARMM formamide and TIP4P water models in simulations of water-formamide mixtures.
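
    The following sketch shows the thermodynamic-integration step on which such free-energy estimates rest: numerically integrating the ensemble average of dU/dλ over the coupling parameter. The ⟨dU/dλ⟩ values below are synthetic placeholders standing in for Monte Carlo output, not results from the paper.

    ```python
    import numpy as np

    # Minimal sketch of thermodynamic integration: Delta A = integral over
    # lambda of <dU/dlambda>. The profile below is a synthetic placeholder.
    lam = np.linspace(0.0, 1.0, 11)                   # coupling-parameter grid
    dU_dlam = -12.0 + 30.0 * lam - 15.0 * lam**2      # hypothetical <dU/dlambda> [kJ/mol]

    # Trapezoidal quadrature over the lambda grid.
    delta_A = float(np.sum(0.5 * (dU_dlam[1:] + dU_dlam[:-1]) * np.diff(lam)))
    print(f"Delta A from thermodynamic integration: {delta_A:.2f} kJ/mol")
    ```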

  10. Patient out-of-pocket spending in cranial neurosurgery: single-institution analysis of 6569 consecutive cases and literature review.

    PubMed

    Yoon, Seungwon; Mooney, Michael A; Bohl, Michael A; Sheehy, John P; Nakaji, Peter; Little, Andrew S; Lawton, Michael T

    2018-05-01

    OBJECTIVE With drastic changes to the health insurance market, patient cost sharing has significantly increased in recent years. However, the patient financial burden, or out-of-pocket (OOP) costs, for surgical procedures is poorly understood. The goal of this study was to analyze patient OOP spending in cranial neurosurgery and identify drivers of OOP spending growth. METHODS For 6569 consecutive patients who underwent cranial neurosurgery from 2013 to 2016 at the authors' institution, the authors created univariate and multivariate mixed-effects models to investigate the effect of patient demographic and clinical factors on patient OOP spending. The authors examined OOP payments stratified into 10 subsets of case categories and created a generalized linear model to study the growth of OOP spending over time. RESULTS In the multivariate model, case categories (craniotomy for pain, tumor, and vascular lesions), commercial insurance, and out-of-network plans were significant predictors of higher OOP payments for patients (all p < 0.05). Patient spending varied substantially across procedure types, with patients undergoing craniotomy for pain ($1151 ± $209) having the highest mean OOP payments. On average, commercially insured patients spent nearly twice as much in OOP payments as the overall population. From 2013 to 2016, the mean patient OOP spending increased 17%, from $598 to $698 per patient encounter. Commercially insured patients experienced more significant growth in OOP spending, with a cumulative rate of growth of 42% ($991 in 2013 to $1403 in 2016). CONCLUSIONS Even after controlling for inflation, case-mix differences, and partial fiscal periods, OOP spending for cranial neurosurgery patients significantly increased from 2013 to 2016. The mean OOP spending for commercially insured neurosurgical patients exceeded $1400 in 2016, with an average annual growth rate of 13%. As patient cost sharing in health insurance plans becomes more prevalent, patients and providers must consider the potential financial burden for patients receiving specialized neurosurgical care.

  11. Meta-Analysis Comparing Complete Revascularization Versus Infarct-Related Only Strategies for Patients With ST-Segment Elevation Myocardial Infarction and Multivessel Coronary Artery Disease.

    PubMed

    Shah, Rahman; Berzingi, Chalak; Mumtaz, Mubashir; Jasper, John B; Goswami, Rohan; Morsy, Mohamed S; Ramanathan, Kodangudi B; Rao, Sunil V

    2016-11-15

    Several recent randomized controlled trials (RCTs) demonstrated better outcomes with multivessel complete revascularization (CR) than with infarct-related artery-only revascularization (IRA-OR) in patients with ST-segment elevation myocardial infarction. It is unclear whether CR should be performed during the index procedure (IP) at the time of primary percutaneous coronary intervention (PCI) or as a staged procedure (SP). Therefore, we performed a pairwise meta-analysis using a random-effects model and network meta-analysis using mixed-treatment comparison models to compare the efficacies of 3 revascularization strategies (IRA-OR, CR-IP, and CR-SP). Scientific databases and websites were searched to find RCTs. Data from 9 RCTs involving 2,176 patients were included. In mixed-comparison models, CR-IP decreased the risk of major adverse cardiac events (MACEs; odds ratio [OR] 0.36, 95% CI 0.25 to 0.54), recurrent myocardial infarction (MI; OR 0.50, 95% CI 0.24 to 0.91), revascularization (OR 0.24, 95% CI 0.15 to 0.38), and cardiovascular (CV) mortality (OR 0.44, 95% CI 0.20 to 0.87). However, only the rates of MACEs, MI, and CV mortality were lower with CR-SP than with IRA-OR. Similarly, in direct-comparison meta-analysis, the risk of MI was 66% lower with CR-IP than with IRA-OR, but this advantage was not seen with CR-SP. There were no differences in all-cause mortality between the 3 revascularization strategies. In conclusion, this meta-analysis shows that in patients with ST-segment elevation myocardial infarction and multivessel coronary artery disease, CR either during primary PCI or as an SP results in lower occurrences of MACE, revascularization, and CV mortality than IRA-OR. CR performed during primary PCI also results in lower rates of recurrent MI and seems the most efficacious revascularization strategy of the 3. Published by Elsevier Inc.
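
    As background, the sketch below performs random-effects (DerSimonian-Laird) pooling of odds ratios on the log scale, the building block of the pairwise arm of such a meta-analysis; the per-trial odds ratios and standard errors are hypothetical placeholders, not the trial data analysed in the paper, and the network (mixed-treatment) component is not reproduced.

    ```python
    import numpy as np

    # Minimal sketch of random-effects (DerSimonian-Laird) pooling of odds
    # ratios on the log scale; all inputs are hypothetical placeholders.
    or_i = np.array([0.45, 0.60, 0.30, 0.55, 0.40])   # hypothetical trial ORs
    se_i = np.array([0.30, 0.25, 0.40, 0.35, 0.28])   # SE of log(OR), hypothetical

    y = np.log(or_i)
    w = 1.0 / se_i**2                                  # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                    # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se_i**2 + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))

    lo, hi = np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re)
    print(f"pooled OR = {np.exp(y_re):.2f} (95% CI {lo:.2f} to {hi:.2f})")
    ```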

  12. Endoscopic Histologic Mapping of a Mixed Germ Cell Pineal Tumor.

    PubMed

    Velásquez, Carlos; Rivero-Garvía, Mónica; Rivas, Eloy; Cañizares, María de Los Angeles; Mayorga-Buiza, María José; Márquez-Rivas, Javier

    2016-11-01

    The accurate histologic diagnosis of germ cell tumors in the pineal region is a keystone for determining the best treatment strategy and prognosis. This situation poses a challenge for the neuropathologist, considering the lack of a standardized procedure for obtaining biopsy samples, which results in few and small specimens that are not suitable for diagnosis. We report a case in which a pineal region mixed germ cell tumor was accurately diagnosed by performing histologic mapping through a dual burr-hole endoscopic approach. The technical pitfalls and other considerations necessary for obtaining an accurate diagnosis in this tumor subgroup are specified. In addition, the histologic analysis regarding the sampling technique used is described. The supraorbital frontal endoscopic approach enables the surgeon to perform histologic mapping of pineal region tumors, allowing standardization of the procedure used to obtain the specimens. This approach could result in a more accurate diagnosis, especially in mixed germ cell neoplasms. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. High speed jet noise research at NASA Lewis

    NASA Astrophysics Data System (ADS)

    Krejsa, Eugene A.; Cooper, B. A.; Kim, C. M.; Khavaran, Abbas

    1992-04-01

    The source noise portion of the High Speed Research Program at NASA LeRC is focused on jet noise reduction. A number of jet noise reduction concepts are being investigated. These include two concepts, the Pratt & Whitney ejector suppressor nozzle and the General Electric (GE) 2D-CD mixer ejector nozzle, that rely on ejectors to entrain significant amounts of ambient air to mix with the engine exhaust to reduce the final exhaust velocity. Another concept, the GE 'Flade Nozzle' uses fan bypass air at takeoff to reduce the mixed exhaust velocity and to create a fluid shield around a mixer suppressor. Additional concepts are being investigated at Georgia Tech Research Institute and at NASA LeRC. These will be discussed in more detail in later figures. Analytical methods for jet noise prediction are also being developed. Efforts in this area include upgrades to the GE MGB jet mixing noise prediction procedure, evaluation of shock noise prediction procedures, and efforts to predict jet noise directly from the unsteady Navier-Stokes equation.

  14. High speed jet noise research at NASA Lewis

    NASA Technical Reports Server (NTRS)

    Krejsa, Eugene A.; Cooper, B. A.; Kim, C. M.; Khavaran, Abbas

    1992-01-01

    The source noise portion of the High Speed Research Program at NASA LeRC is focused on jet noise reduction. A number of jet noise reduction concepts are being investigated. These include two concepts, the Pratt & Whitney ejector suppressor nozzle and the General Electric (GE) 2D-CD mixer ejector nozzle, that rely on ejectors to entrain significant amounts of ambient air to mix with the engine exhaust to reduce the final exhaust velocity. Another concept, the GE 'Flade Nozzle' uses fan bypass air at takeoff to reduce the mixed exhaust velocity and to create a fluid shield around a mixer suppressor. Additional concepts are being investigated at Georgia Tech Research Institute and at NASA LeRC. These will be discussed in more detail in later figures. Analytical methods for jet noise prediction are also being developed. Efforts in this area include upgrades to the GE MGB jet mixing noise prediction procedure, evaluation of shock noise prediction procedures, and efforts to predict jet noise directly from the unsteady Navier-Stokes equation.

  15. Correlation between the Hurst exponent and the maximal Lyapunov exponent: Examining some low-dimensional conservative maps

    NASA Astrophysics Data System (ADS)

    Tarnopolski, Mariusz

    2018-01-01

    The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman rank coefficients ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution based on the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and then, using 2-2.4 × 10^5 mLEs, the HEs were retrieved. The ML procedure made it possible to reproduce the structure of the mixed spaces in great detail.
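
    The sketch below iterates the Chirikov standard map and estimates the maximal Lyapunov exponent by Benettin-style tangent-vector renormalisation; the nonlinearity parameter and initial condition are arbitrary illustrative choices, and the Hurst-exponent side of the analysis is not reproduced here.

    ```python
    import numpy as np

    # Minimal sketch: Chirikov standard map with a tangent-map (Benettin-style)
    # estimate of the maximal Lyapunov exponent. K and the initial condition
    # are arbitrary illustrative choices.
    def standard_map_mle(K, p0, theta0, n_iter=100_000):
        p, theta = p0, theta0
        v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # tangent vector
        log_sum = 0.0
        for _ in range(n_iter):
            # Jacobian of the map evaluated at the current (pre-update) theta.
            J = np.array([[1.0, K * np.cos(theta)],
                          [1.0, 1.0 + K * np.cos(theta)]])
            p = (p + K * np.sin(theta)) % (2.0 * np.pi)
            theta = (theta + p) % (2.0 * np.pi)
            v = J @ v
            norm = np.linalg.norm(v)
            log_sum += np.log(norm)
            v /= norm                              # renormalise the tangent vector
        return log_sum / n_iter

    print(f"mLE estimate, K = 6.0 (chaotic):       {standard_map_mle(6.0, 1.5, 2.0):.3f}")
    print(f"mLE estimate, K = 0.5 (mostly regular): {standard_map_mle(0.5, 1.5, 2.0):.3f}")
    ```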

  16. Models for nearly every occasion: Part III - One box decreasing emission models.

    PubMed

    Hewett, Paul; Ganser, Gary H

    2017-11-01

    New one box "well-mixed room" decreasing emission (DE) models are introduced that allow for local exhaust, or local exhaust with filtered return, as well as the recirculation of a filtered (or cleaned) portion of the general room ventilation. For each control device scenario, a steady state and a transient model are presented. The transient equations predict the concentration at any time t after the application of a known mass of a volatile substance to a surface, and can be used to predict the task exposure profile, the average task exposure, and peak and short-term exposures. The steady state equations can be used to predict the "average concentration per application" that is reached whenever the substance is repeatedly applied. Whenever the beginning and end concentrations are expected to be zero (or near zero), the steady state equations can also be used to predict the average concentration for a single task with multiple applications during the task, or even a series of such tasks. The transient equations should be used whenever these criteria cannot be met. A structured calibration procedure is proposed that utilizes a mass balance approach. Depending upon the DE model selected, one or more calibration measurements are collected. Using rearranged versions of the steady state equations, estimates of the model variables (e.g., the mass of the substance applied during each application, the local exhaust capture efficiency, and the various cleaning or filtration efficiencies) can be calculated. A new procedure is proposed for estimating the emission rate constant.
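
    A minimal sketch of the simplest member of this model family, a one-box well-mixed room with an exponentially decreasing emission rate and no local exhaust or filtered return, is given below; the room, ventilation, and emission parameters are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Minimal sketch of a generic one-box decreasing-emission model:
    # dC/dt = G(t)/V - (Q/V)*C with G(t) = alpha*M*exp(-alpha*t) and C(0) = 0.
    # All scenario values are illustrative assumptions. The closed-form
    # solution below requires Q/V != alpha.
    V = 100.0        # room volume [m3]
    Q = 20.0         # general ventilation rate [m3/min]
    M = 50_000.0     # mass of volatile substance applied [mg]
    alpha = 0.1      # emission decay constant [1/min]

    t = np.linspace(0.0, 120.0, 2401)   # 2-hour task on a 3-second grid
    C = alpha * M / (Q - alpha * V) * (np.exp(-alpha * t) - np.exp(-Q * t / V))

    print(f"peak concentration: {C.max():.1f} mg/m3 at t = {t[C.argmax()]:.1f} min")
    print(f"120-min time-weighted average: {C.mean():.1f} mg/m3")
    ```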

  17. 75 FR 37311 - Airplane and Engine Certification Requirements in Supercooled Large Drop, Mixed Phase, and Ice...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-29

    ... maximum time interval between any engine run-ups from idle and the minimum ambient temperature associated with that run-up interval. This limitation is necessary because we do not currently have any specific requirements for run-up procedures for engine ground operation in icing conditions. The engine run-up procedure...

  18. A flavor symmetry model for bilarge leptonic mixing and the lepton masses

    NASA Astrophysics Data System (ADS)

    Ohlsson, Tommy; Seidl, Gerhart

    2002-11-01

    We present a model for leptonic mixing and the lepton masses based on flavor symmetries and higher-dimensional mass operators. The model predicts bilarge leptonic mixing (i.e., the mixing angles θ12 and θ23 are large and the mixing angle θ13 is small) and an inverted hierarchical neutrino mass spectrum. Furthermore, it approximately yields the experimental hierarchical mass spectrum of the charged leptons. The obtained values for the leptonic mixing parameters and the neutrino mass squared differences are all in agreement with atmospheric neutrino data, the Mikheyev-Smirnov-Wolfenstein large mixing angle solution of the solar neutrino problem, and consistent with the upper bound on the reactor mixing angle. Thus, we have a large, but not close to maximal, solar mixing angle θ12, a nearly maximal atmospheric mixing angle θ23, and a small reactor mixing angle θ13. In addition, the model predicts θ12 ≈ π/4 - θ13.
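
    For orientation, the sketch below builds the standard-parameterization PMNS matrix (with the CP phase set to zero) from angles consistent with the quoted pattern, including θ12 ≈ π/4 - θ13; the numerical angle values are illustrative assumptions, not fits from the paper.

    ```python
    import numpy as np

    # Minimal sketch: standard-parameterization PMNS matrix with zero CP phase,
    # evaluated at illustrative angles following the pattern quoted above.
    def pmns(theta12, theta23, theta13):
        s12, c12 = np.sin(theta12), np.cos(theta12)
        s23, c23 = np.sin(theta23), np.cos(theta23)
        s13, c13 = np.sin(theta13), np.cos(theta13)
        return np.array([
            [c12 * c13,                    s12 * c13,                    s13],
            [-s12 * c23 - c12 * s23 * s13, c12 * c23 - s12 * s23 * s13,  s23 * c13],
            [s12 * s23 - c12 * c23 * s13,  -c12 * s23 - s12 * c23 * s13, c23 * c13],
        ])

    theta13 = np.radians(8.5)        # small reactor angle (assumed value)
    theta12 = np.pi / 4 - theta13    # the relation predicted by the model
    theta23 = np.pi / 4              # nearly maximal atmospheric angle

    U = pmns(theta12, theta23, theta13)
    print(np.round(U, 3))
    print("unitarity check:", np.allclose(U @ U.T, np.eye(3)))
    ```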

  19. Cold fronts and reservoir limnology: an integrated approach towards the ecological dynamics of freshwater ecosystems.

    PubMed

    Tundisi, J G; Matsumura-Tundisi, T; Pereira, K C; Luzia, A P; Passerini, M D; Chiba, W A C; Morais, M A; Sebastien, N Y

    2010-10-01

    In this paper the authors discuss the effects of cold fronts on the dynamics of freshwater ecosystems in southeastern South America. Cold fronts originating from the Antarctic arrive with a roughly monthly frequency and promote turbulence and vertical mixing in reservoirs, with the consequence of homogenizing the distributions of nutrients, dissolved oxygen, and temperature. Weak thermoclines and the atelomixis process immediately before, during, and after the passage of cold fronts interfere with phytoplankton succession in reservoirs. Cyanobacteria blooms in eutrophic reservoirs are frequently connected with periods of stratification and stability of the water column. Cold fronts over Amazon and Pantanal lakes may produce fish kills during the "friagem" process and its associated mixing events. Further studies will try to implement a model to predict the impact of cold fronts and to prepare management procedures for coping with cyanobacteria blooms during warm, stable water column periods. Changes in the water quality of reservoirs are expected during circulation periods caused by cold fronts.

  20. Combining cationic and anionic mixed-mode sorbents in a single cartridge to extract basic and acidic pharmaceuticals simultaneously from environmental waters.

    PubMed

    Salas, Daniela; Borrull, Francesc; Fontanals, Núria; Marcé, Rosa Maria

    2018-01-01

    The aim of the present study is to broaden the applications of mixed-mode ion-exchange solid-phase extraction sorbents to extract both basic and acidic compounds simultaneously by combining the sorbents in a single cartridge and developing a simplified extraction procedure. Four different cartridges containing negative and positive charges in the same configuration were evaluated and compared for the extraction of a group of basic, neutral, and acidic pharmaceuticals selected as model compounds. After a thorough optimization of the extraction conditions, all four cartridges were shown to be capable of retaining basic and acidic pharmaceuticals simultaneously through ionic interactions, allowing the introduction of a washing step with 15 mL of methanol to eliminate interferences retained by hydrophobic interactions. Using the best combined cartridge, a method was developed, validated, and further applied to environmental waters, demonstrating that it is promising for the extraction of basic and acidic compounds from very complex samples.
