Science.gov

Sample records for activation analysis procedure

  1. Advanced liquid and solid extraction procedures for ultratrace determination of rhenium by radiochemical neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Mizera, J.; Kučera, J.; Řanda, Z.; Lučaníková, M.

    2006-01-01

    Radiochemical neutron activation analysis (RNAA) procedures for determination of Re at the ultratrace level, based on the use of liquid-liquid extraction (LLE) and extraction chromatography (EXC), have been developed. Two different LLE procedures, using either 2-butanone or tetraphenylarsonium chloride in CHCl3, were employed depending on the mode of sample decomposition. EXC employed new solid extractant materials prepared by incorporating liquid trioctylmethylammonium chloride into an inert polyacrylonitrile matrix. The RNAA procedures presented have been compared and applied to Re determination in several biological and environmental reference materials.

  2. A neutron activation analysis procedure for the determination of uranium, thorium and potassium in geologic samples

    USGS Publications Warehouse

    Aruscavage, P. J.; Millard, H.T.

    1972-01-01

    A neutron activation analysis procedure was developed for the determination of uranium, thorium and potassium in basic and ultrabasic rocks. The three elements are determined in the same 0.5-g sample following a 30-min irradiation in a thermal neutron flux of 2×10¹² n·cm⁻²·sec⁻¹. Following radiochemical separation, the nuclides 239U (T = 23.5 min), 233Th (T = 22.2 min) and 42K (T = 12.36 h) are measured by β-counting. A computer program is used to resolve the decay curves, which are complex owing to contamination and the growth of daughter activities. The method was used to determine uranium, thorium and potassium in the U.S. Geological Survey standard rocks DTS-1, PCC-1 and BCR-1. For 0.5-g samples the limits of detection for uranium, thorium and potassium are 0.7, 1.0 and 10 ppb, respectively. © 1972 Akadémiai Kiadó.
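
    The decay-curve resolution step can be sketched numerically: once the half-lives are known, a multi-component decay curve is linear in the unknown component activities, so they can be recovered by linear least squares. The sketch below uses synthetic, noise-free counts and invented activities for two well-separated components (half-lives matching 239U and 42K); it is not the USGS program.

```python
import math

# Two-component decay curve: counts(t) = A1*exp(-l1*t) + A2*exp(-l2*t)
l1 = math.log(2) / 23.5            # 239U decay constant, 1/min (23.5 min)
l2 = math.log(2) / (12.36 * 60)    # 42K decay constant, 1/min (12.36 h)

A1_true, A2_true = 500.0, 200.0    # hypothetical initial count rates
times = [0.0, 10.0, 20.0, 40.0, 60.0, 120.0]
counts = [A1_true * math.exp(-l1 * t) + A2_true * math.exp(-l2 * t)
          for t in times]

# Normal equations of the two-parameter linear least-squares problem
s11 = sum(math.exp(-2 * l1 * t) for t in times)
s12 = sum(math.exp(-(l1 + l2) * t) for t in times)
s22 = sum(math.exp(-2 * l2 * t) for t in times)
b1 = sum(c * math.exp(-l1 * t) for c, t in zip(counts, times))
b2 = sum(c * math.exp(-l2 * t) for c, t in zip(counts, times))

det = s11 * s22 - s12 * s12
A1 = (b1 * s22 - s12 * b2) / det   # Cramer's rule for the 2x2 system
A2 = (s11 * b2 - s12 * b1) / det
print(round(A1, 3), round(A2, 3))
```

    With noise-free data the fit recovers the component activities exactly; real programs must also weight by counting statistics and handle daughter in-growth.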

  3. Application of statistical process control and process capability analysis procedures in orbiter processing activities at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Safford, Robert R.; Jackson, Andrew E.; Swart, William W.; Barth, Timothy S.

    1994-01-01

    Successful ground processing at KSC requires that flight hardware and ground support equipment conform to specifications at tens of thousands of checkpoints. Knowledge of conformance is an essential requirement for launch. That knowledge of conformance at every requisite point does not, however, enable identification of past problems with equipment, or potential problem areas. This paper describes how the introduction of Statistical Process Control and Process Capability Analysis identification procedures into existing shuttle processing procedures can enable identification of potential problem areas and candidates for improvements to increase processing performance measures. Results of a case study describing application of the analysis procedures to Thermal Protection System processing are used to illustrate the benefits of the approaches described in the paper.
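
    The control-chart and capability calculations underlying such a study reduce to a short computation; the checkpoint measurements and specification limits below are invented for illustration, not KSC data.

```python
import statistics

# Hypothetical checkpoint measurements (e.g., a gap dimension in mm)
samples = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9]
lsl, usl = 9.0, 11.0                       # assumed specification limits

mean = statistics.fmean(samples)
sigma = statistics.stdev(samples)          # sample standard deviation

# 3-sigma control limits for an individuals-style chart
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

# Process capability indices: Cp ignores centering, Cpk penalizes it
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)
print(f"mean={mean:.2f} LCL={lcl:.2f} UCL={ucl:.2f} Cp={cp:.2f} Cpk={cpk:.2f}")
```

    Points outside the control limits flag special-cause variation; a Cpk well above 1 indicates the process comfortably meets its specification.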

  4. Cost analysis for procedure comparisons.

    PubMed

    Trowers, E A; Batra, S C; Buessler, J; Anderson, L K

    1995-01-01

    Using the methodology of activity-based costing as a conceptual framework, the authors present the potential cost reduction of a new office routine and a medical procedure. The costs of a new instrument for colorectal cancer screening and of a new survey and follow-up procedure for at-risk patients show that time and relevant costs in the GI Clinic and GI Endoscopy Lab were significantly reduced.

  5. Discourse analysis procedures: reliability issues.

    PubMed

    Hux, K; Sanger, D; Reid, R; Maschka, A

    1997-01-01

    Performing discourse analyses to supplement assessment procedures and facilitate intervention planning is only valuable if the observations are reliable. The purpose of the present study was to evaluate and compare four methods of assessing reliability on one discourse analysis procedure--a modified version of Damico's Clinical Discourse Analysis (1985a, 1985b, 1992). The selected methods were: (a) Pearson product-moment correlations, (b) interobserver agreement, (c) Cohen's kappa, and (d) generalizability coefficients. Results showed high correlation coefficients and high percentages of interobserver agreement when error type was not taken into account. However, interobserver agreement percentages obtained solely for target behavior occurrences and Cohen's kappa revealed that much of the agreement between raters was due to chance and the high frequency of target behavior non-occurrence. Generalizability coefficients revealed that the procedure was fair to good for discriminating among persons with differing levels of language competency for some aspects of communication performance but was less than desirable for others; the aggregate score was below recommended standards for differentiating among people for diagnostic purposes.
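
    The contrast the study draws between raw agreement and chance-corrected agreement is easy to reproduce; the ratings below are invented, with target-behavior non-occurrence dominating as in the abstract.

```python
# Hypothetical ratings of 50 utterances by two raters
# (1 = target behavior observed, 0 = not observed)
pairs = [(0, 0)] * 43 + [(0, 1)] * 2 + [(1, 0)] * 2 + [(1, 1)] * 3

n = len(pairs)
po = sum(a == b for a, b in pairs) / n      # raw interobserver agreement

# Chance agreement expected from each rater's marginal proportions
pa1 = sum(a for a, _ in pairs) / n
pb1 = sum(b for _, b in pairs) / n
pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)

kappa = (po - pe) / (1 - pe)                # Cohen's kappa
print(round(po, 2), round(kappa, 3))
```

    Raw agreement is 92%, yet kappa is only about 0.56, echoing the abstract's caution that agreement inflated by frequent non-occurrence overstates reliability.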

  6. Student Activity Funds: Procedures & Controls.

    ERIC Educational Resources Information Center

    Cuzzetto, Charles E.

    Student activity funds may create educational opportunities for students, but they frequently create problems for business administrators. The first part of this work reviews the types of organizational issues and transactions an organized student group is likely to encounter, including establishing a constitution, participant roles,…

  7. Student Activity Funds: Procedures and Controls.

    ERIC Educational Resources Information Center

    Cuzzetto, Charles E.

    2000-01-01

    An effective internal-control system can help school business administrators meet the challenges of accounting for student activity funds. Such a system should include appropriate policies and procedures, identification of key control points, self-assessments, audit trails, and internal and external audits. (MLH)

  8. DSOD Procedures for Seismic Hazard Analysis

    NASA Astrophysics Data System (ADS)

    Howard, J. K.; Fraser, W. A.

    2005-12-01

    shaking levels are provided for sites far from active faulting. Our procedures and standards are presented at the DSOD website http://damsafety.water.ca.gov/. We review our methods and tools periodically under the guidance of our Consulting Board for Earthquake Analysis (and expect to make changes pending NGA completion), mindful that frequent procedural changes can interrupt design evaluations.

  9. Procedures for numerical analysis of circadian rhythms

    PubMed Central

    REFINETTI, ROBERTO; CORNÉLISSEN, GERMAINE; HALBERG, FRANZ

    2010-01-01

    This article reviews various procedures used in the analysis of circadian rhythms at the populational, organismal, cellular and molecular levels. The procedures range from visual inspection of time plots and actograms to several mathematical methods of time series analysis. Computational steps are described in some detail, and additional bibliographic resources and computer programs are listed. PMID:23710111
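
    One standard numerical procedure in this area, the single cosinor, fits a cosine of fixed period by linear least squares. A minimal sketch on synthetic data (the mesor, amplitude and acrophase are invented):

```python
import math
import numpy as np

# Single-cosinor model: y(t) = M + A*cos(w*t + phi), period fixed at 24 h
period = 24.0
w = 2 * math.pi / period
t = np.arange(0, 48, 1.0)                    # 48 hourly samples
M_true, A_true, phi_true = 10.0, 3.0, -math.pi / 3
y = M_true + A_true * np.cos(w * t + phi_true)

# The model is linear in (M, beta, gamma):
# y = M + beta*cos(w t) + gamma*sin(w t), beta = A cos(phi), gamma = -A sin(phi)
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
M, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]

mesor = M                                    # rhythm-adjusted mean
amplitude = math.hypot(beta, gamma)
acrophase = math.atan2(-gamma, beta)         # phase of the cosine peak
print(round(mesor, 3), round(amplitude, 3), round(acrophase, 3))
```

    On noise-free data the fit recovers the generating parameters exactly; with real data the same regression also yields standard errors for rhythm detection.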

  10. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES

    SciTech Connect

    Ronald L. Boring; David I. Gertman; Katya Le Blanc

    2011-09-01

    This paper provides a characterization of human reliability analysis (HRA) issues for computerized procedures in nuclear power plant control rooms. It is beyond the scope of this paper to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper provides a review of HRA as applied to traditional paper-based procedures, followed by a discussion of what specific factors should additionally be considered in HRAs for computerized procedures. Performance shaping factors and failure modes unique to computerized procedures are highlighted. Since there is no definitive guide to HRA for paper-based procedures, this paper also serves to clarify the existing guidance on paper-based procedures before delving into the unique aspects of computerized procedures.

  11. Activation analysis

    SciTech Connect

    Alfassi, Z.B. (Dept. of Nuclear Engineering)

    1990-01-01

    This volume contains 16 chapters on the application of activation analysis in the fields of life sciences, biological materials, coal and its effluents, environmental samples, archaeology, material science, and forensics. Each chapter is processed separately for the data base.

  12. Characteristics, Procedures, and Results of Two Job Analysis Techniques.

    ERIC Educational Resources Information Center

    Burnett, Michael F.; McCracken, J. David

    1982-01-01

    This article describes and compares two job analysis procedures, task inventory analysis and Position Analysis Questionnaire. It provides comparisons in terms of the characteristics of, the activities involved in, and the results derived from a study utilizing each of the techniques. (Author/CT)

  13. A procedural analysis of correspondence training techniques

    PubMed Central

    Paniagua, Freddy A.

    1990-01-01

    A variety of names have been given to procedures used in correspondence training, some more descriptive than others. In this article I argue that a terminology more accurately describing actual procedures, rather than the conceptual function that those procedures are assumed to serve, would benefit the area of correspondence training. I identify two documented procedures during the reinforcement of verbalization phase and five procedures during the reinforcement of correspondence phase and suggest that those procedures can be classified, or grouped into nonoverlapping categories, by specifying the critical dimensions of those procedures belonging to a single category. I suggest that the names of such nonoverlapping categories should clearly specify the dimensions on which the classification is based in order to facilitate experimental comparison of procedures, and to be able to recognize when a new procedure (as opposed to a variant of one already in existence) is developed. Future research involving comparative analysis across and within procedures is discussed within the framework of the proposed classification. PMID:22478059

  14. An analysis of aircrew procedural compliance

    NASA Technical Reports Server (NTRS)

    Schofield, J. E.; Giffin, W. C.

    1981-01-01

    This research examines the relationships between aircrew compliance with procedures and operator errors. The data for this analysis were generated by reexamination of a 1976 experiment in full mission simulation conducted by Dr. H. P. Ruffell Smith (1979) for the NASA-Ames Research Center. The character of individual operators, the chemistry of crew composition, and complex aspects of the operational environment affected procedural compliance by crew members. Associations between enumerated operator errors and several objective indicators of crew coordination were investigated. The correspondence among high operator error counts and infrequent compliance with specific crew coordination requirements was most notable when copilots were accountable for control of flight parameters.

  15. Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…

  16. Neutron Activation Analysis: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    MacLellan, Ryan

    2011-04-01

    The role of neutron activation analysis in low-energy low-background experiments is discussed in terms of comparable methods. Radiochemical neutron activation analysis is introduced. The procedure of instrumental neutron activation analysis is detailed, especially with respect to the measurement of trace amounts of natural radioactivity. The determination of the reactor neutron spectrum parameters required for neutron activation analysis is also presented.
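
    At the core of instrumental neutron activation analysis is the activation equation A = N·σ·φ·(1 − e^(−λ·t_irr)). A back-of-the-envelope sketch for a gold flux monitor; the sample mass, flux and irradiation time are assumed for illustration, not taken from the record.

```python
import math

N_A = 6.02214076e23           # Avogadro's number, 1/mol
mass_g = 1.0e-6               # 1 microgram of gold (assumed sample)
molar_mass = 197.0            # g/mol, 197Au
abundance = 1.0               # 197Au is 100% abundant
sigma = 98.65e-24             # thermal (n,gamma) cross section, cm^2
phi = 1.0e13                  # assumed thermal flux, n cm^-2 s^-1
half_life_s = 2.695 * 86400   # 198Au half-life, ~2.695 d
lam = math.log(2) / half_life_s

N = mass_g / molar_mass * N_A * abundance     # number of target atoms
t_irr = 3600.0                                # 1 h irradiation (assumed)
activity_bq = N * sigma * phi * (1 - math.exp(-lam * t_irr))
print(f"{activity_bq:.3e} Bq")
```

    The saturation factor (1 − e^(−λ·t_irr)) shows why short-lived products reach useful activity quickly while long-lived ones benefit from longer irradiations.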

  17. Neutron Activation Analysis: Techniques and Applications

    SciTech Connect

    MacLellan, Ryan

    2011-04-27

    The role of neutron activation analysis in low-energy low-background experiments is discussed in terms of comparable methods. Radiochemical neutron activation analysis is introduced. The procedure of instrumental neutron activation analysis is detailed, especially with respect to the measurement of trace amounts of natural radioactivity. The determination of the reactor neutron spectrum parameters required for neutron activation analysis is also presented.

  18. Summative Mass Analysis of Algal Biomass - Integration of Analytical Procedures: Laboratory Analytical Procedure (LAP)

    SciTech Connect

    Laurens, L. M. L.

    2013-12-01

    This procedure guides the integration of laboratory analytical procedures to measure algal biomass constituents in an unambiguous manner and ultimately achieve mass balance closure for algal biomass samples. Many of these methods build on years of research in algal biomass analysis.
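
    The mass-balance-closure goal can be illustrated with a toy summative check; the constituent values below are invented, not LAP data.

```python
# Constituent measurements for a hypothetical algal biomass sample,
# in g per 100 g dry biomass
constituents = {
    "carbohydrates": 32.1,
    "proteins": 28.4,
    "lipids": 18.9,
    "ash": 7.8,
    "other (pigments, nucleic acids, etc.)": 10.3,
}

total = sum(constituents.values())
closure = total / 100.0          # fraction of dry mass accounted for
print(f"summative total: {total:.1f} g/100 g (closure {closure:.1%})")
```

    A closure near 100% indicates the individual assays jointly account for the sample; large gaps point to unmeasured constituents or biased assays.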

  19. Keystroke Analysis: Reflections on Procedures and Measures

    ERIC Educational Resources Information Center

    Baaijen, Veerle M.; Galbraith, David; de Glopper, Kees

    2012-01-01

    Although keystroke logging promises to provide a valuable tool for writing research, it can often be difficult to relate logs to underlying processes. This article describes the procedures and measures that the authors developed to analyze a sample of 80 keystroke logs, with a view to achieving a better alignment between keystroke-logging measures…

  20. Operational Control Procedures for the Activated Sludge Process: Appendix.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This document is the appendix for a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Categories discussed include: control test data, trend charts, moving averages, semi-logarithmic plots, probability…

  1. Manual of Alternative Procedures: Activities of Daily Living.

    ERIC Educational Resources Information Center

    McCormack, James E.; And Others

    Intended for teachers and others providing services for moderately and severely physically and/or mentally handicapped children and young adults, the manual presents strategies, procedures, and task analyses for training in daily living skills. Section I provides an overview of tactics for teaching activities of daily living (ADL) skills,…

  2. CHEMICALLY ACTIVATED LUCIFERASE GENE EXPRESSION (CALUX) CELL BIOASSAY ANALYSIS FOR THE ESTIMATION OF DIOXIN-LIKE ACTIVITY: CRITICAL PARAMETERS OF THE CALUX PROCEDURE THAT IMPACT ASSAY RESULTS

    EPA Science Inventory

    The Chemically Activated Luciferase gene expression (CALUX) in vitro cell bioassay is an emerging bioanalytical tool that is increasingly being used for the screening and relative quantification of dioxins and dioxin-like compounds. Since CALUX analyses provide a biological respo...

  3. Analysis procedure for americium in environmental samples

    SciTech Connect

    Holloway, R.W.; Hayes, D.W.

    1982-01-01

    Several methods for the analysis of 241Am in environmental samples were evaluated and a preferred method was selected. This method was modified and used to determine the 241Am content in sediments, biota, and water. The advantages and limitations of the method are discussed. The method is also suitable for 244Cm analysis.

  4. An Improved Qualitative Analysis Procedure for Aluminum Subgroup Cations.

    ERIC Educational Resources Information Center

    Kistner, C. R.; Robinson, Patricia J.

    1983-01-01

    Describes a procedure for the qualitative analysis of aluminum subgroup cations designed to avoid failure to obtain lead or barium chromate precipitates or failure to report aluminum hydroxide when present (due to staining). Provides a flow chart and step-by-step explanation for the new procedure, indicating significantly improved student results.…

  5. Refining Procedures: A Needs Analysis Project at Kuwait University.

    ERIC Educational Resources Information Center

    Basturkmen, Helen

    1998-01-01

    Outlines the procedures followed in the needs analysis (NA) project carried out in 1996 at the College of Petroleum and Engineering at Kuwait University. Focuses on the steps taken in the project and the rationale behind them. Offers an illustration of an NA project and shows the procedural steps involved. (Author/VWL)

  6. Building America Performance Analysis Procedures: Revision 1

    SciTech Connect

    2004-06-01

    To measure progress toward multi-year research goals, cost and performance trade-offs are evaluated through a series of controlled field and laboratory experiments supported by energy analysis techniques using test data to calibrate simulation models.

  7. Procedure for analysis of nickel-cadmium cell materials

    NASA Technical Reports Server (NTRS)

    Halpert, G.; Ogunyankin, O.; Jones, C.

    1973-01-01

    Quality control procedures for nickel-cadmium cell materials include analyses of the electrolyte, active materials, and separators. Tests range from visual/mechanical inspection of cells to gas sampling, electrolyte extraction, electrochemical tests, and physical measurements.

  8. Umbilical Hernia Repair: Analysis After 934 Procedures.

    PubMed

    Porrero, José L; Cano-Valderrama, Oscar; Marcos, Alberto; Bonachia, Oscar; Ramos, Beatriz; Alcaide, Benito; Villar, Sol; Sánchez-Cabezudo, Carlos; Quirós, Esther; Alonso, María T; Castillo, María J

    2015-09-01

    There is a lack of consensus about the surgical management of umbilical hernias. The aim of this study is to analyze the medium-term results of 934 umbilical hernia repairs. In this study, 934 patients with an umbilical hernia underwent surgery between 2004 and 2010, 599 (64.1%) of which were evaluated at least one year after the surgery. Complications, recurrence, and the reoperation rate were analyzed. Complications were observed in 5.7 per cent of the patients. With a mean follow-up time of 35.5 months, recurrence and reoperation rates were 3.8 per cent and 4.7 per cent, respectively. A higher percentage of female patients (60.9 % vs 29 %, P = 0.001) and a longer follow-up time (47.4 vs 35 months, P = 0.037) were observed in patients who developed a recurrence. No significant differences were observed between complications and the reoperation rate in patients who underwent Ventralex(®) preperitoneal mesh reinforcement and suture repair; however, a trend toward a higher recurrence rate was observed in patients with suture repair (6.5 % vs 3.2 %, P = 0.082). Suture repair had lower recurrence and reoperation rates in patients with umbilical hernias less than 1 cm. Suture repair is an appropriate procedure for small umbilical hernias; however, for larger umbilical hernias, mesh reinforcement should be considered.
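
    The recurrence comparison reported for suture repair versus mesh (6.5% vs 3.2%) is the kind of result a two-proportion z-test produces; the group sizes below are hypothetical, chosen only to reproduce those rates, and are not the study's raw counts.

```python
import math

# Hypothetical counts: recurrences / patients per technique
x1, n1 = 13, 200   # suture repair (assumed group size)
x2, n2 = 8, 250    # mesh reinforcement (assumed group size)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                    # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal tail (erfc form)
p_value = math.erfc(abs(z) / math.sqrt(2))
print(round(z, 3), round(p_value, 3))
```

    With these assumed sizes the difference does not reach the 0.05 level, matching the "trend toward a higher recurrence rate" language in the abstract.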

  9. A Composite of Order Analysis Procedures.

    ERIC Educational Resources Information Center

    Druva, Cynthia Ann

    Order Analysis is a multidimensional scaling (MDS) technique for determining order among items. This paper reviews articles by different authors describing various components of ordering theory. A common nomenclature is constructed to link together the various ideas and is applied to a fairly simple set of data. Topics discussed include a more…

  10. Mokken Scale Analysis Using Hierarchical Clustering Procedures

    ERIC Educational Resources Information Center

    van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries

    2004-01-01

    Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…

  11. Operating procedures: Fusion Experiments Analysis Facility

    SciTech Connect

    Lerche, R.A.; Carey, R.W.

    1984-03-20

    The Fusion Experiments Analysis Facility (FEAF) is a computer facility based on a DEC VAX 11/780 computer. It became operational in late 1982. At that time two manuals were written to aid users and staff in their interactions with the facility. This manual is designed as a reference to assist the FEAF staff in carrying out their responsibilities. It is meant to supplement equipment and software manuals supplied by the vendors. Also this manual provides the FEAF staff with a set of consistent, written guidelines for the daily operation of the facility.

  12. Re-analysis procedure based on the mixed formulation

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1987-01-01

    An analysis procedure is presented for large-scale structural systems, with large numbers of degrees of freedom and design variables. The procedure uses a mixed formulation with the fundamental unknowns consisting of both stress and displacement parameters. Other elements of the procedure include: (1) lumping the design variables into a single tracing parameter; (2) operator splitting or additive decomposition of different arrays in the finite element equations into the corresponding arrays of the original structure plus correction terms; and (3) application of a reduction method through the use of the finite element method and the classical Bubnov-Galerkin technique. The re-analysis procedure is applied to the linear static and free vibration problems of plate and shell structures.
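
    The ingredients of the procedure, a single tracing parameter scaling the correction arrays plus a Bubnov-Galerkin reduction, can be sketched on a small random system; the matrices below are stand-ins, not finite element arrays from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
G = rng.standard_normal((n, n))
K0 = 4 * np.eye(n) + 0.1 * (G + G.T) / 2    # "original structure" matrix
H = rng.standard_normal((n, n))
dK = 0.02 * (H + H.T) / 2                   # correction from design changes
f = rng.standard_normal(n)

lam = 0.5                                   # tracing parameter
K = K0 + lam * dK                           # additively decomposed system

# Reduced basis: baseline solution plus two parameter derivatives,
# all obtained with factorizations of K0 only
u0 = np.linalg.solve(K0, f)
u1 = np.linalg.solve(K0, -dK @ u0)
u2 = np.linalg.solve(K0, -dK @ u1)
B = np.column_stack([u0, u1, u2])

q = np.linalg.solve(B.T @ K @ B, B.T @ f)   # 3x3 Bubnov-Galerkin system
u_reduced = B @ q
u_full = np.linalg.solve(K, f)              # full solve, for comparison
err = np.linalg.norm(u_reduced - u_full) / np.linalg.norm(u_full)
print(f"relative error of reduced solution: {err:.1e}")
```

    The point of the reduction is that every modified design is handled by a tiny projected system while the expensive factorization of the original matrix is reused.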

  13. Current Human Reliability Analysis Methods Applied to Computerized Procedures

    SciTech Connect

    Ronald L. Boring

    2012-06-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room (Fink et al., 2009). Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of enhanced ease of use and easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  14. A two-step procedure of fractal analysis

    NASA Astrophysics Data System (ADS)

    Dedovich, T. G.; Tokarev, M. V.

    2016-03-01

    A two-step procedure for the analysis of different-type fractals is proposed for the PaC and SePaC methods. An advantage of the two-step procedures of the PaC and SePaC methods over the basic and modified PaC and SePaC methods is shown. Results of comparative analysis of the unified data set using different approaches (the BC method and two-step procedures of the PaC and SePaC methods) are given. It is shown that the two-step procedure of the SePaC method is most efficient in reconstructing the overall data set.
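
    For comparison, the baseline BC (box-counting) method mentioned in the abstract can be sketched on a set whose dimension is known exactly, the middle-third Cantor set (dimension ln 2 / ln 3 ≈ 0.631). The construction below is illustrative, not the PaC/SePaC implementation.

```python
import math

# Left endpoints of the Cantor construction, kept as exact integers
# (ternary digits 0 or 2) so box assignment has no rounding error
LEVEL = 8
points = [0]
for _ in range(LEVEL):
    points = [3 * p for p in points] + [3 * p + 2 for p in points]
# each p / 3**LEVEL is the left endpoint of a surviving interval

xs, ys = [], []
for k in range(1, 6):                       # box sizes 3**-1 .. 3**-5
    boxes = {p // 3 ** (LEVEL - k) for p in points}
    xs.append(k * math.log(3))              # log(1/eps)
    ys.append(math.log(len(boxes)))         # log N(eps)

# Least-squares slope of log N(eps) vs log(1/eps) estimates the dimension
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
dim = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(round(dim, 4))                        # ln 2 / ln 3 = 0.6309...
```

    On this exactly self-similar set the slope reproduces the theoretical dimension; on measured data the scaling range and fit quality must be judged, which is where method comparisons like the one above matter.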

  15. 32 CFR 989.37 - Procedures for analysis abroad.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for analysis of environmental actions abroad are contained in 32 CFR part 187. That directive provides... actions abroad, 32 CFR part 187 will be followed. ... 32 National Defense 6 2011-07-01 2011-07-01 false Procedures for analysis abroad. 989.37...

  16. 40 CFR 246.202-6 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. This cost analysis... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Recommended procedures: Cost...

  17. 40 CFR 246.202-6 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. This cost analysis... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Recommended procedures: Cost...

  18. Accident Sequence Evaluation Program: Human reliability analysis procedure

    SciTech Connect

    Swain, A.D.

    1987-02-01

    This document presents a shortened version of the procedure, models, and data for human reliability analysis (HRA) which are presented in the Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications (NUREG/CR-1278, August 1983). This shortened version was prepared and tried out as part of the Accident Sequence Evaluation Program (ASEP) funded by the US Nuclear Regulatory Commission and managed by Sandia National Laboratories. The intent of this new HRA procedure, called the "ASEP HRA Procedure," is to enable systems analysts, with minimal support from experts in human reliability analysis, to make estimates of human error probabilities and other human performance characteristics which are sufficiently accurate for many probabilistic risk assessments. The ASEP HRA Procedure consists of a Pre-Accident Screening HRA, a Pre-Accident Nominal HRA, a Post-Accident Screening HRA, and a Post-Accident Nominal HRA. The procedure in this document includes changes made after tryout and evaluation of the procedure in four nuclear power plants by four different systems analysts and related personnel, including human reliability specialists. The changes consist of some additional explanatory material (including examples), and more detailed definitions of some of the terms. 42 refs.

  19. A procedure to estimate proximate analysis of mixed organic wastes.

    PubMed

    Zaher, U; Buffiere, P; Steyer, J P; Chen, S

    2009-04-01

    In waste materials, proximate analysis measuring the total concentration of carbohydrate, protein, and lipid contents from solid wastes is challenging, as a result of the heterogeneous and solid nature of wastes. This paper presents a new procedure that was developed to estimate such complex chemical composition of the waste using conventional practical measurements, such as chemical oxygen demand (COD) and total organic carbon. The procedure is based on mass balance of macronutrient elements (carbon, hydrogen, nitrogen, oxygen, and phosphorus [CHNOP]) (i.e., elemental continuity), in addition to the balance of COD and charge intensity that are applied in mathematical modeling of biological processes. Knowing the composition of such a complex substrate is crucial to study solid waste anaerobic degradation. The procedure was formulated to generate the detailed input required for the International Water Association (London, United Kingdom) Anaerobic Digestion Model number 1 (IWA-ADM1). The complex particulate composition estimated by the procedure was validated with several types of food wastes and animal manures. To make proximate analysis feasible for validation, the wastes were classified into 19 types to allow accurate extraction and proximate analysis. The estimated carbohydrates, proteins, lipids, and inerts concentrations were highly correlated to the proximate analysis; correlation coefficients were 0.94, 0.88, 0.99, and 0.96, respectively. For most of the wastes, carbohydrate was the highest fraction and was estimated accurately by the procedure over an extended range with high linearity. For wastes that are rich in protein and fiber, the procedure was even more consistent compared with the proximate analysis. The new procedure can be used for waste characterization in solid waste treatment design and optimization.
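
    The element-balance idea can be sketched as a small linear solve: measured elemental mass fractions are expressed as a mixture of component compositions. The component values below are typical textbook-style figures assumed for illustration; the actual procedure additionally balances O, P, COD and charge.

```python
import numpy as np

# Mass fraction of each element in each macro-component (assumed values)
#                  C      H      N
comp = np.array([[0.444, 0.062, 0.000],   # carbohydrate (as C6H10O5)
                 [0.530, 0.070, 0.160],   # protein (typical composition)
                 [0.774, 0.119, 0.000]])  # lipid (as triolein)

true_frac = np.array([0.5, 0.3, 0.2])     # synthetic "waste" mixture
measured = comp.T @ true_frac             # C, H, N fractions one would measure

frac = np.linalg.solve(comp.T, measured)  # recover the composition
print(np.round(frac, 3))
```

    With consistent data the solve recovers the carbohydrate/protein/lipid split exactly; real measurements make the system overdetermined and noisy, which is why the published procedure adds further balances and validates against proximate analysis.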

  20. Procedural Fidelity: An Analysis of Measurement and Reporting Practices

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Wolery, Mark

    2013-01-01

    A systematic analysis was conducted of measurement and reporting practices related to procedural fidelity in single-case research for the past 30 years. Previous reviews of fidelity primarily reported whether fidelity data were collected by authors; these reviews reported that collection was variable, but low across journals and over time. Results…

  1. Research Methods and Data Analysis Procedures Used by Educational Researchers

    ERIC Educational Resources Information Center

    Hsu, Tse-chi

    2005-01-01

    To assess the status and the trends of subject matters investigated and research methods/designs and data analysis procedures employed by educational researchers, this study surveyed articles published by the "American Educational Research Journal (AERJ)," "Journal of Experimental Education (JEE)" and "Journal of Educational Research (JER)" from…

  2. 40 CFR 246.201-7 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In formulating a... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Recommended procedures: Cost...

  3. 40 CFR 246.200-8 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Collection of Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Recommended procedures: Cost...

  4. 40 CFR 246.200-8 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Collection of Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Recommended procedures: Cost...

  5. 40 CFR 246.201-7 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In formulating a... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Recommended procedures: Cost...

  6. Task analysis method for procedural training curriculum development.

    PubMed

    Riggle, Jakeb D; Wadman, Michael C; McCrory, Bernadette; Lowndes, Bethany R; Heald, Elizabeth A; Carstens, Patricia K; Hallbeck, M Susan

    2014-06-01

    A central venous catheter (CVC) is an important medical tool used in critical care and emergent situations. Integral to proper care in many circumstances, insertion of a CVC introduces the risk of central line-associated blood stream infections and mechanical adverse events; proper training is important for safe CVC insertion. Cognitive task analysis (CTA) methods have been successfully implemented in the medical field to improve the training of postgraduate medical trainees, but can be very time-consuming to complete and require a significant time commitment from many subject matter experts (SMEs). Many medical procedures such as CVC insertion are linear processes with well-documented procedural steps. These linear procedures may not require a traditional CTA to gather the information necessary to create a training curriculum. Accordingly, a novel, streamlined CTA method designed primarily to collect cognitive cues for linear procedures was developed to be used by medical professionals with minimal CTA training. This new CTA methodology required fewer trained personnel, fewer interview sessions, and less time commitment from SMEs than a traditional CTA. Based on this study, a streamlined CTA methodology can be used to efficiently gather cognitive information on linear medical procedures for the creation of resident training curricula and procedural skills assessments. PMID:24366759

  7. [Cost analysis of twenty-nine nuclear medicine procedures].

    PubMed

    Kastanioti, Catherine K; Alphalbouharali, Gihand; Fotopoulos, Andreas

    2004-01-01

The aim of this study was to compare actual cost estimates for diagnostic procedures as applied in the nuclear medicine department of our University Hospital with cost estimates obtained through an analytical activity-based costing methodology. Activity data on the use of twenty-nine nuclear medicine procedures were collected. The actual hospital prices for the fiscal years 2003-2004 were obtained from the Accounting Department of the Hospital. Cost estimates were calculated per patient. Activity-based data were compared with hospital prices and also with unit costs from the activity-based costing methodology. Our results showed a statistically significant difference between unit cost estimates per patient based on hospital prices and those based on unit costs. This study shows that in our university hospital, reliance on generic hospital prices for nuclear medicine procedures considerably underestimates their real cost, by a mean value of 40% as derived through the activity-based costing methodology, and can lead to substantial hospital financial deficits.

  8. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
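The ambiguity the paper addresses can be shown in a few lines. A minimal sketch (hypothetical data, plain Python): in an unbalanced two-factor design, the unweighted mean of cell means and the observation-weighted marginal mean answer different "main effect" questions, so a test of "the effect of A" is only defined once a parametrization is chosen.

```python
# Hypothetical unbalanced 2x2 design: unequal cell sizes make the
# "marginal mean of factor A" ambiguous, which is why each ANOVA
# parametrization tests a different main-effect hypothesis.
data = {                     # (A level, B level) -> observations
    ("a1", "b1"): [10, 12],
    ("a1", "b2"): [20],
    ("a2", "b1"): [11],
    ("a2", "b2"): [19, 21, 23],
}

def marginal_means(level):
    """Return (unweighted, weighted) marginal means for one level of A."""
    cells = [sum(obs) / len(obs)
             for (a, _), obs in data.items() if a == level]
    pooled = [x for (a, _), obs in data.items() if a == level for x in obs]
    return sum(cells) / len(cells), sum(pooled) / len(pooled)

for lvl in ("a1", "a2"):
    print(lvl, marginal_means(lvl))
```

Here the unweighted marginal means give an A main effect of 16.0 - 15.5 = 0.5, while the weighted means give 18.5 - 14.0 = 4.5; with a balanced design the two estimates would coincide.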

  9. Jet Engine hot parts IR Analysis Procedure (J-EIRP)

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1993-01-01

    A thermal radiation analysis method called Jet Engine IR Analysis Procedure (J-EIRP) was developed to evaluate jet engine cavity hot parts source radiation. The objectives behind J-EIRP were to achieve the greatest accuracy in model representation and solution, while minimizing computer resources and computational time. The computer programs that comprise J-EIRP were selected on the basis of their performance, accuracy, and flexibility to solve both simple and complex problems. These programs were intended for use on a personal computer, but include the ability to solve large problems on a mainframe or supercomputer. J-EIRP also provides the user with a tool for developing thermal design experience and engineering judgment through analysis experimentation, while using minimal computer resources. A sample jet engine cavity analysis demonstrates the procedure and capabilities within J-EIRP, and is compared to a simplified method for approximating cavity radiation. The goal is to introduce the terminology and solution process used in J-EIRP and to provide insight into the radiation heat transfer principles used in this procedure.

  10. Sampling procedure for the foliar analysis of deciduous trees.

    PubMed

    Luyssaert, Sebastiaan; Raitio, Hannu; Vervaeke, Pieter; Mertens, Jan; Lust, Noël

    2002-12-01

    Sampling can be the source of the greatest errors in the overall results of foliar analysis. This paper reviews the variability in heavy metal concentrations in tree crowns, which is a feature that should be known and understood when designing a suitable leaf sampling procedure. The leaf sampling procedures applied in 75 articles were examined. Most of the environmental studies used a closely related form of the UN/ECE-EC leaf sampling procedure, which was developed for the long-term monitoring of forest condition. Studies with objectives outside the UN/ECE-EC field of application should utilize a sampling procedure that is in accordance with the objectives of the study and based on the observed variation in pilot and similar studies. The inherent sources of heavy metal variability inside the stand, i.e. the crown class, stand management, site properties, crown dimensions, infections, seasons, etc. were discussed, but the underlying causes of this variability are rarely understood. The inherent variability in tree crowns is the reason for using leaf sampling as a tool in pollution studies. The objectives of a pollution study determine which sources of variability are utilized by the researcher.

  11. Dental procedures in children with severe congenital heart disease: a theoretical analysis of prophylaxis and non-prophylaxis procedures

    PubMed Central

    Al-Karaawi, Z; Lucas, V; Gelbier, M; Roberts, G

    2001-01-01

    OBJECTIVE—To estimate the cumulative exposure to bacteraemia from dental procedures currently recommended for antibiotic prophylaxis and compare this with cumulative exposure from dental procedures not recommended for prophylaxis.
DESIGN—Retrospective analysis.
SETTING—University and teaching hospital maxillofacial and dental department.
PATIENTS—136 children with severe congenital cardiac disease attending for dental treatment between 1993 and 1998 and for whom full records were available. Each dental procedure was tallied.
MAIN OUTCOME MEASURES—Cumulative exposure per annum to "non-prophylaxis procedures"; cumulative exposure per annum to "prophylaxis procedures".
RESULTS—Cumulative exposure to bacteraemia from prophylaxis procedures was not significantly greater than from non-prophylaxis procedures.
CONCLUSIONS—The data raise important questions about the appropriateness of current guidelines for antibiotic prophylaxis of bacterial endocarditis.


Keywords: congenital heart disease; dental treatment; cumulative risk; endocarditis PMID:11119466

  12. Chunking: a procedure to improve naturalistic data analysis.

    PubMed

    Dozza, Marco; Bärgman, Jonas; Lee, John D

    2013-09-01

    Every year, traffic accidents are responsible for more than 1,000,000 fatalities worldwide. Understanding the causes of traffic accidents and increasing safety on the road are priority issues for both legislators and the automotive industry. Recently, in Europe, the US and Japan, significant public funding has been allocated for performing large-scale naturalistic driving studies to better understand accident causation and the impact of safety systems on traffic safety. The data provided by these naturalistic driving studies has never been available before in this quantity and comprehensiveness and it promises to support a wide variety of data analyses. The volume and variety of the data also pose substantial challenges that demand new data reduction and analysis techniques. This paper presents a general procedure for the analysis of naturalistic driving data called chunking that can support many of these analyses by increasing their robustness and sensitivity. Chunking divides data into equivalent, elementary chunks of data to facilitate a robust and consistent calculation of parameters. This procedure was applied, as an example, to naturalistic driving data from the SeMiFOT study in Sweden and compared with alternative procedures from past studies in order to show its advantages and rationale in a specific example. Our results show how to apply the chunking procedure and how chunking can help avoid bias from data segments with heterogeneous durations (typically obtained from SQL queries). Finally, this paper shows how chunking can increase the robustness of parameter calculation, statistical sensitivity, and create a solid basis for further data analyses.
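The idea can be sketched in a few lines (hypothetical 1 Hz speed signal, plain Python; an illustration of the principle, not the SeMiFOT implementation): segments of arbitrary duration are cut into equal-duration chunks and any remainder is dropped, so every computed parameter is based on the same amount of data.

```python
def chunk(samples, dt, chunk_len):
    """Split a uniformly sampled signal (sampling interval dt, seconds)
    into equal-duration chunks of chunk_len seconds; the remainder is
    dropped so every chunk contributes the same amount of data."""
    per_chunk = int(chunk_len / dt)
    n = len(samples) // per_chunk
    return [samples[i * per_chunk:(i + 1) * per_chunk] for i in range(n)]

speeds = [10, 12, 11, 13, 40, 42, 41, 43, 9]    # hypothetical speed trace
chunks = chunk(speeds, dt=1.0, chunk_len=4.0)
means = [sum(c) / len(c) for c in chunks]       # one robust value per chunk
```

Computing one parameter per equal-sized chunk, instead of one per variable-length query result, avoids the duration bias the paper describes.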

  13. Goals Analysis Procedure Guidelines for Applying the Goals Analysis Process

    NASA Technical Reports Server (NTRS)

    Motley, Albert E., III

    2000-01-01

One of the key elements of successful project management is the establishment of the "right set of requirements", requirements that reflect the true customer needs and are consistent with the strategic goals and objectives of the participating organizations. A viable set of requirements implies that each individual requirement is a necessary element in satisfying the stated goals and that the entire set of requirements, taken as a whole, is sufficient to satisfy the stated goals. Unfortunately, it is the author's experience that during project formulation phases, many Systems Engineering customers do not conduct a rigorous analysis of the goals and objectives that drive the system requirements. As a result, the Systems Engineer is often provided with requirements that are vague, incomplete, and internally inconsistent. To complicate matters, most systems development methodologies assume that the customer provides unambiguous, comprehensive and concise requirements. This paper describes the specific steps of a Goals Analysis process applied by Systems Engineers at the NASA Langley Research Center during the formulation of requirements for research projects. The objective of Goals Analysis is to identify and explore all of the influencing factors that ultimately drive the system's requirements.

  14. Calcium Activities During Different Ion Exchange Separation Procedures

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Zhu, H.; Liu, Y.; Liu, F.; Zhang, C.; Sun, W.

    2014-12-01

Calcium is a major element that participates in many geological processes. Stable calcium isotopic compositions of natural geological samples provide a powerful tool for understanding these processes from the perspective of isotope geochemistry. With the development of modern instruments and chemical separation techniques, calcium isotopic compositions can be determined even more precisely, provided the column chemistry introduces no bias. Usually, calcium is separated from matrix elements on cation-exchange resin columns, and the associated chemical separation techniques appear robust. However, more detailed work is still needed on matrix effects and on calcium isotopic fractionation during column chemistry and elution. If calcium is run on TIMS instruments, interference effects are lower and more easily controlled, so the demands on the chemistry are relatively modest, but calcium fractionation on the filaments can be difficult to monitor. If calcium is run on MC-ICP-MS instruments, interference effects can be large and difficult to recognize and subtract, so the demands on the chemistry are much more critical for obtaining a true sample value, but instrument fractionation is easier to monitor. Here we investigate calcium activities on several kinds of cation-exchange resins under different column/acid conditions. We seek a good balance between recovery and interference effects in the column chemistry and intend to establish a better chemical separation procedure that satisfies the instrument requirements for calcium. Calcium isotopic fractionation on the column will also be discussed further, based on our previous and ongoing results.

  15. Development of a simplified procedure for cyclic structural analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.

    1984-01-01

    Development was extended of a simplified inelastic analysis computer program (ANSYMP) for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects can be calculated on the basis of stress relaxation at constant strain, creep at constant stress, or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials, and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite-element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite-element analysis.
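The abstract does not give ANSYMP's equations, but the general idea of estimating an inelastic stress-strain point from an elastic solution can be illustrated with a generic Neuber-type correction against a Ramberg-Osgood curve (hypothetical material constants E, K, n; this is a common simplified method of the same family, not necessarily ANSYMP's algorithm):

```python
import math

E, K, n = 200e3, 1200.0, 0.1         # hypothetical MPa material constants
sigma_e = 600.0                       # elastic (pseudo) stress at the notch

def ro_strain(sigma):
    """Ramberg-Osgood total strain: elastic part plus plastic part."""
    return sigma / E + (sigma / K) ** (1.0 / n)

# Neuber's rule: sigma * eps must equal sigma_e^2 / E; solve by bisection
# since sigma * ro_strain(sigma) is monotonically increasing.
target = sigma_e ** 2 / E
lo, hi = 0.0, sigma_e
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if mid * ro_strain(mid) < target:
        lo = mid
    else:
        hi = mid
sigma = 0.5 * (lo + hi)              # corrected inelastic stress
eps = ro_strain(sigma)               # corresponding total strain
```

The corrected stress falls below the elastic pseudo-stress while the strain rises above the elastic strain, which is the qualitative behavior a simplified inelastic analysis must reproduce at far lower cost than a nonlinear finite-element run.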

  16. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest is given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
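One of the discrete multivariate techniques in question, the KHAT (kappa) statistic computed from a classification error matrix, fits in a few lines; the error matrix below is made up for illustration:

```python
def kappa(matrix):
    """KHAT statistic from an error matrix
    (rows: map classes, columns: reference classes)."""
    n = sum(sum(row) for row in matrix)
    observed = sum(matrix[i][i] for i in range(len(matrix))) / n
    row_totals = [sum(row) for row in matrix]
    col_totals = [sum(col) for col in zip(*matrix)]
    chance = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - chance) / (1 - chance)

m = [[65, 4, 22],      # hypothetical 3-class error matrix
     [6, 81, 5],
     [0, 11, 85]]
overall = sum(m[i][i] for i in range(3)) / sum(map(sum, m))
```

Overall accuracy here is 231/279 (about 83%), while kappa (about 0.74) discounts the agreement expected by chance, which is why it underlies the accuracy-assessment procedures the report describes.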

  17. Reduction procedures for accurate analysis of MSX surveillance experiment data

    NASA Technical Reports Server (NTRS)

    Gaposchkin, E. Mike; Lane, Mark T.; Abbot, Rick I.

    1994-01-01

    Technical challenges of the Midcourse Space Experiment (MSX) science instruments require careful characterization and calibration of these sensors for analysis of surveillance experiment data. Procedures for reduction of Resident Space Object (RSO) detections will be presented which include refinement and calibration of the metric and radiometric (and photometric) data and calculation of a precise MSX ephemeris. Examples will be given which support the reduction, and these are taken from ground-test data similar in characteristics to the MSX sensors and from the IRAS satellite RSO detections. Examples to demonstrate the calculation of a precise ephemeris will be provided from satellites in similar orbits which are equipped with S-band transponders.

  18. Whole-House Energy Analysis Procedures for Existing Homes: Preprint

    SciTech Connect

    Hendron, R.

    2006-08-01

    This paper describes a proposed set of guidelines for analyzing the energy savings achieved by a package of retrofits or an extensive rehabilitation of an existing home. It also describes certain field test and audit methods that can help establish accurate building system performance characteristics that are needed for a meaningful simulation of whole-house energy use. Several sets of default efficiency values have been developed for older appliances that cannot be easily tested and for which published specifications are not readily available. These proposed analysis procedures are documented more comprehensively in NREL Technical Report TP-550-38238.

  19. User's operating procedures. Volume 2: Scout project financial analysis program

    NASA Technical Reports Server (NTRS)

    Harris, C. G.; Haris, D. K.

    1985-01-01

    A review is presented of the user's operating procedures for the Scout Project Automatic Data system, called SPADS. SPADS is the result of the past seven years of software development on a Prime mini-computer located at the Scout Project Office, NASA Langley Research Center, Hampton, Virginia. SPADS was developed as a single entry, multiple cross-reference data management and information retrieval system for the automation of Project office tasks, including engineering, financial, managerial, and clerical support. This volume, two (2) of three (3), provides the instructions to operate the Scout Project Financial Analysis program in data retrieval and file maintenance via the user friendly menu drivers.

  20. Beta2-agonist extraction procedures for chromatographic analysis.

    PubMed

    dos Ramos, F J

    2000-06-01

Different procedures are normally necessary to prepare sample matrices for the chromatographic determination of beta2-agonists. The present review covers sampling, pre-treatment and extraction/purification for urine, plasma, liver, meat, feeds, hair and milk powder as steps preceding chromatographic analysis of beta2-agonists. Six extraction/purification methodologies are reviewed in particular: liquid-liquid extraction, solid-phase extraction (SPE), matrix solid-phase dispersion, immunoaffinity chromatography, dialysis and supercritical fluid extraction. SPE is discussed in detail and five mechanisms are described: adsorption, apolar, polar, ion-exchange and mixed phase. A brief conclusion on the field is also outlined.

  1. 50 CFR 300.206 - Alternative procedures for IUU fishing activities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 50 Wildlife and Fisheries 11 2012-10-01 2012-10-01 false Alternative procedures for IUU fishing activities. 300.206 Section 300.206 Wildlife and Fisheries INTERNATIONAL FISHING AND RELATED ACTIVITIES... for IUU fishing activities. (a) These certification procedures may be applied to fish or fish...

  2. 50 CFR 300.206 - Alternative procedures for IUU fishing activities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 50 Wildlife and Fisheries 9 2011-10-01 2011-10-01 false Alternative procedures for IUU fishing activities. 300.206 Section 300.206 Wildlife and Fisheries INTERNATIONAL FISHING AND RELATED ACTIVITIES... for IUU fishing activities. (a) These certification procedures may be applied to fish or fish...

  3. Performance analysis of a generalized upset detection procedure

    NASA Technical Reports Server (NTRS)

    Blough, Douglas M.; Masson, Gerald M.

    1987-01-01

    A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data), and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm which analyzes the data blocks can be characterized in terms of the amount of time it requires to examine a given length data block to ascertain the existence of features/conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.
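A minimal sketch of the block capture-and-analysis loop (hypothetical sample vocabulary, plain Python); the linear analyzer characterized in the paper corresponds to a single O(n) pass over each captured block:

```python
BLOCK = 8                          # fixed block length (samples)
SAFE = {"OK", "IDLE"}              # hypothetical upset-free vocabulary

def capture_blocks(stream, size=BLOCK):
    """Repeatedly capture fixed-length data blocks from the monitored
    observation lines."""
    buf = []
    for sample in stream:
        buf.append(sample)
        if len(buf) == size:
            yield list(buf)
            buf.clear()

def linear_analyzer(block):
    """Linear-time scan: flag an upset if any sample leaves the
    predetermined upset-free set."""
    return any(s not in SAFE for s in block)

stream = ["OK"] * 10 + ["FAULT"] + ["OK"] * 5
upsets = [i for i, b in enumerate(capture_blocks(stream))
          if linear_analyzer(b)]
```

Detection latency in this sketch is at most one block length, which is the kind of quantity the paper's three performance measures (detection probability, expected false alarms, expected latency) formalize.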

  4. Guide to IDAP, Version 2: an interactive decision analysis procedure

    SciTech Connect

    Jusko, M.J.; Whitfield, R.G.

    1980-11-01

This document is intended to serve as both a programmer's and a user's guide to the current version of the IDAP, and to prompt interested individuals to make suggestions for its future development. The majority of the sections pertain to the main IDA program rather than to the IDAIN procedure. A brief discussion of the theory of decision analysis is presented, and the aspects of decision analysis relevant to the IDAP are discussed. A complete list and description of the commands used in the IDAP program is provided, including three complete examples; this section may be considered a user's guide to the IDAP. The programmer's guide to the IDAP discusses the various technical aspects of the programs and may be skipped by users not involved with programming the IDAP. A list of the error messages generated by the IDAP is presented. As the program is developed, error handling and messages will improve.

  5. Neutron activation analysis system

    DOEpatents

    Taylor, M.C.; Rhodes, J.R.

    1973-12-25

A neutron activation analysis system for monitoring a generally fluid medium, such as slurries, solutions, and fluidized powders, including two separate conduit loops for circulating fluid samples within range of the radiation sources and detectors, is described. Associated with the first loop is a neutron source that emits a high flux of slow and thermal neutrons. The second loop employs a fast neutron source, the flux from which is substantially free of thermal neutrons. Adjacent to both loops are gamma counters for spectrographic determination of the fluid constituents. Other gamma sources and detectors are arranged across a portion of each loop for determining the fluid density. (Official Gazette)

  6. Three steps to safety: developing procedures for active shooters.

    PubMed

    Morris, Lisa W

    2014-01-01

Every second counts once gunshots are heard in the workplace. Close the office door, turn out the lights and turn the mobile phone to 'silent' is the standard mantra for what is expected in the response effort; however, is that enough? In practice, this mantra may fall short of what emergency management practitioners would regard as best practice for optimal life safety. This paper offers options for lockdown preparedness and response, addressing internal lockdown from the moment shots are fired. Recommendations for the creation of a lockdown plan, building assessment surveys and a controlled, simulated exercise are addressed to raise awareness of response methods and reduce overall response time. The procedures suggested in this paper will optimise training efforts, using the community's standard emergency operating procedures in response to workplace violence, to minimise loss of life.

  7. A numerical procedure for analysis of finite rate reacting flows

    NASA Technical Reports Server (NTRS)

    Shang, H. M.; Chen, Y. S.; Chen, Z. J.; Chen, C. P.; Wang, T. S.

    1993-01-01

Combustion processes in rocket propulsion systems are characterized by the existence of multiple, vastly differing time and length scales, as well as flow speeds spanning a wide range of Mach numbers. The chemical kinetics processes in the highly active reaction zone are characterized by much smaller scales than the fluid convective and diffusive time scales. An operator-splitting procedure for transient finite rate chemistry problems has been developed using a pressure-based method, which can be applied to flows at all speeds without difficulty. Splitting the chemical kinetics terms from the fluid-mechanical terms of the species equations ameliorated the difficulties associated with the disparate time scales and the stiffness of the equation set describing highly exothermic combustion. An efficient ordinary differential equation (ODE) solver was used to integrate the effective chemical source terms over the residence time at each grid cell. One- and two-dimensional reacting flow calculations were carried out to demonstrate and verify the current procedure. Chemical kinetics with different degrees of nonlinearity were also incorporated to test the robustness and generality of the proposed method.

  8. A numerical procedure for analysis of finite rate reacting flows

    NASA Astrophysics Data System (ADS)

    Shang, H. M.; Chen, Y. S.; Chen, Z. J.; Chen, C. P.; Wang, T. S.

    1993-07-01

Combustion processes in rocket propulsion systems are characterized by the existence of multiple, vastly differing time and length scales, as well as flow speeds spanning a wide range of Mach numbers. The chemical kinetics processes in the highly active reaction zone are characterized by much smaller scales than the fluid convective and diffusive time scales. An operator-splitting procedure for transient finite rate chemistry problems has been developed using a pressure-based method, which can be applied to flows at all speeds without difficulty. Splitting the chemical kinetics terms from the fluid-mechanical terms of the species equations ameliorated the difficulties associated with the disparate time scales and the stiffness of the equation set describing highly exothermic combustion. An efficient ordinary differential equation (ODE) solver was used to integrate the effective chemical source terms over the residence time at each grid cell. One- and two-dimensional reacting flow calculations were carried out to demonstrate and verify the current procedure. Chemical kinetics with different degrees of nonlinearity were also incorporated to test the robustness and generality of the proposed method.
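The operator-splitting idea can be sketched on a toy problem (plain Python; a linear relaxation source stands in for finite-rate chemistry and first-order upwind for convection, so this illustrates the splitting only, not the paper's pressure-based solver): each time step advances the fluid operator explicitly, then integrates the stiff source over the full step, so the convection step never has to resolve the chemical time scale 1/k << dt.

```python
import math

nx, dt, a, k, u_eq = 20, 0.05, 1.0, 200.0, 1.0   # k*dt = 10: stiff source
dx = 1.0 / nx
u = [0.0] * nx
u[0] = 1.0                                       # fixed inflow value

def advect(u):
    """Explicit first-order upwind transport step (fluid operator)."""
    c = a * dt / dx
    return [u[0]] + [u[i] - c * (u[i] - u[i - 1]) for i in range(1, nx)]

def react(u):
    """Stiff source du/dt = k*(u_eq - u), integrated exactly over dt;
    stands in for the ODE sub-integration of chemical source terms."""
    decay = math.exp(-k * dt)
    return [u_eq + (ui - u_eq) * decay for ui in u]

for _ in range(40):                              # operator-split time loop
    u = react(advect(u))
```

Because the source is integrated analytically (or, in general, by a dedicated stiff ODE solver), the global time step is set by the convective CFL condition rather than by the much smaller chemical time scale.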

  9. A MEMBRANE FILTER PROCEDURE FOR ASSAYING CYTOTOXIC ACTIVITY IN HETEROTROPHIC BACTERIA ISOLATED FROM DRINKING WATER

    EPA Science Inventory

    Cytotoxic activity assays of Gram-negative, heterotrophic bacteria are often laborious and time consuming. The objective of this study was to develop in situ procedures for testing potential cytotoxic activities of heterotrophic bacteria isolated from drinking water systems. Wate...

  10. Dynamic response analysis procedure for landfills with geosynthetic liners

    SciTech Connect

    Yegian, M.K.; Harb, J.N.; Kadakal, U.

    1998-10-01

The dynamic response of geosynthetic interfaces commonly encountered in municipal solid waste landfills was investigated using a shaking table facility. The force-slip relationships for the tested interfaces showed almost rigid and then plastic deformation, where the maximum shear force transmitted through the interface increased slightly with increasing slip. The force-slip relationships were modeled with equivalent stiffness and damping ratios. These equivalent parameters were established as a function of slip displacement to account for the nonlinear behavior of the interfaces. Using the equivalent stiffness and damping, the dynamic properties of an equivalent soil layer were established such that the dynamic response of the equivalent soil layer is similar to that of the geosynthetic interface it represents. The purpose of this representation was to allow the modeling of geosynthetic interfaces in wave propagation analyses, such as SHAKE analyses. The properties of the equivalent soil layer were validated by comparing the measured dynamic response of a rigid block placed on geosynthetics with that computed using the SHAKEW program and the properties of the equivalent soil layer developed. A procedure for analysis of the dynamic response of landfills with geosynthetic liners is proposed.

  11. Synfuel program analysis. Volume I. Procedures-capabilities

    SciTech Connect

    Muddiman, J. B.; Whelan, J. W.

    1980-07-01

This is the first of the two volumes describing the analytic procedures and resulting capabilities developed by Resource Applications (RA) for examining the economic viability, public costs, and national benefits of alternative synfuel projects and integrated programs. This volume is intended for Department of Energy (DOE) and Synthetic Fuel Corporation (SFC) program management personnel and includes a general description of the costing, venture, and portfolio models with enough detail for the reader to be able to specify cases and interpret outputs. It also contains an explicit description (with examples) of the types of results which can be obtained when applied to: the analysis of individual projects; the analysis of input uncertainty, i.e., risk; and the analysis of portfolios of such projects, including varying technology mixes and buildup schedules. In all cases, the objective is to obtain, on the one hand, comparative measures of private investment requirements and expected returns (under differing public policies) as they affect the private decision to proceed, and, on the other, public costs and national benefits as they affect public decisions to participate (in what form, in what areas, and to what extent).

  12. Detailed Analysis of Peri-Procedural Strokes in Patients Undergoing Intracranial Stenting in SAMMPRIS

    PubMed Central

    Fiorella, David; Derdeyn, Colin P; Lynn, Michael J; Barnwell, Stanley L; Hoh, Brian L.; Levy, Elad I.; Harrigan, Mark R.; Klucznik, Richard P.; McDougall, Cameron G.; Pride, G. Lee; Zaidat, Osama O.; Lutsep, Helmi L.; Waters, Michael F.; Hourihane, J. Maurice; Alexandrov, Andrei V.; Chiu, David; Clark, Joni M.; Johnson, Mark D.; Torbey, Michel T.; Rumboldt, Zoran; Cloft, Harry J.; Turan, Tanya N.; Lane, Bethany F.; Janis, L. Scott; Chimowitz, Marc I.

    2012-01-01

    Background and Purpose Enrollment in the SAMMPRIS trial was halted due to the high risk of stroke or death within 30 days of enrollment in the percutaneous transluminal angioplasty and stenting (PTAS) arm relative to the medical arm. This analysis focuses on the patient and procedural factors that may have been associated with peri-procedural cerebrovascular events in the trial. Methods Bivariate and multivariate analyses were performed to evaluate whether patient and procedural variables were associated with cerebral ischemic or hemorrhagic events occurring within 30 days of enrollment (termed peri-procedural) in the PTAS arm. Results Of 224 patients randomized to PTAS, 213 underwent angioplasty alone (n=5) or with stenting (n=208). Of these, 13 had hemorrhagic strokes (7 parenchymal, 6 subarachnoid), 19 had ischemic stroke, and 2 had cerebral infarcts with temporary signs (CITS) within the peri-procedural period. Ischemic events were categorized as perforator occlusions (13), embolic (4), mixed perforator and embolic (2), and delayed stent occlusion (2). Multivariate analyses showed that higher percent stenosis, lower modified Rankin score, and clopidogrel load associated with an activated clotting time above the target range were associated (p ≤ 0.05) with hemorrhagic stroke. Non-smoking, basilar artery stenosis, diabetes, and older age were associated (p ≤ 0.05) with ischemic events. Conclusions Peri-procedural strokes in SAMMPRIS had multiple causes with the most common being perforator occlusion. Although risk factors for peri-procedural strokes could be identified, excluding patients with these features from undergoing PTAS to lower the procedural risk would limit PTAS to a small subset of patients. Moreover, given the small number of events, the present data should be used for hypothesis generation rather than to guide patient selection in clinical practice. PMID:22984008

  13. Bias in Student Survey Findings from Active Parental Consent Procedures

    ERIC Educational Resources Information Center

    Shaw, Thérèse; Cross, Donna; Thomas, Laura T.; Zubrick, Stephen R.

    2015-01-01

    Increasingly, researchers are required to obtain active (explicit) parental consent prior to surveying children and adolescents in schools. This study assessed the potential bias present in a sample of actively consented students, and in the estimates of associations between variables obtained from this sample. Students (n = 3496) from 36…

  14. Procedures manual for the ORNL Radiological Survey Activities (RASA) Program

    SciTech Connect

    Myrick, T.E.; Berven, B.A.; Cottrell, W.D.; Goldsmith, W.A.; Haywood, F.F.

    1987-04-01

    The portion of the radiological survey program performed by ORNL is the subject of this Procedures Manual. The RASA group of the Health and Safety Research Division (HASRD) at ORNL is responsible for the planning, conducting, and reporting of the results of radiological surveys at specified sites and associated vicinity properties. The results of these surveys are used by DOE in determining the need for and extent of remedial actions. Upon completion of the necessary remedial actions, the ORNL-RASA group or other OOS contractor may be called upon to verify the effectiveness of the remedial action. Information from these postremedial action surveys is included as part of the data base used by DOE in certifying a site for unrestricted use.

  15. Phosphorus Determination by Derivative Activation Analysis: A Multifaceted Radiochemical Application.

    ERIC Educational Resources Information Center

    Kleppinger, E. W.; And Others

    1984-01-01

    Although determination of phosphorus is important in biology, physiology, and environmental science, traditional gravimetric and colorimetric methods are cumbersome and lack the requisite sensitivity. Therefore, a derivative activation analysis method is suggested. Background information, procedures, and results are provided. (JN)

  16. Patent Network Analysis and Quadratic Assignment Procedures to Identify the Convergence of Robot Technologies

    PubMed Central

    Lee, Woo Jin; Lee, Won Kyung

    2016-01-01

Because of the remarkable developments in robotics in recent years, technological convergence has been active in this area. We focused on finding patterns of convergence within robot technology using network analysis of patents in both the USPTO and KIPO. To identify the variables that affect convergence, we used quadratic assignment procedures (QAP). From our analysis, we observed the patent network ecology related to convergence and found technologies that have great potential to converge with other robotics technologies. The results of our study are expected to contribute to setting up convergence-based R&D policies for robotics, which can lead to new innovation. PMID:27764196

  17. Computing the surveillance error grid analysis: procedure and examples.

    PubMed

    Kovatchev, Boris P; Wakeman, Christian A; Breton, Marc D; Kost, Gerald J; Louie, Richard F; Tran, Nam K; Klonoff, David C

    2014-07-01

The surveillance error grid (SEG) analysis is a tool for analysis and visualization of blood glucose monitoring (BGM) errors, based on the opinions of 206 diabetes clinicians who rated 4 distinct treatment scenarios. Resulting from this large-scale inquiry is a matrix of 337 561 risk ratings, 1 for each pair of (reference, BGM) readings ranging from 20 to 580 mg/dl. The computation of the SEG is therefore complex and in need of automation. The SEG software introduced in this article automates the task of assigning a degree of risk to each data point for a set of measured and reference blood glucose values so that the data can be distributed into 8 risk zones. The software's 2 main purposes are to (1) distribute a set of BGM data into 8 risk zones ranging from none to extreme and (2) present the data in a color-coded display to promote visualization. Besides aggregating the data into 8 zones corresponding to levels of risk, the SEG computes the number and percentage of data pairs in each zone and the number/percentage of data pairs above/below the diagonal line in each zone, which are associated with BGM errors creating risks for hypo- or hyperglycemia, respectively. To illustrate the action of the SEG software we first present computer-simulated data stratified along error levels defined by ISO 15197:2013. This allows the SEG to be linked to this established standard. Further illustration of the SEG procedure is done with a series of previously published data, which reflect the performance of BGM devices and test strips under various environmental conditions. We conclude that the SEG software is a useful addition to the SEG analysis presented in this journal, developed to assess the magnitude of clinical risk from analytically inaccurate data in a variety of high-impact situations such as intensive care and disaster settings. PMID:25562887

  18. Time/Loss Analysis in the development and evaluation of emergency response procedures

    SciTech Connect

    Francis, A.A.

    1994-08-01

    Time/Loss Analysis (T/LA) provides a standard for conducting technically consistent and objective evaluations of emergency response planning and procedures. T/LA is also a sound tool for evaluating the performance of safeguards and procedures.

  19. Forensic analysis of Salvia divinorum using multivariate statistical procedures. Part I: discrimination from related Salvia species.

    PubMed

    Willard, Melissa A Bodnar; McGuffin, Victoria L; Smith, Ruth Waddell

    2012-01-01

    Salvia divinorum is a hallucinogenic herb that is internationally regulated. In this study, salvinorin A, the active compound in S. divinorum, was extracted from S. divinorum plant leaves using a 5-min extraction with dichloromethane. Four additional Salvia species (Salvia officinalis, Salvia guaranitica, Salvia splendens, and Salvia nemorosa) were extracted using this procedure, and all extracts were analyzed by gas chromatography-mass spectrometry. Differentiation of S. divinorum from other Salvia species was successful based on visual assessment of the resulting chromatograms. To provide a more objective comparison, the total ion chromatograms (TICs) were subjected to principal components analysis (PCA). Prior to PCA, the TICs were subjected to a series of data pretreatment procedures to minimize non-chemical sources of variance in the data set. Successful discrimination of S. divinorum from the other four Salvia species was possible based on visual assessment of the PCA scores plot. To provide a numerical assessment of the discrimination, a series of statistical procedures such as Euclidean distance measurement, hierarchical cluster analysis, Student's t tests, Wilcoxon rank-sum tests, and Pearson product moment correlation were also applied to the PCA scores. The statistical procedures were then compared to determine the advantages and disadvantages for forensic applications.
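Two of the steps named in the abstract, a pretreatment that removes non-chemical variance and the Euclidean distance measurement applied to the scores, can be illustrated with a minimal sketch. The chromatograms below are tiny invented vectors (real TICs have thousands of scans), and total-area normalization is just one common pretreatment choice.

```python
# Minimal sketch of chromatogram pretreatment plus Euclidean distance:
# normalize each total ion chromatogram (TIC) to unit total area so that
# injection-volume differences (a non-chemical source of variance) cancel,
# then compare profiles by Euclidean distance. TIC values are invented.
import math

def normalize(tic):
    """Scale a chromatogram so its total area (sum of intensities) is 1."""
    area = sum(tic)
    return [x / area for x in tic]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical TICs: the two "S. divinorum" runs share a dominant
# late-eluting peak; the second is the same profile at ~2x injection.
tics = {
    "S. divinorum":   [1, 2, 1, 1, 20, 3],
    "S. divinorum 2": [2, 4, 2, 2, 41, 6],
    "S. officinalis": [5, 9, 6, 2, 1, 1],
}
norm = {k: normalize(v) for k, v in tics.items()}

d_same = euclidean(norm["S. divinorum"], norm["S. divinorum 2"])
d_diff = euclidean(norm["S. divinorum"], norm["S. officinalis"])
```

After normalization the replicate runs sit close together while the other species stays far away, which is the property PCA and the distance-based tests in the study exploit.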

  20. Active Reading Procedures for Moderating the Effects of Poor Highlighting

    ERIC Educational Resources Information Center

    Gier, Vicki S.; Herring, Daniel; Hudnell, Jason; Montoya, Jodi; Kreiner, David S.

    2010-01-01

    We investigated two active reading techniques intended to eliminate the negative effect on reading comprehension of preexisting, inappropriate highlighting. College students read passages in three highlighting conditions: no highlighting, appropriate highlighting, and inappropriate highlighting. In Experiment 1, 30 students read the passages while…

  1. Preamplification Procedure for the Analysis of Ancient DNA Samples

    PubMed Central

    Del Gaudio, Stefania; Cirillo, Alessandra; Di Bernardo, Giovanni; Galderisi, Umberto; Thanassoulas, Theodoros; Pitsios, Theodoros; Cipollaro, Marilena

    2013-01-01

In ancient DNA studies the low amount of endogenous DNA represents a limiting factor that often hampers successful results. In this study we extracted the DNA from nine human skeletal remains of different ages found in the Byzantine cemetery of Abdera Halkidiki and in the medieval cemetery of St. Spiridion in Rhodes (Greece). Real-time quantitative polymerase chain reaction (qPCR) was used to detect the presence of PCR inhibitors in the extracts and to estimate the DNA content. While mitochondrial DNA was detected in all samples, amplification of nuclear targets, such as amelogenin and the M470V polymorphism of the transmembrane conductance regulator gene, yielded positive results in one case only. In an effort to improve amplification success, we applied, for the first time in ancient DNA, a preamplification strategy based on TaqMan PreAmp Master Mix. A comparison between results obtained from nonpreamplified and preamplified samples is reported. Our data, even if preliminary, show that the TaqMan PreAmp procedure may improve the sensitivity of qPCR analysis. PMID:24187523

  2. Antimicrobial activity of cationic peptides in endodontic procedures

    PubMed Central

    Winfred, Sofi Beaula; Meiyazagan, Gowri; Panda, Jiban J.; Nagendrababu, Venkateshbabu; Deivanayagam, Kandaswamy; Chauhan, Virander S.; Venkatraman, Ganesh

    2014-01-01

Objectives: The present study aimed to investigate the antimicrobial and biofilm inhibition activity of synthetic antimicrobial peptides (AMPs) against microbes such as Enterococcus faecalis, Staphylococcus aureus, and Candida albicans, which are involved in endodontic infections. Materials and Methods: An agar diffusion test was done to determine the activity of peptides. The morphological changes in E. faecalis and reduction in biofilm formation after treatment with peptides were observed using scanning electron microscope. The efficacy of peptides using an ex vivo dentinal model was determined by polymerase chain reaction and confocal laser scanning microscopy. Platelet aggregation was done to determine the biocompatibility of peptides. Results: Among 11 peptides, two of the amphipathic cationic peptides were found to be highly active against E. faecalis, S. aureus, and C. albicans. Efficacy results using the dentinal tubule model showed significant reduction in microbial load at 400 μm depth. The peptides were also biocompatible. Conclusion: These results suggest that synthetic AMPs have the potential to be developed as antibacterial agents against microorganisms involved in dental infections and thus could prevent the spread and persistence of endodontic infections, improving treatment outcomes and tooth preservation. PMID:24966779

  3. 12 CFR 225.27 - Procedures for determining scope of nonbanking activities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... THE FEDERAL RESERVE SYSTEM BANK HOLDING COMPANIES AND CHANGE IN BANK CONTROL (REGULATION Y) Regulations Nonbanking Activities and Acquisitions by Bank Holding Companies § 225.27 Procedures for... with the provisions of § 262.3(e) of the Board's Rules of Procedure (12 CFR 262.3(e))....

  4. 13 CFR 101.402 - What procedures apply to the selection of SBA programs and activities?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false What procedures apply to the selection of SBA programs and activities? 101.402 Section 101.402 Business Credit and Assistance SMALL BUSINESS ADMINISTRATION ADMINISTRATION Intergovernmental Partnership § 101.402 What procedures apply to...

  5. 75 FR 18211 - Agency Information Collection Activities; Proposed Collection; Comment Request; Procedures for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-09

    ... principles in food processing. Further, the burdens have been estimated using typical small seafood... HUMAN SERVICES Food and Drug Administration Agency Information Collection Activities; Proposed Collection; Comment Request; Procedures for the Safe and Sanitary Processing and Importing of Fish...

  6. A new preoxygenation procedure for extravehicular activity (EVA)

    NASA Technical Reports Server (NTRS)

    Webb, J. T.; Pilmanis, A. A.

    1998-01-01

    A 10.2 psi staged-decompression schedule or a 4-hour preoxygenation at 14.7 psi is required prior to extravehicular activity (EVA) to reduce decompression sickness (DCS) risk. Results of recent research at the Air Force Research Laboratory (AFRL) showed that a 1-hour resting preoxygenation followed by a 4-hour, 4.3 psi exposure resulted in 77% DCS risk (N=26), while the same profile beginning with 10 min of exercise at 75% of VO2peak during preoxygenation reduced the DCS risk to 42% (P<.03; N=26). A 4-hour preoxygenation without exercise followed by the 4.3 psi exposure resulted in 47% DCS risk (N=30). The 1-hour preoxygenation with exercise and the 4-hour preoxygenation without exercise results were not significantly different. Elimination of either 3 hours of preoxygenation or 12 hours of staged-decompression are compelling reasons to consider incorporation of exercise-enhanced preoxygenation.
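The abstract's key comparison (77% DCS with resting preoxygenation vs. 42% with exercise, both N=26, P<.03) can be reproduced approximately with a two-sided Fisher's exact test. The counts below (20/26 vs. 11/26) are inferred from the rounded percentages, so this is an illustration of the test, not a reanalysis of the study data.

```python
# Two-sided Fisher's exact test on the approximate event counts implied
# by the abstract: ~20 of 26 DCS cases with resting preoxygenation vs.
# ~11 of 26 with exercise-enhanced preoxygenation. Counts are inferred
# from rounded percentages (an assumption, not the published raw data).
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """2x2 table [[a, b], [c, d]]: sum the probabilities of all tables
    with the same margins that are no more likely than the observed one."""
    row1 = a + b
    col1 = a + c
    n = a + b + c + d

    def p_table(x):  # P(top-left cell = x) under the hypergeometric
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_table(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

p = fisher_exact_two_sided(20, 6, 11, 15)
```

With these assumed counts the p-value comes out below 0.05, consistent with the P<.03 reported in the abstract.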

  7. A new preoxygenation procedure for extravehicular activity (EVA).

    PubMed

    Webb, J T; Pilmanis, A A

    1998-01-01

    A 10.2 psi staged-decompression schedule or a 4-hour preoxygenation at 14.7 psi is required prior to extravehicular activity (EVA) to reduce decompression sickness (DCS) risk. Results of recent research at the Air Force Research Laboratory (AFRL) showed that a 1-hour resting preoxygenation followed by a 4-hour, 4.3 psi exposure resulted in 77% DCS risk (N=26), while the same profile beginning with 10 min of exercise at 75% of VO2peak during preoxygenation reduced the DCS risk to 42% (P<.03; N=26). A 4-hour preoxygenation without exercise followed by the 4.3 psi exposure resulted in 47% DCS risk (N=30). The 1-hour preoxygenation with exercise and the 4-hour preoxygenation without exercise results were not significantly different. Elimination of either 3 hours of preoxygenation or 12 hours of staged-decompression are compelling reasons to consider incorporation of exercise-enhanced preoxygenation.

  8. The Analysis of Rates of Naval Compensation by the Use of a Structured Job Analysis Procedure.

    ERIC Educational Resources Information Center

    Harris, Alma F.; McCormick, Ernest J.

    The study deals with the experimental application of a structured job analysis procedure to enlisted and officer billets in the Navy, with particular reference to its potential use in relating naval compensation for billet incumbents to compensation for civilian jobs with similar characteristics, and in assessing its utility for allocating naval…

  9. Neutron activation analysis of Etruscan pottery

    SciTech Connect

Whitehead, J.; Silverman, A.; Ouellet, C.G.; Clark, D.D.; Hossain, T.Z.

    1992-07-01

    Neutron activation analysis (NAA) has been widely used in archaeology for compositional analysis of pottery samples taken from sites of archaeological importance. Elemental profiles can determine the place of manufacture. At Cornell, samples from an Etruscan site near Siena, Italy, are being studied. The goal of this study is to compile a trace element concentration profile for a large number of samples. These profiles will be matched with an existing data bank in an attempt to understand the place of origin for these samples. The 500 kW TRIGA reactor at the Ward Laboratory is used to collect NAA data for these samples. Experiments were done to set a procedure for the neutron activation analysis with respect to sample preparation, selection of irradiation container, definition of activation and counting parameters and data reduction. Currently, we are able to analyze some 27 elements in samples of mass 500 mg with a single irradiation of 4 hours and two sequences of counting. Our sensitivity for many of the trace elements is better than 1 ppm by weight under the conditions chosen. In this talk, details of our procedure, including quality assurance as measured by NIST standard reference materials, will be discussed. In addition, preliminary results from data treatment using cluster analysis will be presented. (author)
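The data-reduction step behind trace-element profiles like these is often the relative (comparator) method: decay-correct the gamma peak counts of sample and standard to the end of irradiation, then scale the standard's known concentration. The sketch below is a generic textbook formula with invented numbers, not necessarily the exact reduction used at the Ward Laboratory.

```python
# Sketch of the relative (comparator) method of NAA data reduction:
# correct measured peak counts for decay before and during counting,
# then ratio sample against a co-irradiated standard of known content.
# Half-life, counts, masses, and times are invented for illustration.
import math

def decay_corrected_rate(counts, half_life_s, t_decay_s, t_count_s):
    """Counts -> activity at end of irradiation, correcting for decay
    before counting and for decay during the counting interval."""
    lam = math.log(2) / half_life_s
    decay_factor = math.exp(-lam * t_decay_s)
    counting_factor = (1 - math.exp(-lam * t_count_s)) / lam
    return counts / (decay_factor * counting_factor)

def concentration_ppm(sample, standard, half_life_s=3600.0):
    """Comparator equation: c_sample = c_std * (A_sample/A_std) * (m_std/m_sample).
    The 1 h half-life default is a hypothetical nuclide."""
    a_s = decay_corrected_rate(sample["counts"], half_life_s,
                               sample["t_decay_s"], sample["t_count_s"])
    a_r = decay_corrected_rate(standard["counts"], half_life_s,
                               standard["t_decay_s"], standard["t_count_s"])
    return standard["ppm"] * (a_s / a_r) * (standard["mass_g"] / sample["mass_g"])

sample = {"counts": 5000, "mass_g": 0.5, "t_decay_s": 600, "t_count_s": 1800}
standard = {"counts": 20000, "mass_g": 0.1, "t_decay_s": 600,
            "t_count_s": 1800, "ppm": 10.0}
c = concentration_ppm(sample, standard)
```

Because sample and standard share the same decay and counting times here, the corrections cancel and the result reduces to the count ratio scaled by mass and standard concentration.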

  10. Inverse procedure for high-latitude ionospheric electrodynamics: Analysis of satellite-borne magnetometer data

    NASA Astrophysics Data System (ADS)

    Matsuo, Tomoko; Knipp, Delores J.; Richmond, Arthur D.; Kilcommons, Liam; Anderson, Brian J.

    2015-06-01

    This paper presents an analysis of data from the magnetometers on board the Defense Meteorological Satellite Program (DMSP) F-15, F-16, F-17, and F-18 satellites and the Iridium satellite constellation, using an inverse procedure for high-latitude ionospheric electrodynamics, during the period of 29-30 May 2010. The Iridium magnetometer data are made available through the Active Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) program. The method presented here is built upon the assimilative mapping of ionospheric electrodynamics procedure but with a more complete treatment of the prior model uncertainty to facilitate an optimal inference of complete polar maps of electrodynamic variables from irregularly distributed observational data. The procedure can provide an objective measure of uncertainty associated with the analysis. The cross-validation analysis, in which the DMSP data are used as independent validation data sets, suggests that the procedure yields the spatial prediction of DMSP perturbation magnetic fields from AMPERE data alone with a median discrepancy of 30-50 nT. Discrepancies larger than 100 nT are seen in about 20% of total samples, whose location and magnitude are generally consistent with the previously identified discrepancy between DMSP and AMPERE data sets. Resulting field-aligned current (FAC) patterns exhibit more distinct spatial patterns without spurious high-frequency oscillatory features in comparison to the FAC products provided by AMPERE. Maps of the toroidal magnetic potential and FAC estimated from both AMPERE and DMSP data under four distinctive interplanetary magnetic field (IMF) conditions during a magnetic cloud event demonstrate the IMF control of high-latitude electrodynamics and the opportunity for future scientific investigation.

  11. Element-by-element Solution Procedures for Nonlinear Structural Analysis

    NASA Technical Reports Server (NTRS)

    Hughes, T. J. R.; Winget, J. M.; Levit, I.

    1984-01-01

    Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.

  12. Continuous monitoring, automated analysis, and sampling procedures. [Review (63 references)

    SciTech Connect

    Pitt, W.W. Jr.

    1981-06-01

This article emphasizes the need for a well-documented quality control system in waste water monitoring and sampling procedures. The US EPA has continued its strong emphasis on effluent monitoring and has published a list of 155 organic chemicals and 23 plastic or synthetic materials industries for which it proposed to require monitoring the process waste water under the Clean Water Act. (KRM)

  13. Application of modified in vitro screening procedure for identifying herbals possessing sulfonylurea-like activity.

    PubMed

    Rotshteyn, Y; Zito, S W

    2004-08-01

We describe here the application of a modified in vitro procedure for identifying herbs potentially possessing sulfonylurea-like activity. The procedure consists of the combination of an SUR1 receptor binding assay and an insulin secretion assay in cultures of HIT-T15 cells. This procedure could be used as an initial step in identifying new safe and efficacious agents for the management of Type II diabetes. The application of this screening procedure to a set of selected herbs produced results that were consistent with the previously reported properties of those herbs. The collected data suggest that the hypoglycemic properties of bitter melon (Momordica charantia, Linn., Family Cucurbitaceae), cerasse (Momordica charantia, Linn. wild variety, Family Cucurbitaceae) and American ginseng (Panax quinquefolius, Linn., Family Araliaceae) are at least partially due to their sulfonylurea-like activity.

  14. The Risky Situation: A Procedure for Assessing the Father-Child Activation Relationship

    ERIC Educational Resources Information Center

    Paquette, Daniel; Bigras, Marc

    2010-01-01

    Initial validation data are presented for the Risky Situation (RS), a 20-minute observational procedure designed to assess the father-child activation relationship with children aged 12-18 months. The coding grid, which is simple and easy to use, allows parent-child dyads to be classified into three categories and provides an activation score. By…

  15. NASTRAN/FLEXSTAB procedure for static aeroelastic analysis

    NASA Technical Reports Server (NTRS)

    Schuster, L. S.

    1984-01-01

Presented is a procedure for using the FLEXSTAB External Structural Influence Coefficients (ESIC) computer program to produce the structural data necessary for the FLEXSTAB Stability Derivatives and Static Stability (SD&SS) program. The SD&SS program computes trim state, stability derivatives, and pressure and deflection data for a flexible airplane having a plane of symmetry. The procedure uses a NASTRAN finite-element structural model as the source of structural data in the form of flexibility matrices. Selection of a set of degrees of freedom, definition of structural nodes and panels, reordering and reformatting of the flexibility matrix, and redistribution of existing point mass data are among the topics discussed. Also discussed are boundary conditions and the NASTRAN substructuring technique.

  16. Analysis of the control exerted by a complex cooperation procedure.

    PubMed

    Hake, D F; Vukelich, R

    1973-01-01

    The study examined the effects of the availability of a non-cooperative response on cooperative responding when cooperation did not have to result in an equal distribution of work or reinforcers. Also, an attempt was made to determine if the cooperative responding was under the control of the cooperation procedure. Pairs of institutionalized retardates were tested in full view of each other. For each subject, reinforcers (money) were contingent upon responses on each of two panels: (1) a matching panel for working matching-to-sample problems, and (2) a sample panel for producing the sample stimulus. The matching panels of the two subjects were 6 m apart, but a subject's sample panel could be placed at different distances from his matching panel. For each subject, either his own or his partner's sample panel could be nearest his matching panel such that less walking was required to reach one sample panel than the other. Subjects could work either individually, by producing their own sample stimulus, or cooperatively, by producing the sample stimulus for their partner. Subjects selected whichever solution involved the least amount of walking. The importance of testing for control by the cooperation procedure was indicated by the findings that cooperative-like responses were not always under the control of the cooperation procedure. PMID:4706235

  18. Computer-based procedure for field activities: Results from three evaluations at nuclear power plants

    SciTech Connect

Oxstrand, Johanna; Bly, Aaron; LeBlanc, Katya

    2014-09-01

Nearly all activities that involve human interaction with the systems of a nuclear power plant are guided by procedures. The paper-based procedures (PBPs) currently used by industry have a demonstrated history of ensuring safety; however, improving procedure use could yield tremendous savings in increased efficiency and safety. One potential way to improve procedure-based activities is through the use of computer-based procedures (CBPs). Computer-based procedures provide the opportunity to incorporate context-driven job aids, such as drawings, photos, just-in-time training, etc., into the CBP system. One obvious advantage of this capability is reducing the time spent tracking down the applicable documentation. Additionally, human performance tools can be integrated in the CBP system in such a way that it helps the worker focus on the task rather than the tools. Some tools can be completely incorporated into the CBP system, such as pre-job briefs, placekeeping, correct component verification, and peer checks. Other tools can be partly integrated in a fashion that reduces the time and labor required, such as concurrent and independent verification. Another benefit of CBPs compared to PBPs is dynamic procedure presentation. PBPs are static documents, which limits the degree to which the information presented can be tailored to the task and conditions when the procedure is executed. The CBP system could be configured to display only the relevant steps based on operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) will guide the user down the path of relevant steps based on the current conditions. This feature will reduce the user’s workload and inherently reduce the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. As part of the Department of Energy’s (DOE) Light Water Reactors Sustainability Program…
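The context-sensitive presentation described in this abstract amounts to filtering procedure steps by applicability conditions. A toy sketch of that idea, with an invented data model and step texts (nothing here reflects the actual CBP system):

```python
# Toy sketch of context-sensitive procedure presentation: each step
# declares the plant modes in which it applies, and the CBP view shows
# only steps relevant to the current mode, so inapplicable steps never
# need to be marked "N/A" by hand. Steps and modes are invented.

STEPS = [
    {"id": 1, "text": "Verify pump A is running", "modes": {"power", "startup"}},
    {"id": 2, "text": "Open valve V-101",         "modes": {"startup"}},
    {"id": 3, "text": "Log generator output",     "modes": {"power"}},
    {"id": 4, "text": "Confirm shutdown cooling", "modes": {"shutdown"}},
]

def relevant_steps(steps, plant_mode):
    """Return only the steps applicable in the current operating mode."""
    return [s for s in steps if plant_mode in s["modes"]]

visible = relevant_steps(STEPS, "power")
```

In a real system the applicability predicate would also consult plant status and the task at hand, but the filtering principle is the same.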

  19. Activation analysis using Cornell TRIGA

    SciTech Connect

    Hossain, Tim Z.

    1994-07-01

A major use of the Cornell TRIGA is for activation analysis. Over the years, many varieties of samples have been analyzed from a number of fields of interest, ranging from geology and archaeology to textiles. More recently, the analysis has been extended to high-technology materials for applications in optical and semiconductor devices. Trace analysis in high-purity materials like Si wafers has been the focus in many instances, while in others the analysis of major/minor components was the goal. These analyses have been done using the delayed mode. Results from recent measurements in semiconductors and other materials will be presented. In addition, the near-future capability of prompt gamma activation analysis using the Cornell cold neutron beam will be discussed. (author)

  20. Standardizing Activation Analysis: New Software for Photon Activation Analysis

    SciTech Connect

    Sun, Z. J.; Wells, D.; Green, J.; Segebade, C.

    2011-06-01

Photon Activation Analysis (PAA) of environmental, archaeological and industrial samples requires extensive data analysis that is susceptible to error. For the purpose of saving time and manpower and minimizing error, a computer program was designed, built and implemented using SQL, Access 2007 and asp.net technology to automate this process. Based on the peak information of the spectrum and assisted by its PAA library, the program automatically identifies elements in the samples and calculates their concentrations and respective uncertainties. The software can also be operated in browser/server mode, which makes it possible to use it anywhere the internet is accessible. By switching the underlying nuclide library and the related formulas, the new software can be easily extended to neutron activation analysis (NAA), charged particle activation analysis (CPAA) or proton-induced X-ray emission (PIXE). Implementation of this would standardize the analysis of nuclear activation data. Results from this software were compared to standard PAA analysis with excellent agreement. With minimum input from the user, the software has proven to be fast, user-friendly and reliable.
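The identification step this abstract describes (match observed gamma peaks against a nuclide library) can be sketched as a tolerance search. The library entries below use real gamma-line energies, but the tolerance and the library's shape are placeholders, not the program's actual PAA library.

```python
# Minimal sketch of element identification from gamma spectra: match
# each observed peak energy against a nuclide library within an energy
# tolerance. Library format and tolerance are illustrative assumptions.

LIBRARY = {   # gamma energy (keV) -> (nuclide, element reported)
    511.0:  ("annihilation", None),   # not element-specific
    1368.6: ("Na-24", "Na"),
    889.3:  ("Sc-46", "Sc"),
    1173.2: ("Co-60", "Co"),
}

def identify(peaks_kev, tolerance_kev=1.0):
    """Return, sorted, the elements whose library lines match peaks."""
    found = set()
    for peak in peaks_kev:
        for line, (_nuclide, element) in LIBRARY.items():
            if element and abs(peak - line) <= tolerance_kev:
                found.add(element)
    return sorted(found)

elements = identify([511.2, 1368.4, 1173.9])
```

A production system would add energy calibration, interference resolution, and the concentration/uncertainty calculation the abstract mentions; this shows only the library-lookup core.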

  1. Accuracy of the Parallel Analysis Procedure with Polychoric Correlations

    ERIC Educational Resources Information Center

    Cho, Sun-Joo; Li, Feiming; Bandalos, Deborah

    2009-01-01

    The purpose of this study was to investigate the application of the parallel analysis (PA) method for choosing the number of factors in component analysis for situations in which data are dichotomous or ordinal. Although polychoric correlations are sometimes used as input for component analyses, the random data matrices generated for use in PA…

  2. Procedure for conducting a human-reliability analysis for nuclear power plants. Final report

    SciTech Connect

    Bell, B.J.; Swain, A.D.

    1983-05-01

    This document describes in detail a procedure to be followed in conducting a human reliability analysis as part of a probabilistic risk assessment when such an analysis is performed according to the methods described in NUREG/CR-1278, Handbook for Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications. An overview of the procedure describing the major elements of a human reliability analysis is presented along with a detailed description of each element and an example of an actual analysis. An appendix consists of some sample human reliability analysis problems for further study.

  3. RECOMMENDED OPERATING PROCEDURE NO. 45: ANALYSIS OF NITROUS OXIDE FROM COMBUSTION SOURCES

    EPA Science Inventory

The recommended operating procedure (ROP) has been prepared for use in research activities conducted by EPA's Air and Energy Engineering Research Laboratory (AEERL). The procedure applies to the measurement of nitrous oxide (N2O) in dry gas samples extracted from gas streams where...

  4. 75 FR 48553 - Supplement to Commission Procedures During Periods of Emergency Operations Requiring Activation...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF ENERGY Federal Energy Regulatory Commission 18 CFR Part 376 Supplement to Commission Procedures During Periods of Emergency Operations Requiring Activation of Continuity of Operations Plan Issued August 5, 2010....

  5. Object relations theory and activity theory: a proposed link by way of the procedural sequence model.

    PubMed

    Ryle, A

    1991-12-01

    An account of object relations theory (ORT), represented in terms of the procedural sequence model (PSM), is compared to the ideas of Vygotsky and activity theory (AT). The two models are seen to be compatible and complementary and their combination offers a satisfactory account of human psychology, appropriate for the understanding and integration of psychotherapy. PMID:1786224

  6. Operational Control Procedures for the Activated Sludge Process, Part I - Observations, Part II - Control Tests.

    ERIC Educational Resources Information Center

    West, Alfred W.

    This is the first in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Part I of this document deals with physical observations which should be performed during each routine control test. Part II…

  7. 77 FR 3843 - Agency Information Collection (Procedures, and Security for Government Financing) Activities...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... AFFAIRS Agency Information Collection (Procedures, and Security for Government Financing) Activities Under..., Security for Government Financing. OMB Control Number: 2900-0688. Type of Review: Extension of a currently.... b. VAAR 832.202-4, Security for Government Financing--10 hours. Estimated Average Burden...

  8. Rhythms of Dialogue and Referential Activity: Implicit Process across Procedural and Verbal Realms

    ERIC Educational Resources Information Center

    Ritter, Michael S.

    2009-01-01

    This work examines the relationship between implicit procedural and implicit verbal processes as they occur in natural adult conversation. Theoretical insights and empirical findings are rooted in a move towards integration of Bucci's "Referential Activity" (RA) and "Multiple Code" perspectives and Beebe and Jaffe's "Dyadic Systems" and "Rhythms…

  9. CTEPP STANDARD OPERATING PROCEDURE FOR TRANSLATING VIDEOTAPES OF CHILD ACTIVITIES (SOP-4.13)

    EPA Science Inventory

    The EPA will conduct a two-day video translation workshop to demonstrate to coders the procedures for translating the activity patterns of preschool children on videotape. The coders will be required to pass reliability tests to successfully complete the training requirements of ...

  10. Responding to Self-Harm: A Documentary Analysis of Agency Policy and Procedure

    ERIC Educational Resources Information Center

    Paul, Sally; Hill, Malcolm

    2013-01-01

    This paper reports on the findings of a documentary analysis of policies and procedures relating to self-harm from a range of organisations working with young people in the UK. It identifies the extent to which policies and/or procedures relating to self-harm are available for service providers and offers a wider understanding of the concepts of…

  11. 40 CFR 246.200-8 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Collection of Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis....

  12. 40 CFR 246.201-7 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In formulating...

  13. 40 CFR 246.200-8 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Collection of Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis....

  14. 40 CFR 246.202-6 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. This cost...

  15. 40 CFR 246.202-6 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. This cost...

  16. 40 CFR 246.201-7 - Recommended procedures: Cost analysis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Residential, Commercial and Institutional Solid Wastes (40 CFR part 243) and Thermal Processing and Land Disposal Guidelines (40 CFR parts 240 and 241) should be included in the analysis. In formulating...

  17. Terrain-analysis procedures for modeling radar backscatter

    USGS Publications Warehouse

    Schaber, Gerald G.; Pike, Richard J.; Berlin, Graydon Lennis

    1978-01-01

    The collection and analysis of detailed information on the surface of natural terrain are important aspects of radar-backscatter modeling. Radar is especially sensitive to surface-relief changes at the millimeter-to-decimeter scale for conventional K-band (~1-cm wavelength) to L-band (~25-cm wavelength) radar systems. Surface-roughness statistics that characterize these changes in detail have been generated from sets of field measurements by a comprehensive set of seven programmed calculations for radar-backscatter modeling. The seven programs are 1) formatting of the data into a form readable by the subsequent topographic analysis programs; 2) relief analysis; 3) power spectral analysis; 4) power spectrum plots; 5) slope angle between slope reversals; 6) slope angle against slope interval plots; and 7) base-length slope angle and curvature. This complete Fortran IV software package, 'Terrain Analysis', is presented here for the first time. It was originally developed a decade ago for investigations of lunar morphology and surface trafficability for the Apollo Lunar Roving Vehicle.
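Programs 2 and 5-7 above reduce measured profiles to summary roughness statistics. A minimal sketch of that kind of reduction (relief, RMS height, RMS slope angle), using an assumed synthetic profile rather than the package's Fortran IV routines:

```python
import numpy as np

def roughness_stats(z, dx):
    """Summary roughness statistics for an elevation profile z (m)
    sampled every dx (m): total relief, RMS height about the mean,
    and RMS slope angle from first differences."""
    slopes = np.diff(z) / dx
    return {
        "relief": float(z.max() - z.min()),
        "rms_height": float(np.sqrt(np.mean((z - z.mean()) ** 2))),
        "rms_slope_deg": float(np.degrees(np.arctan(np.sqrt(np.mean(slopes ** 2))))),
    }

# Assumed synthetic profile: 5-cm ripples with a 1-m wavelength
x = np.arange(0.0, 10.0, 0.01)
z = 0.05 * np.sin(2 * np.pi * x)
stats = roughness_stats(z, 0.01)
```

For the 5-cm ripples the RMS slope angle comes out near 12.5 degrees, i.e. relief variation well inside the millimeter-to-decimeter range the radar bands above are sensitive to.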

  18. Analysis of generalized Schwarz alternating procedure for domain decomposition

    SciTech Connect

    Engquist, B.; Zhao, Hongkai

    1996-12-31

    The Schwarz alternating method (SAM) is the theoretical basis for domain decomposition, which itself is a powerful tool both for parallel computation and for computing in complicated domains. The convergence rate of the classical SAM is very sensitive to the size of the overlap between subdomains, which is undesirable for most applications. We propose a generalized SAM procedure which is an extension of the modified SAM proposed by P.-L. Lions. Instead of using only Dirichlet data at the artificial boundary between subdomains, we take a combination of u and ∂u/∂n, i.e. ∂u/∂n + Λu, where Λ is some "positive" operator. Convergence of the modified SAM without overlapping in a quite general setting has been proven by P.-L. Lions using delicate energy estimates. Important questions remain for the generalized SAM. (1) What is the most essential mechanism for convergence without overlapping? (2) Given the partial differential equation, what is the best choice for the positive operator Λ? (3) In the overlapping case, is the generalized SAM superior to the classical SAM? (4) What is the convergence rate and what does it depend on? (5) Numerically, can we obtain an easy-to-implement operator Λ such that the convergence is independent of the mesh size? To analyze the convergence of the generalized SAM we focus, for simplicity, on the Poisson equation for two typical geometries in the two-subdomain case.
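In the simplest setting, a 1D Poisson problem split into two non-overlapping subdomains, the Robin exchange ∂u/∂n + Λu described above reduces to a scalar parameter p standing in for Λ. A minimal finite-difference sketch (the grid, the choice p = 2, and the ghost-point discretization are illustrative assumptions, not the authors' construction):

```python
import numpy as np

def subdomain_solve(n, h, p, g, side):
    """Solve -u'' = 1 on a half-interval with u = 0 at the outer end
    and the Robin condition du/dn + p*u = g at the interface
    (second-order ghost-point discretization)."""
    A = np.zeros((n + 1, n + 1))
    b = np.full(n + 1, 1.0)
    for i in range(1, n):
        A[i, i - 1], A[i, i], A[i, i + 1] = -1 / h**2, 2 / h**2, -1 / h**2
    if side == "left":                       # interface at node n, Dirichlet at x=0
        A[0, 0] = 1.0; b[0] = 0.0
        A[n, n - 1] = -2 / h**2
        A[n, n] = (2 + 2 * h * p) / h**2
        b[n] = 1.0 + 2 * g / h
    else:                                    # interface at node 0, Dirichlet at x=1
        A[n, n] = 1.0; b[n] = 0.0
        A[0, 1] = -2 / h**2
        A[0, 0] = (2 + 2 * h * p) / h**2
        b[0] = 1.0 + 2 * g / h
    return np.linalg.solve(A, b)

n, p = 50, 2.0                               # p plays the role of the "positive" Lambda
h = 0.5 / n
u1 = np.zeros(n + 1)
u2 = np.zeros(n + 1)
for _ in range(100):
    # Robin data for Omega_1 from Omega_2 (outward normal of Omega_1 is +x)
    du2 = (-3 * u2[0] + 4 * u2[1] - u2[2]) / (2 * h)
    u1 = subdomain_solve(n, h, p, du2 + p * u2[0], "left")
    # Robin data for Omega_2 from Omega_1 (outward normal of Omega_2 is -x)
    du1 = (3 * u1[n] - 4 * u1[n - 1] + u1[n - 2]) / (2 * h)
    u2 = subdomain_solve(n, h, p, -du1 + p * u1[n], "right")

x1 = np.linspace(0.0, 0.5, n + 1)
x2 = np.linspace(0.5, 1.0, n + 1)
exact = lambda x: 0.5 * x * (1 - x)
err = max(float(np.abs(u1 - exact(x1)).max()), float(np.abs(u2 - exact(x2)).max()))
```

With p = 2 the two-subdomain iteration for this geometry contracts very rapidly, and the computed solution matches the exact u(x) = x(1-x)/2 to solver precision despite the zero overlap.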

  19. Procedures for analysis of debris relative to Space Shuttle systems

    NASA Technical Reports Server (NTRS)

    Kim, Hae Soo; Cummings, Virginia J.

    1993-01-01

    Debris samples collected from various Space Shuttle systems have been submitted to the Microchemical Analysis Branch. This investigation was initiated to develop optimal techniques for the analysis of debris. Optical microscopy provides information about the morphology and size of crystallites, particle sizes, amorphous phases, glass phases, and poorly crystallized materials. Scanning electron microscopy with energy dispersive spectrometry is utilized for information on surface morphology and qualitative elemental content of debris. Analytical electron microscopy with wavelength dispersive spectrometry provides information on the quantitative elemental content of debris.

  20. Conducting On-Farm Animal Research: Procedures & Economic Analysis.

    ERIC Educational Resources Information Center

    Amir, Pervaiz; Knipscheer, Hendrik C.

    This book is intended to give animal scientists elementary tools to perform on-farm livestock analysis and to provide crop-oriented farming systems researchers with methods for conducting animal research. Chapter 1 describes farming systems research as a systems approach to on-farm animal research. Chapter 2 outlines some important…

  1. Analysis of Relational Communication in Dyads: New Measurement Procedures.

    ERIC Educational Resources Information Center

    Rogers, L. Edna; Farace, Richard

    Relational communication refers to the control or dominance aspects of message exchange in dyads--distinguishing it from the report or referential aspects of communication. In relational communicational analysis, messages as transactions are emphasized; major theoretical concepts which emerge are symmetry, transitoriness, and complementarity of…

  2. Radiological survey activities: uranium mill tailings remedial action project procedures manual

    SciTech Connect

    Little, C.A.; Berven, B.A.; Carter, T.E.; Espegren, M.L.; O'Donnell, F.R.; Ramos, S.J.; Retolaza, C.D.; Rood, A.S.; Santos, F.A.; Witt, D.A.

    1986-07-01

    The US Department of Energy (DOE) was assigned the responsibility for conducting remedial action at 24 sites, which are located in one eastern and nine western states. The DOE's responsibilities are being met through its Uranium Mill Tailings Remedial Action Project Office (UMTRA-PO) in Albuquerque, New Mexico. The purpose of this Procedures Manual is to provide a standardized set of procedures that document in an auditable manner the activities performed by the Radiological Survey Activities (RASA) group in the Dosimetry and Biophysical Transport Section (DABTS) of the Health and Safety Research Division (HASRD) at the Oak Ridge National Laboratory (ORNL), in its role as the Inclusion Survey Contractor (ISC). Members of the RASA group assigned to the UMTRA Project are headquartered in the ORNL/RASA office in Grand Junction, Colorado, and report to the ORNL/RASA Project Manager. The Procedures Manual ensures that the organizational, administrative, and technical activities of the RASA/UMTRA group conform properly to those of the ISC as described in the Vicinity Properties Management and Implementation Manual and the Summary Protocol. This manual also ensures that the techniques and procedures used by the RASA/UMTRA group and contractor personnel meet the requirements of applicable governmental, scientific, and industrial standards.

  3. 31 CFR 1025.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for insurance companies. 1025.520 Section 1025.520... Procedures To Deter Money Laundering and Terrorist Activity § 1025.520 Special information sharing procedures to deter money laundering and terrorist activity for insurance companies. (a) Refer to § 1010.520...

  4. 31 CFR 1028.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for operators of credit card systems. 1028.520... Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1028.520 Special information sharing procedures to deter money laundering and terrorist activity for operators of credit...

  5. 31 CFR 1025.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for insurance companies. 1025.520 Section 1025.520... Procedures To Deter Money Laundering and Terrorist Activity § 1025.520 Special information sharing procedures to deter money laundering and terrorist activity for insurance companies. (a) Refer to § 1010.520...

  6. 31 CFR 1028.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for operators of credit card systems. 1028.520... Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1028.520 Special information sharing procedures to deter money laundering and terrorist activity for operators of credit...

  7. 31 CFR 1028.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for operators of credit card systems. 1028.520... Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1028.520 Special information sharing procedures to deter money laundering and terrorist activity for operators of credit...

  8. 31 CFR 1029.520 - Special information sharing procedures to deter money laundering and terrorist activity for loan...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for loan or finance companies. 1029.520 Section 1029... Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1029.520 Special information sharing procedures to deter money laundering and terrorist activity for loan or finance companies. (a) Refer...

  9. 31 CFR 1030.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for housing government sponsored enterprises. 1030... ENTERPRISES Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1030.520 Special information sharing procedures to deter money laundering and terrorist activity for...

  10. 31 CFR 1025.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for insurance companies. 1025.520 Section 1025.520... Procedures To Deter Money Laundering and Terrorist Activity § 1025.520 Special information sharing procedures to deter money laundering and terrorist activity for insurance companies. (a) Refer to § 1010.520...

  11. 31 CFR 1023.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for brokers or dealers in securities. 1023.520... Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1023.520 Special information sharing procedures to deter money laundering and terrorist activity for brokers or dealers...

  12. 31 CFR 1023.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for brokers or dealers in securities. 1023.520... Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1023.520 Special information sharing procedures to deter money laundering and terrorist activity for brokers or dealers...

  13. 31 CFR 1029.520 - Special information sharing procedures to deter money laundering and terrorist activity for loan...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for loan or finance companies. 1029.520 Section 1029... Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1029.520 Special information sharing procedures to deter money laundering and terrorist activity for loan or finance companies. (a) Refer...

  14. A procedure for seiche analysis with Bayesian information criterion

    NASA Astrophysics Data System (ADS)

    Aichi, Masaatsu

    2016-04-01

    Seiche is a standing wave in an enclosed or semi-enclosed water body. Its amplitude changes irregularly in time with weather conditions and other factors, so extracting the seiche signal is difficult with the usual methods of time series analysis such as the fast Fourier transform (FFT). In this study, a new method of time series analysis based on a Bayesian information criterion was developed to decompose time series data from tide stations into seiche, tide, long-term trend, and residual components. The method rests on maximum marginal likelihood estimation of the tide amplitudes, seiche amplitude, and trend components. The seiche amplitude and trend components were assumed to change gradually, with second derivatives in time close to zero; these assumptions were incorporated as prior distributions, whose variances were estimated by minimizing the Akaike Bayesian information criterion (ABIC). The seiche frequency was determined by the Newton method with an initial guess from the FFT. The accuracy of the proposed method was checked by analyzing synthetic time series composed of known components, and the original components were reproduced quite well. The method was also applied to actual time series of sea level observed at a tide station and of the strain of coastal rock masses observed by a fiber Bragg grating sensor in Aburatsubo Bay, Japan. The seiche in the bay and the response of the rock masses to it were successfully extracted.
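The heart of the decomposition, a known-frequency harmonic plus a trend whose second differences are pushed toward zero, can be sketched as penalized least squares. This is only a simplified stand-in for the paper's marginal-likelihood/ABIC machinery; the frequency, the penalty weight lam, and the synthetic series are assumptions of the example:

```python
import numpy as np

def decompose(y, t, omega, lam):
    """Fit y(t) = a*cos(omega t) + b*sin(omega t) + trend, where the
    trend is a free node-wise vector whose second differences are
    penalized with weight lam (the 'second derivative close to zero'
    prior, expressed here as a roughness penalty)."""
    n = len(y)
    C, S = np.cos(omega * t), np.sin(omega * t)
    A = np.hstack([C[:, None], S[:, None], np.eye(n)])
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    P = np.zeros((n - 2, n + 2))
    P[:, 2:] = np.sqrt(lam) * D                  # penalize the trend block only
    coef, *_ = np.linalg.lstsq(np.vstack([A, P]),
                               np.concatenate([y, np.zeros(n - 2)]),
                               rcond=None)
    a, b, trend = coef[0], coef[1], coef[2:]
    return a, b, a * C + b * S, trend

t = np.linspace(0.0, 10.0, 200)
tide = 1.5 * np.cos(2 * np.pi * 0.5 * t)         # harmonic of known frequency
trend_true = 0.1 * t                             # slow sea-level trend
y = tide + trend_true
a, b, tide_hat, trend_hat = decompose(y, t, 2 * np.pi * 0.5, lam=1e4)
```

Because the second-difference penalty leaves linear trends unpenalized, the slow trend and the harmonic separate cleanly here; in the paper the penalty weights are instead estimated by minimizing ABIC rather than fixed by hand.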

  15. Blind Source Separation in CTBTO Expert Technical Analysis Procedures

    NASA Astrophysics Data System (ADS)

    Rozhkov, M.; Kitov, I.

    2014-12-01

    Blind Source Separation (BSS) is a widely used technique in many branches of data processing, but not so far in CTBT-related applications. BSS methods are attractive because they use minimal a priori information about the signals they deal with. Homomorphic deconvolution and cepstral smoothing are probably the only methods of this family in wide CTBT use. However, Expert Technical Analysis (ETA) in CTBTO may face problems that cannot be resolved with certified CTBTO applications alone and may demand specific techniques not presently used in practice. One case that has to be considered within the ETA framework is the unambiguous separation of signals with close arrival times. Two scenarios are of interest: (1) separation of two almost co-located explosions conducted within fractions of a second, and (2) extraction of explosion signals merged with wavetrains from a strong earthquake. The importance of resolving case 1 is obvious, since it is connected with correct estimation of the explosion yield. Case 2 is a well-known scenario for conducting clandestine nuclear tests. These cases can hardly be resolved with the conventional methods implemented at the International Data Centre, especially if the signals have close slowness and azimuth. The approach we use here applies one of the blind source separation methods, Independent Component Analysis, which assumes non-Gaussianity of the processes underlying the signal mixture. We have tested the technique with synthetic data and Monte Carlo modelling, and with data from the three DPRK tests and from mining explosions conducted in central Russia, recorded by the International Monitoring System of CTBTO and by the small-aperture seismic array Mikhnevo (MHVAR) operated by the Institute of Dynamics of Geospheres, Russian Academy of Sciences. The approach demonstrated good ability to separate sources conducted practically simultaneously and/or having close
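A minimal sketch of the separation step, using the classical FastICA deflation algorithm with a tanh contrast (the square-wave and sawtooth "sources" and the 2x2 mixing matrix are synthetic stand-ins for seismic waveforms, not CTBTO data):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 2000)
s1 = np.sign(np.sin(3 * t))              # square wave (non-Gaussian source)
s2 = ((t * 2) % 1) - 0.5                 # sawtooth (non-Gaussian source)
S = np.vstack([s1, s2])
A = np.array([[0.7, 0.3], [0.4, 0.6]])   # unknown mixing matrix
X = A @ S                                # two "sensors" with overlapping signals

# Center and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA, deflation scheme with tanh contrast
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Xw
        g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
        w_new = (Xw * g).mean(axis=1) - gp.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from found components
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1) < 1e-9
        w = w_new
        if converged:
            break
    W[i] = w

Y = W @ Xw                               # recovered sources (up to order/sign/scale)
match = [max(abs(np.corrcoef(y, s)[0, 1]) for y in Y) for s in S]
```

Each recovered row of Y matches one original source up to sign and scale, which is the usual BSS ambiguity; in the yield-estimation scenario that scale would have to be restored from the mixing estimate.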

  16. Radiological health risks to astronauts from space activities and medical procedures

    NASA Technical Reports Server (NTRS)

    Peterson, Leif E.; Nachtwey, D. Stuart

    1990-01-01

    Radiation protection standards for space activities differ substantially from those applied to terrestrial working situations. The levels of radiation and subsequent hazards to which space workers are exposed are quite unlike anything found on Earth. The new more highly refined system of risk management involves assessing the risk to each space worker from all sources of radiation (occupational and non-occupational) at the organ level. The risk coefficients were applied to previous space and medical exposures (diagnostic x ray and nuclear medicine procedures) in order to estimate the radiation-induced lifetime cancer incidence and mortality risk. At present, the risk from medical procedures when compared to space activities is 14 times higher for cancer incidence and 13 times higher for cancer mortality; however, this will change as the per capita dose during Space Station Freedom and interplanetary missions increases and more is known about the risks from exposure to high-LET radiation.

  17. Radiological health risks to astronauts from space activities and medical procedures

    SciTech Connect

    Paterson, L.E.; Nachtwey, D.S.

    1990-08-01

    Radiation protection standards for space activities differ substantially from those applied to terrestrial working situations. The levels of radiation and subsequent hazards to which space workers are exposed are quite unlike anything found on Earth. The new more highly refined system of risk management involves assessing the risk to each space worker from all sources of radiation (occupational and non-occupational) at the organ level. The risk coefficients were applied to previous space and medical exposures (diagnostic x ray and nuclear medicine procedures) in order to estimate the radiation-induced lifetime cancer incidence and mortality risk. At present, the risk from medical procedures when compared to space activities is 14 times higher for cancer incidence and 13 times higher for cancer mortality; however, this will change as the per capita dose during Space Station Freedom and interplanetary missions increases and more is known about the risks from exposure to high-LET radiation.

  18. Building America Performance Analysis Procedures for Existing Homes

    SciTech Connect

    Hendron, R.

    2006-05-01

    With more than 101 million residential households in the United States today, existing residential buildings represent an extremely large source of potential energy savings. Because thousands of these homes are renovated each year, Building America is investigating the best ways to make existing homes more energy-efficient, based on lessons learned from research in new homes. The Building America program is aiming for a 20%-30% reduction in energy use in existing homes by 2020. The strategy of the Building America existing homes project is to establish technology pathways that reduce energy consumption cost-effectively in American homes. The project focuses on finding ways to adapt the results from the new homes research to retrofit applications in existing homes. Research activities include a combination of computer modeling, field demonstrations, and long-term monitoring to support the development of integrated approaches to reducing energy use in existing residential buildings. Analytical tools are being developed to guide designers and builders in selecting the best approaches for each application. DOE also partners with the U.S. Environmental Protection Agency (EPA) to increase energy efficiency in existing homes through the Home Performance with ENERGY STAR program.

  19. Neutron activation analysis using thermochromatography. III. Analysis of samples of biological origin

    SciTech Connect

    Sattarov, G.; Davydov, A.V.; Khamatov, S.; Kist, A.A.

    1986-07-01

    The use of gas thermochromatography (GTC) in the radioactivation analysis of biological materials is discussed. A group separation of a number of highly volatile elements from sodium and bromine radionuclides has been achieved. The limits of detection of the elements by INAA and by neutron activation analysis using GTC were estimated. The advantages of the procedure and its analytical parameters are discussed.

  20. Studies on extraction procedure and antioxidative activity of phlorotannins from Sargassum kjellmanianum

    NASA Astrophysics Data System (ADS)

    Yan, Xiao-Jun; Li, Xian-Cui; Fan, Xiao; Zhou, Cheng-Xu

    1997-03-01

    Known only in the Phaeophyceae, phlorotannins (brown algal polyphenols) are natural products with potential uses in pharmacology. This study yielded an extraction procedure for obtaining high-purity, high-molecular-weight phlorotannins from Sargassum kjellmanianum and characterized their infrared and fluorescence spectra. The antioxidative activity of the phlorotannins, about 2.6 times as strong as that of 0.02% BHT (butylated hydroxytoluene), showed potential for preventing oil rancidity.

  1. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness.

    PubMed

    Graneheim, U H; Lundman, B

    2004-02-01

    Qualitative content analysis as described in published literature shows conflicting opinions and unsolved issues regarding meaning and use of concepts, procedures and interpretation. This paper provides an overview of important concepts (manifest and latent content, unit of analysis, meaning unit, condensation, abstraction, content area, code, category and theme) related to qualitative content analysis; illustrates the use of concepts related to the research procedure; and proposes measures to achieve trustworthiness (credibility, dependability and transferability) throughout the steps of the research procedure. Interpretation in qualitative content analysis is discussed in light of Watzlawick et al.'s [Pragmatics of Human Communication. A Study of Interactional Patterns, Pathologies and Paradoxes. W.W. Norton & Company, New York, London] theory of communication.

  2. CONSIDERATIONS FOR THE TREATMENT OF COMPUTERIZED PROCEDURES IN HUMAN RELIABILITY ANALYSIS

    SciTech Connect

    Ronald L. Boring; David I. Gertman

    2012-07-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no US nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  3. HUMAN RELIABILITY ANALYSIS FOR COMPUTERIZED PROCEDURES, PART TWO: APPLICABILITY OF CURRENT METHODS

    SciTech Connect

    Ronald L. Boring; David I. Gertman

    2012-10-01

    Computerized procedures (CPs) are an emerging technology within nuclear power plant control rooms. While CPs have been implemented internationally in advanced control rooms, to date no U.S. nuclear power plant has implemented CPs in its main control room. Yet, CPs are a reality of new plant builds and are an area of considerable interest to existing plants, which see advantages in terms of easier records management by omitting the need for updating hardcopy procedures. The overall intent of this paper is to provide a characterization of human reliability analysis (HRA) issues for computerized procedures. It is beyond the scope of this document to propose a new HRA approach or to recommend specific methods or refinements to those methods. Rather, this paper serves as a review of current HRA as it may be used for the analysis and review of computerized procedures.

  4. Procedure and analysis reports in support of the hollow clay tile wall testing program

    SciTech Connect

    Not Available

    1990-11-01

    The following are included that were generated in the research program on the structural behavior of clay tile walls: test procedure for out-of-plane full scale static testing of a hollow clay tile wall panel, ultimate strength calculations for out-of-plane air bag tests, test procedure for in-plane infilled frame static testing of a hollow clay tile wall panel, in-plane infilled frame static testing of a hollow clay tile wall panel (calculations), test procedure for in-plane masonry static testing of a hollow clay tile wall panel, numerical analysis for in-plane behavior of infilled frames, in-plane analysis of hollow clay tile infilled frames (effect of wall-frame interface and connection rigidity), test procedure for in-plane masonry biaxial compression testing of a hollow clay tile wall panel, and outbuilding test program. (DLC)

  5. Procedure and analysis reports in support of the hollow clay tile wall testing program. Revision 1

    SciTech Connect

    Not Available

    1990-11-01

    The following are included that were generated in the research program on the structural behavior of clay tile walls: test procedure for out-of-plane full scale static testing of a hollow clay tile wall panel, ultimate strength calculations for out-of-plane air bag tests, test procedure for in-plane infilled frame static testing of a hollow clay tile wall panel, in-plane infilled frame static testing of a hollow clay tile wall panel (calculations), test procedure for in-plane masonry static testing of a hollow clay tile wall panel, numerical analysis for in-plane behavior of infilled frames, in-plane analysis of hollow clay tile infilled frames (effect of wall-frame interface and connection rigidity), test procedure for in-plane masonry biaxial compression testing of a hollow clay tile wall panel, and outbuilding test program. (DLC)

  6. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    NASA Technical Reports Server (NTRS)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These sections were selected for use in aircraft design but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase execution speed and reduce storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.

  7. Frontal midline theta rhythm is correlated with cardiac autonomic activities during the performance of an attention demanding meditation procedure.

    PubMed

    Kubota, Y; Sato, W; Toichi, M; Murai, T; Okada, T; Hayashi, A; Sengoku, A

    2001-04-01

    Frontal midline theta rhythm (Fm theta), recognized as distinct theta activity on the EEG in the frontal midline area, reflects mental concentration as well as a meditative state or relief from anxiety. The attentional network in the anterior frontal lobes, including the anterior cingulate cortex, is suspected to be the generator of this activity, and a regulative function of the frontal neural network over the autonomic nervous system (ANS) during cognitive processing has been suggested. However, no studies have examined peripheral autonomic activities during Fm theta induction, and the interaction of the central and peripheral mechanisms associated with Fm theta remains unclear. In the present study, a standard procedure of Zen meditation requiring sustained attention and breath control was employed as the task to provoke Fm theta, and simultaneous EEG and ECG recordings were performed. For the subjects in whom Fm theta activities were provoked (six men, six women, 48% of all subjects), peripheral autonomic activities were evaluated during the appearance of Fm theta as well as during control periods. Successive inter-beat intervals were measured from the ECG, and a recently developed method of analysis by Toichi et al. (J. Auton. Nerv. Syst. 62 (1997) 79-84) based on heart rate variability was used to assess cardiac sympathetic and parasympathetic functions separately. Both sympathetic and parasympathetic indices were increased during the appearance of Fm theta compared with control periods. Theta-band activities in the frontal area were correlated negatively with sympathetic activation. The results suggest a close relationship between cardiac autonomic function and the activity of medial frontal neural circuitry.
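The Toichi et al. analysis assesses the two autonomic branches from the geometry of a Lorenz (Poincare) plot of successive inter-beat intervals. A rough sketch under one commonly cited formulation, which is an assumption here, with CSI = L/T and CVI = log10(L*T):

```python
import numpy as np

def lorenz_indices(rr):
    """Cardiac sympathetic index (CSI) and cardiac vagal index (CVI)
    from a Lorenz plot of successive R-R intervals (ms): L and T are
    the plot's axes along and across the line of identity; the
    CSI = L/T, CVI = log10(L*T) formulation is assumed here."""
    x, y = rr[:-1], rr[1:]
    sd_along = np.std((x + y) / np.sqrt(2))   # spread along the identity line
    sd_across = np.std((y - x) / np.sqrt(2))  # spread across it
    L, T = 4 * sd_along, 4 * sd_across
    return L / T, np.log10(L * T)

# Illustrative white-noise R-R series, not real ECG data
rng = np.random.default_rng(1)
rr = 800 + 50 * rng.standard_normal(5000)
csi, cvi = lorenz_indices(rr)
```

For an uncorrelated R-R series the plot is circular, so CSI sits near 1; directional stretching of the cloud along or across the identity line moves the two indices in opposite ways.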

  8. A finite element thermal analysis procedure for several temperature-dependent parameters

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.; Wieting, A. R.

    1978-01-01

    A finite-element thermal analysis procedure for elements with several temperature-dependent thermal parameters is presented. The procedure, based on an application of the Newton-Raphson iteration technique, is formulated by resolving element matrices into component matrices, one component for each thermal parameter. Component conductance matrices are evaluated by assuming constant thermal parameters within an element and are computed once per unit thermal parameter. Significant savings in computer time result from the unit thermal parameter concept. The solution procedure applied to a convectively cooled structure with significantly varying thermal parameters converged in four iterations.
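
    The unit-parameter concept can be illustrated on a single conducting element with k(T) = k0(1 + beta*T) (a minimal sketch with made-up values, not the paper's formulation): the geometric conductance A/L is formed once, and each Newton-Raphson cycle only rescales it by the current conductivity.

```python
def solve_nonlinear_rod(q=50.0, T1=0.0, k0=10.0, beta=0.01,
                        A=1.0, L=1.0, tol=1e-10, max_iter=20):
    """Newton-Raphson on a one-element rod with k(T) = k0*(1 + beta*T),
    fixed temperature T1 at one end and heat input q at the other.
    The unit conductance A/L is computed once; each iteration rescales
    it by the element conductivity evaluated at the mean temperature."""
    unit_conductance = A / L          # "unit thermal parameter" matrix
    T2 = T1                           # initial guess
    iterations = 0
    for _ in range(max_iter):
        iterations += 1
        Tm = 0.5 * (T1 + T2)
        k = k0 * (1.0 + beta * Tm)
        residual = k * unit_conductance * (T2 - T1) - q
        # tangent: d(residual)/dT2, including the k(T) dependence
        tangent = unit_conductance * (k + k0 * beta * 0.5 * (T2 - T1))
        T2 -= residual / tangent
        if abs(residual) < tol:
            break
    return T2, iterations
```

    As with the convectively cooled example in the abstract, the iteration converges in a handful of cycles despite the temperature-dependent conductivity.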

  9. 31 CFR 1027.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for dealers in precious metals, precious stones, or... METALS, PRECIOUS STONES, OR JEWELS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1027.520 Special information sharing procedures to deter money laundering and...

  10. 31 CFR 1027.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for dealers in precious metals, precious stones, or... METALS, PRECIOUS STONES, OR JEWELS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1027.520 Special information sharing procedures to deter money laundering and...

  11. 31 CFR 1027.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for dealers in precious metals, precious stones, or... METALS, PRECIOUS STONES, OR JEWELS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity § 1027.520 Special information sharing procedures to deter money laundering and...

  12. Multicriteria decision analysis in ranking of analytical procedures for aldrin determination in water.

    PubMed

    Tobiszewski, Marek; Orłowski, Aleksander

    2015-03-27

    The study presents the possibility of applying multi-criteria decision analysis (MCDA) when choosing analytical procedures with low environmental impact. One type of MCDA, the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE), was chosen as a versatile tool that meets the requirements of analytical chemists as decision makers. Twenty-five analytical procedures for aldrin determination in water samples (as an example) were selected as input alternatives for the MCDA analysis. Nine criteria describing the alternatives were chosen from different groups: metrological, economic and, most importantly, environmental impact. The weights for each criterion were obtained from questionnaires sent to experts, giving three different scenarios for the MCDA results. The results of the analysis show that PROMETHEE is a very promising tool for choosing an analytical procedure with respect to its greenness. The rankings for all three scenarios placed solid-phase microextraction- and liquid-phase microextraction-based procedures high, while liquid-liquid extraction-, solid-phase extraction- and stir bar sorptive extraction-based procedures were placed low in the ranking. The results show that although some of the experts do not intentionally choose green analytical chemistry procedures, their MCDA choice is in accordance with green chemistry principles. The PROMETHEE ranking results were compared with the more widely accepted green analytical chemistry tools NEMI and Eco-Scale. As PROMETHEE involves more factors than NEMI, the assessment results were only weakly correlated. In contrast, the results of the Eco-Scale assessment were well correlated, as both methodologies involve similar assessment criteria.
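
    A minimal PROMETHEE II net-flow computation can be sketched as follows (using only the "usual" 0/1 preference function; the alternatives, criteria and weights below are hypothetical, not the study's 25 procedures and 9 criteria):

```python
import numpy as np

def promethee_ii(X, weights, maximize):
    """Minimal PROMETHEE II sketch with the 'usual' preference
    function P(d) = 1 if d > 0 else 0. X is alternatives x criteria;
    `maximize` flags whether each criterion is to be maximized."""
    X = np.asarray(X, dtype=float)
    n, m = X.shape
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    pi = np.zeros((n, n))                   # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = X[a] - X[b]
            d = np.where(maximize, d, -d)   # flip sign for minimized criteria
            pi[a, b] = np.sum(w * (d > 0))
    phi_plus = pi.sum(axis=1) / (n - 1)     # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)    # entering flow
    return phi_plus - phi_minus             # net outranking flow
```

    For example, `promethee_ii([[1, 1], [2, 2], [3, 3]], [0.5, 0.5], [False, False])` ranks the first (lowest-impact) alternative highest when both criteria are minimized.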

  13. Activity based costing of diagnostic procedures at a nuclear medicine center of a tertiary care hospital

    PubMed Central

    Hada, Mahesh Singh; Chakravarty, Abhijit; Mukherjee, Partha

    2014-01-01

    Context: Escalating health care expenses challenge the health care environment to become more cost-effective. There is an urgent need for more accurate data on the costs of health care procedures. Demographic changes, a changing morbidity profile, and the rising impact of noncommunicable diseases are emphasizing the role of nuclear medicine (NM) in the future health care environment. However, the impact of the emerging disease load and stagnant resource availability needs to be balanced by a strategic drive towards optimal utilization of available health care resources. Aim: The aim was to ascertain the cost of diagnostic procedures conducted at the NM Department of a tertiary health care facility by employing the activity based costing (ABC) method. Materials and Methods: A descriptive cross-sectional study was carried out over a period of 1 year. ABC methodology was utilized to ascertain the unit cost of different diagnostic procedures, and these costs were compared with prevalent market rates to estimate the cost effectiveness of the department being studied. Results: The unit cost varied from Rs. 869 (USD 14.48) for a thyroid scan to Rs. 11230 (USD 187.16) for a meta-iodo-benzyl-guanidine (MIBG) scan, the most cost-effective investigations being the stress thallium, technetium-99m myocardial perfusion imaging (MPI) and MIBG scans. The costs obtained from this study were observed to be competitive when compared with prevalent market rates. Conclusion: ABC methodology provides precise costing inputs and should be used for all future costing studies in NM departments. PMID:25400363
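
    The ABC mechanics reduce to tracing each activity cost pool to a procedure through its cost-driver share, then dividing by annual volume; a toy sketch with invented figures (not the study's data):

```python
def abc_unit_cost(activity_costs, driver_share, volume):
    """Activity-based costing sketch: each activity cost pool is
    allocated to a procedure in proportion to its consumption of the
    activity's cost driver, then divided by annual procedure volume."""
    allocated = sum(cost * driver_share[activity]
                    for activity, cost in activity_costs.items())
    return allocated / volume

# hypothetical annual cost pools (Rs.) and one procedure's driver shares
pools = {"staff": 900000.0, "equipment": 600000.0, "consumables": 300000.0}
shares = {"staff": 0.10, "equipment": 0.15, "consumables": 0.20}
unit = abc_unit_cost(pools, shares, volume=250)   # Rs. per procedure
```

    Doubling the volume over the same cost pools halves the unit cost, which is why high-throughput investigations tend to come out as the most cost-effective.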

  14. Mechanism of failure of the Cabrol procedure: A computational fluid dynamic analysis.

    PubMed

    Poullis, M; Pullan, M

    2015-12-01

    Sudden failure of the Cabrol graft is common and frequently fatal. We utilised the technique of computational fluid dynamic (CFD) analysis to evaluate the mechanism of failure and potentially improve on the design of the Cabrol procedure. CFD analysis of the classic Cabrol procedure and a number of its variants was performed. Results from this analysis were utilised to generate further improved geometric options for the Cabrol procedure. These were also subjected to CFD analysis. All current Cabrol procedures and their variations are predicted by CFD analysis to be prone to graft thrombosis, secondary to stasis around the right coronary artery button. The right coronary artery flow characteristics were found to be the dominant reason for Cabrol graft failure. A simple modification of the Cabrol geometry is predicted to virtually eliminate any areas of blood stasis, and graft failure. Modification of the Cabrol graft geometry, guided by CFD analysis, may help reduce the incidence of graft thrombosis. A C-shaped Cabrol graft with the right coronary button anastomosed to its side along its course from the aorta to the left coronary button is predicted to have the least thrombotic tendency. Clinical correlation is needed. PMID:26508722

  15. Mechanism of failure of the Cabrol procedure: A computational fluid dynamic analysis.

    PubMed

    Poullis, M; Pullan, M

    2015-12-01

    Sudden failure of the Cabrol graft is common and frequently fatal. We utilised the technique of computational fluid dynamic (CFD) analysis to evaluate the mechanism of failure and potentially improve on the design of the Cabrol procedure. CFD analysis of the classic Cabrol procedure and a number of its variants was performed. Results from this analysis were utilised to generate further improved geometric options for the Cabrol procedure. These were also subjected to CFD analysis. All current Cabrol procedures and their variations are predicted by CFD analysis to be prone to graft thrombosis, secondary to stasis around the right coronary artery button. The right coronary artery flow characteristics were found to be the dominant reason for Cabrol graft failure. A simple modification of the Cabrol geometry is predicted to virtually eliminate any areas of blood stasis, and graft failure. Modification of the Cabrol graft geometry, guided by CFD analysis, may help reduce the incidence of graft thrombosis. A C-shaped Cabrol graft with the right coronary button anastomosed to its side along its course from the aorta to the left coronary button is predicted to have the least thrombotic tendency. Clinical correlation is needed.

  16. Neutron activation analysis using thermochromatography. II. Thermochromatographic separation of elements in the analysis of geological samples

    SciTech Connect

    Sattarov, G.; Davydov, A.V.; Khamatov, S.; Kist, A.A.

    1986-07-01

    The use of gas thermochromatography (GTC) in the radioactivation analysis of poorly soluble samples with a strongly activating substrate is discussed. The effect of sample coarseness and ore type on the rate of extraction of gold and accompanying elements was studied. The limits of detection of 22 elements were compared for neutron activation analysis with GTC and for INAA. The analytical parameters of the procedure were estimated.

  17. Roughness Analysis on Composite Materials (Microfilled, Nanofilled and Silorane) After Different Finishing and Polishing Procedures

    PubMed Central

    Pettini, Francesco; Corsalini, Massimo; Savino, Maria Grazia; Stefanachi, Gianluca; Venere, Daniela Di; Pappalettere, Carmine; Monno, Giuseppe; Boccaccio, Antonio

    2015-01-01

    The finishing and polishing of composite materials affect the restoration lifespan. The market offers a variety of finishing and polishing procedures, and the choice among them is conditioned by different factors such as the resulting surface roughness. In the present study, 156 samples were prepared from three composite materials (microfilled, nanofilled and silorane) and treated with different finishing and polishing procedures. Profilometric analyses were carried out on the samples' surfaces, and the measured roughness values were submitted to statistical analysis. A complete factorial plan was drawn up and two-way analysis of variance (ANOVA) was carried out to investigate whether the following factors affect the values of roughness: (i) material; (ii) polishing/finishing procedure. A Tukey post-hoc test was also conducted to evaluate any statistically significant differences between the material/procedure combinations. The results show that the tested materials do not affect the resulting surface quality, but roughness values depend on the finishing/polishing procedure adopted. The procedures that involve (a) finishing with medium Sof-Lex discs and (b) finishing with two tungsten carbide multi-blade milling cutters, Q series and UF series, are those that allow the lowest values of roughness to be obtained. PMID:26734113

  18. Roughness Analysis on Composite Materials (Microfilled, Nanofilled and Silorane) After Different Finishing and Polishing Procedures.

    PubMed

    Pettini, Francesco; Corsalini, Massimo; Savino, Maria Grazia; Stefanachi, Gianluca; Venere, Daniela Di; Pappalettere, Carmine; Monno, Giuseppe; Boccaccio, Antonio

    2015-01-01

    The finishing and polishing of composite materials affect the restoration lifespan. The market offers a variety of finishing and polishing procedures, and the choice among them is conditioned by different factors such as the resulting surface roughness. In the present study, 156 samples were prepared from three composite materials (microfilled, nanofilled and silorane) and treated with different finishing and polishing procedures. Profilometric analyses were carried out on the samples' surfaces, and the measured roughness values were submitted to statistical analysis. A complete factorial plan was drawn up and two-way analysis of variance (ANOVA) was carried out to investigate whether the following factors affect the values of roughness: (i) material; (ii) polishing/finishing procedure. A Tukey post-hoc test was also conducted to evaluate any statistically significant differences between the material/procedure combinations. The results show that the tested materials do not affect the resulting surface quality, but roughness values depend on the finishing/polishing procedure adopted. The procedures that involve (a) finishing with medium Sof-Lex discs and (b) finishing with two tungsten carbide multi-blade milling cutters, Q series and UF series, are those that allow the lowest values of roughness to be obtained. PMID:26734113

  19. Weather analysis and interpretation procedures developed for the US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Trenchard, M. H. (Principal Investigator)

    1980-01-01

    Procedures and techniques for providing analyses of meteorological conditions at segments during the growing season were developed for the U.S./Canada Wheat and Barley Exploratory Experiment. The main product and analysis tool is the segment-level climagraph, which depicts meteorological variables over time for the current year compared with climatological normals. The variable values for the segment are estimates derived through objective analysis of values obtained at first-order stations in the region. The procedures and products documented represent a baseline for future Foreign Commodity Production Forecasting experiments.

  20. Power analysis for multivariate and repeated measures designs: a flexible approach using the SPSS MANOVA procedure.

    PubMed

    D'Amico, E J; Neilands, T B; Zambarano, R

    2001-11-01

    Although power analysis is an important component in the planning and implementation of research designs, it is often ignored. Computer programs for performing power analysis are available, but most have limitations, particularly for complex multivariate designs. An SPSS procedure is presented that can be used for calculating power for univariate, multivariate, and repeated measures models with and without time-varying and time-constant covariates. Three examples provide a framework for calculating power via this method: an ANCOVA, a MANOVA, and a repeated measures ANOVA with two or more groups. The benefits and limitations of this procedure are discussed. PMID:11816450
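
    The SPSS MANOVA procedure obtains power analytically from the noncentral F distribution; as an independent cross-check, one-way ANOVA power can also be approximated by Monte Carlo simulation with nothing but NumPy (a sketch of a different technique, not the paper's procedure; the group means, n, and simulation counts below are arbitrary):

```python
import numpy as np

def anova_power_mc(group_means, n_per_group, sigma=1.0, alpha=0.05,
                   n_sim=4000, seed=0):
    """Monte Carlo power for a one-way ANOVA. The alpha-level critical
    value is itself estimated from null simulations, so the sketch
    needs no distribution tables."""
    rng = np.random.default_rng(seed)
    k = len(group_means)

    def f_stat(data):                # data: k x n matrix of observations
        grand = data.mean()
        ssb = n_per_group * ((data.mean(axis=1) - grand) ** 2).sum()
        ssw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
        return (ssb / (k - 1)) / (ssw / (k * (n_per_group - 1)))

    null_f = [f_stat(rng.normal(0.0, sigma, (k, n_per_group)))
              for _ in range(n_sim)]
    f_crit = np.quantile(null_f, 1 - alpha)
    means = np.asarray(group_means, dtype=float)[:, None]
    alt_f = [f_stat(rng.normal(means, sigma, (k, n_per_group)))
             for _ in range(n_sim)]
    return float(np.mean(np.asarray(alt_f) > f_crit))
```

    With group means (0, 0.5, 1.0), sigma = 1 and n = 20 per group (noncentrality lambda = 10), the estimate lands near the analytic noncentral-F power of roughly 0.8.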

  1. A facile reflux procedure to increase active surface sites for highly active and durable supported palladium@platinum bimetallic nanodendrites

    NASA Astrophysics Data System (ADS)

    Wang, Qin; Li, Yingjun; Liu, Baocang; Xu, Guangran; Zhang, Geng; Zhao, Qi; Zhang, Jun

    2015-11-01

    A series of well-dispersed bimetallic Pd@Pt nanodendrites uniformly supported on XC-72 carbon black are fabricated by using different capping agents. These capping agents are essential for the branched morphology control. However, the surfactant adsorbed on the nanodendrites surface blocks the access of reactant molecules to the active surface sites, and the catalytic activities of these bimetallic nanodendrites are significantly restricted. Herein, a facile reflux procedure to effectively remove the capping agent molecules without significantly affecting their sizes is reported for activating supported nanocatalysts. More significantly, the structure and morphology of the nanodendrites can also be retained, enhancing the numbers of active surface sites, catalytic activity and stability toward methanol and ethanol electro-oxidation reactions. The as-obtained hot water reflux-treated Pd@Pt/C catalyst manifests superior catalytic activity and stability both in terms of surface and mass specific activities, as compared to the untreated catalysts and the commercial Pt/C and Pd/C catalysts. We anticipate that this effective and facile removal method has more general applicability to highly active nanocatalysts prepared with various surfactants, and should lead to improvements in environmental protection and energy production.

  2. US--ITER activation analysis

    SciTech Connect

    Attaya, H.; Gohar, Y.; Smith, D.

    1990-09-01

    Activation analysis has been performed for the US ITER design. The radioactivity and the decay heat have been calculated, during operation and after shutdown, for the two ITER phases, the Physics Phase and the Technology Phase. The Physics Phase operates for about 24 full power days (FPDs) at a fusion power level of 1100 MW, and the Technology Phase has 860 MW fusion power and operates for about 1360 FPDs. The point-wise gamma sources have been calculated everywhere in the reactor at several times after shutdown of the two phases and are then used to calculate the biological dose throughout the reactor. Activation calculations have also been made for the ITER divertor. The results are presented for different continuous operation times and for a single pulse. The effect of pulsed operation on the radioactivity is analyzed. 6 refs., 12 figs., 1 tab.
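
    The pulse-length dependence discussed above follows from the standard buildup-and-decay relation for induced activity, sketched here with generic inputs (an illustration of the textbook formula, not ITER's inventory calculation):

```python
import math

def activity(n_atoms, sigma_cm2, flux, half_life_s, t_irr_s, t_decay_s=0.0):
    """Induced activity after irradiation and decay:
    A = N * sigma * phi * (1 - exp(-lambda*t_irr)) * exp(-lambda*t_decay),
    where N*sigma*phi is the saturation activity (Bq)."""
    lam = math.log(2) / half_life_s
    saturation = n_atoms * sigma_cm2 * flux
    return saturation * (1.0 - math.exp(-lam * t_irr_s)) * math.exp(-lam * t_decay_s)
```

    Irradiating for one half-life yields half the saturation activity, which is why a single short pulse activates far less material than 1360 full-power days of operation.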

  3. Assessing the effect of data pretreatment procedures for principal components analysis of chromatographic data.

    PubMed

    McIlroy, John W; Smith, Ruth Waddell; McGuffin, Victoria L

    2015-12-01

    Following publication of the National Academy of Sciences report "Strengthening Forensic Science in the United States: A Path Forward", there has been increasing interest in the application of multivariate statistical procedures for the evaluation of forensic evidence. However, prior to statistical analysis, variance from sources other than the sample must be minimized through application of data pretreatment procedures. This is necessary to ensure that subsequent statistical analysis of the data provides meaningful results. The purpose of this work was to evaluate the effect of pretreatment procedures on multivariate statistical analysis of chromatographic data obtained for a reference set of diesel fuels. Diesel was selected due to its chemical complexity and forensic relevance, both for fire debris and environmental forensic applications. Principal components analysis (PCA) was applied to the untreated chromatograms to assess association of replicates and discrimination among the different diesel samples. The chromatograms were then pretreated by sequentially applying the following procedures: background correction, smoothing, retention-time alignment, and normalization. The effect of each procedure on association and discrimination was evaluated based on the association of replicates in the PCA scores plot. For these data, background correction and smoothing offered minimal improvement, whereas alignment and normalization offered the greatest improvement in the association of replicates and discrimination among highly similar samples. Further, prior to pretreatment, the first principal component accounted for only non-sample sources of variance. Following pretreatment, these sources were minimized and the first principal component accounted for significant chemical differences among the diesel samples. These results highlight the need for pretreatment procedures and provide a metric to assess the effect of pretreatment on subsequent multivariate statistical analysis.
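
    The two steps the authors found most beneficial, normalization followed (after alignment) by PCA, can be sketched with plain NumPy; the tiny four-"chromatogram" matrix below is invented for illustration:

```python
import numpy as np

def pretreat_and_pca(chroms, n_components=2):
    """Sketch of total-area normalization followed by mean-centered PCA
    via SVD. (Background correction, smoothing and retention-time
    alignment are assumed to have been applied already.)"""
    X = np.asarray(chroms, dtype=float)
    X = X / X.sum(axis=1, keepdims=True)       # normalize each chromatogram
    Xc = X - X.mean(axis=0)                    # mean-center across samples
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * S[:n_components]
    explained = S**2 / np.sum(S**2)            # variance explained per PC
    return scores, explained[:n_components]
```

    Rows that differ only by an overall intensity scale (replicate injections of the same sample, in this toy setup) collapse onto identical scores after normalization, which is exactly the replicate-association effect the study measures.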

  4. Assessing the effect of data pretreatment procedures for principal components analysis of chromatographic data.

    PubMed

    McIlroy, John W; Smith, Ruth Waddell; McGuffin, Victoria L

    2015-12-01

    Following publication of the National Academy of Sciences report "Strengthening Forensic Science in the United States: A Path Forward", there has been increasing interest in the application of multivariate statistical procedures for the evaluation of forensic evidence. However, prior to statistical analysis, variance from sources other than the sample must be minimized through application of data pretreatment procedures. This is necessary to ensure that subsequent statistical analysis of the data provides meaningful results. The purpose of this work was to evaluate the effect of pretreatment procedures on multivariate statistical analysis of chromatographic data obtained for a reference set of diesel fuels. Diesel was selected due to its chemical complexity and forensic relevance, both for fire debris and environmental forensic applications. Principal components analysis (PCA) was applied to the untreated chromatograms to assess association of replicates and discrimination among the different diesel samples. The chromatograms were then pretreated by sequentially applying the following procedures: background correction, smoothing, retention-time alignment, and normalization. The effect of each procedure on association and discrimination was evaluated based on the association of replicates in the PCA scores plot. For these data, background correction and smoothing offered minimal improvement, whereas alignment and normalization offered the greatest improvement in the association of replicates and discrimination among highly similar samples. Further, prior to pretreatment, the first principal component accounted for only non-sample sources of variance. Following pretreatment, these sources were minimized and the first principal component accounted for significant chemical differences among the diesel samples. These results highlight the need for pretreatment procedures and provide a metric to assess the effect of pretreatment on subsequent multivariate statistical analysis.

  5. Prompt-gamma activation analysis

    SciTech Connect

    Lindstrom, R.M.

    1993-01-01

    A permanent, full-time instrument for prompt-gamma activation analysis is nearing completion as part of the Cold Neutron Research Facility (CNRF). The design of the analytical system has been optimized for high gamma detection efficiency and low background, particularly for hydrogen. Because of the purity of the neutron beam, shielding requirements are modest and the scatter-capture background is low. As a result of a compact sample-detector geometry, the sensitivity (counting rate per gram of analyte) is a factor of four better than that of the existing Maryland-NIST thermal-neutron instrument at the reactor. Hydrogen backgrounds of a few micrograms have already been achieved, which promises to be of value in numerous applications where quantitative nondestructive analysis of small quantities of hydrogen in materials is necessary.

  6. Measurement of activity coefficients of mixtures by head-space gas chromatography: general procedure.

    PubMed

    Luis, Patricia; Wouters, Christine; Van der Bruggen, Bart; Sandler, Stanley I

    2013-08-01

    Head-space gas chromatography (HS-GC) is an applicable method for performing vapor-liquid equilibrium measurements and determining activity coefficients. However, the reproducibility of the data may be conditioned by the experimental procedure concerning the automated pressure-balanced system. The study developed in this work shows that a minimum volume of liquid in the vial is necessary to ensure the reliability of the activity coefficients, since it may become a parameter that influences the magnitude of the peak areas: the helium introduced during the pressurization step may produce significant variations in the results when too small a volume of liquid is selected. The minimum volume required should thus be evaluated prior to obtaining experimentally the concentration in the vapor phase and the activity coefficients. In this work, the mixture acetonitrile-toluene is taken as an example, requiring a sample volume of more than 5 mL (more than about 25% of the vial volume). The vapor-liquid equilibrium and activity coefficients of mixtures at different concentrations (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 molar fraction) and four temperatures (35, 45, 55 and 70°C) have been determined. Relative standard deviations (RSD) lower than 5% were obtained, indicating the good reproducibility of the method when a sample volume larger than 5 mL is used. Finally, a general procedure for measuring activity coefficients by means of pressure-balanced head-space gas chromatography is proposed. PMID:23809803
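
    Once the vapor-phase mole fraction has been obtained from calibrated HS-GC peak areas, the activity coefficient follows from modified Raoult's law; a sketch (the Antoine constants used in the test are placeholders, not acetonitrile's or toluene's):

```python
import math

def antoine_psat(A, B, C, T_celsius):
    """Antoine equation: log10(Psat [mmHg]) = A - B / (C + T)."""
    return 10.0 ** (A - B / (C + T_celsius))

def activity_coefficient(y, P_total, x, psat):
    """Modified Raoult's law: gamma_i = y_i * P / (x_i * Psat_i).
    y is the vapor mole fraction from the head-space measurement,
    x the liquid mole fraction; pressures in any consistent unit."""
    return y * P_total / (x * psat)
```

    For an ideal mixture y follows Raoult's law exactly and gamma comes out as 1; deviations above or below 1 quantify the non-ideality measured in the study.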

  7. Procedures for minimizing the effects of high solar activity on satellite tracking and ephemeris generation

    NASA Technical Reports Server (NTRS)

    Bredvik, Gordon D.

    1990-01-01

    We are currently experiencing a period of high solar radiation combined with wide short-term fluctuations in the radiation. The short-term fluctuations, especially when combined with highly energetic solar flares, can adversely affect the mission of U.S. Space Command's Space Surveillance Center (SSC), which catalogs and tracks the satellites in orbit around the Earth. Rapidly increasing levels of solar electromagnetic and/or particle radiation (solar wind) cause atmospheric warming, which, in turn, causes the upper-most portions of the atmosphere to expand outward into the regime of low altitude satellites. The increased drag on satellites from this expansion can cause large, unmodeled, in-track displacements, thus undermining the SSC's ability to track and predict satellite position. On 13 March 1989, high solar radiation levels, combined with a high-energy solar flare, caused an exceptional amount of short-term atmospheric warming. The SSC temporarily lost track of over 1300 low altitude satellites--nearly half of the low altitude satellite population. Observational data on satellites that became lost during the days following the 13 March 'solar event' were analyzed and compared with the satellites' last element set prior to the event (referred to as a geomagnetic storm because of the large increase in magnetic flux in the upper atmosphere). The analysis led to a set of procedures for reducing the impact of future geomagnetic storms. These procedures adjust selected software limit parameters in the differential correction of element sets and in the observation association process, and must be manually initiated at the onset of a geomagnetic storm. Sensor tasking procedures must be adjusted to ensure that a minimum of four observations per day are received for low altitude satellites. These procedures have been implemented and, thus far, appear to be successful in minimizing the effect of subsequent geomagnetic storms on satellite tracking and ephemeris generation.

  8. An Analysis of Public Art on University Campuses: Policies, Procedures, and Best Practices

    ERIC Educational Resources Information Center

    Grenier, Michael Robert

    2009-01-01

    This study investigated the policies, procedures, and practices of public art programs on the campuses of research institutions with very high research activity as defined by the Carnegie Classification. From this particular type of institution, 55 of the 96 public art administrators provided their opinions, attitudes, and behaviors as part of the "Public…

  9. Classical Item Analysis Using Latent Variable Modeling: A Note on a Direct Evaluation Procedure

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2011-01-01

    A directly applicable latent variable modeling procedure for classical item analysis is outlined. The method allows one to point and interval estimate item difficulty, item correlations, and item-total correlations for composites consisting of categorical items. The approach is readily employed in empirical research and as a by-product permits…
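
    For comparison with the latent variable approach, the classical item statistics themselves take only a few lines of NumPy (the 0/1 response matrix used in the test is invented):

```python
import numpy as np

def classical_item_analysis(responses):
    """Classical statistics for 0/1 scored items: difficulty
    (proportion correct) and corrected item-total correlation
    (each item against the total score excluding that item)."""
    R = np.asarray(responses, dtype=float)   # persons x items
    difficulty = R.mean(axis=0)
    n_items = R.shape[1]
    total = R.sum(axis=1)
    item_total = np.empty(n_items)
    for j in range(n_items):
        rest = total - R[:, j]               # total score without item j
        item_total[j] = np.corrcoef(R[:, j], rest)[0, 1]
    return difficulty, item_total
```

    The latent variable route in the paper additionally yields interval estimates for these quantities; the point estimates above are the classical baseline it generalizes.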

  10. 76 FR 78015 - Revised Analysis and Mapping Procedures for Non-Accredited Levees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ... SECURITY Federal Emergency Management Agency Revised Analysis and Mapping Procedures for Non-Accredited Levees AGENCY: Federal Emergency Management Agency, DHS. ACTION: Notice. SUMMARY: The Federal Emergency... Division, Office of Chief Counsel, Federal Emergency Management Agency, Room 835, 500 C Street...

  11. A Procedure for the Computerized Analysis of Cleft Palate Speech Transcription

    ERIC Educational Resources Information Center

    Fitzsimons, David A.; Jones, David L.; Barton, Belinda; North, Kathryn N.

    2012-01-01

    The phonetic symbols used by speech-language pathologists to transcribe speech contain underlying hexadecimal values used by computers to correctly display and process transcription data. This study aimed to develop a procedure to utilise these values as the basis for subsequent computerized analysis of cleft palate speech. A computer keyboard…

  12. Isolating the Effects of Training Using Simple Regression Analysis: An Example of the Procedure.

    ERIC Educational Resources Information Center

    Waugh, C. Keith

    This paper provides a case example of simple regression analysis, a forecasting procedure used to isolate the effects of training from an identified extraneous variable. This case example focuses on results of a three-day sales training program to improve bank loan officers' knowledge, skill-level, and attitude regarding solicitation and sale of…
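
    The isolation logic can be sketched as fitting a pre-training trend line and attributing the post-training residual to training (all figures below are invented for illustration, not the bank case's data):

```python
import numpy as np

def training_effect(pre_periods, pre_values, post_period, post_actual):
    """Forecast the post-training outcome from the pre-training trend
    (simple least-squares line) and attribute the residual, i.e. the
    gap between actual and forecast, to the training intervention."""
    slope, intercept = np.polyfit(pre_periods, pre_values, 1)
    forecast = slope * post_period + intercept
    return post_actual - forecast, forecast

# hypothetical monthly sales figures before and after a workshop
effect, forecast = training_effect([1, 2, 3, 4, 5, 6],
                                   [100, 104, 108, 112, 116, 120],
                                   7, 140)
```

    The trend extrapolation is what controls for the extraneous variable: any growth the pre-training slope already predicts is excluded from the estimated training effect.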

  13. Alternative Methods for Calculating Intercoder Reliability in Content Analysis: Kappa, Weighted Kappa and Agreement Charts Procedures.

    ERIC Educational Resources Information Center

    Kang, Namjun

    If content analysis is to satisfy the requirement of objectivity, measures and procedures must be reliable. Reliability is usually measured by the proportion of agreement of all categories identically coded by different coders. For such data to be empirically meaningful, a high degree of inter-coder reliability must be demonstrated. Researchers in…
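
    The plain-kappa computation that the paper compares against takes only a few lines (the 2x2 agreement table in the test is a standard illustrative example, not data from the paper):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square inter-coder agreement table:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e the agreement expected by chance from the
    row and column marginals."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_o = sum(table[i][i] for i in range(k)) / n
    row_m = [sum(row) / n for row in table]
    col_m = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    p_e = sum(row_m[i] * col_m[i] for i in range(k))
    return (p_o - p_e) / (1 - p_e)
```

    Weighted kappa extends this by discounting near-miss disagreements through a weight matrix, which is why it suits ordinal coding categories.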

  14. Analysis of Distribution Procedures Used by States to Distribute Federal Funds for Vocational Education.

    ERIC Educational Resources Information Center

    Benson, Charles S.; And Others

    An analysis of the procedures states have adopted to distribute federal funds for vocational education under the 1976 Amendments to the Vocational Education Act shows that there is widespread confusion and variation among the states. While the Act specifies that a formula must be used for distribution of funds, the exact criteria for determining…

  15. 46 CFR 4.06-50 - Specimen analysis and follow-up procedures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... sent to the Medical Review Officer meeting the requirements of 49 CFR 40.121, as designated by the... required by 49 CFR part 40, subpart G, and submit his or her findings to the marine employer. Blood test... 46 Shipping 1 2011-10-01 2011-10-01 false Specimen analysis and follow-up procedures....

  16. 46 CFR 4.06-50 - Specimen analysis and follow-up procedures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... sent to the Medical Review Officer meeting the requirements of 49 CFR 40.121, as designated by the... required by 49 CFR part 40, subpart G, and submit his or her findings to the marine employer. Blood test... 46 Shipping 1 2013-10-01 2013-10-01 false Specimen analysis and follow-up procedures....

  17. Gel'fand-Tsetlin Procedure for the Construction of Orthogonal Bases in Hermitean Clifford Analysis

    NASA Astrophysics Data System (ADS)

    Brackx, Fred; De Schepper, Hennie; Lávička, Roman; Souček, Vladimír

    2010-09-01

    In this note, we describe the Gel'fand-Tsetlin procedure for the construction of an orthogonal basis in spaces of Hermitean monogenic polynomials of a fixed bidegree. The algorithm is based on the Cauchy-Kovalevskaya extension theorem and the Fischer decomposition in Hermitean Clifford analysis.

  18. Working with stories in nursing research: procedures used in narrative analysis.

    PubMed

    Kelly, Teresa; Howie, Linsey

    2007-04-01

    This paper describes the procedures undertaken in a qualitative study that used nurses' stories to examine the influence of Gestalt therapy training on the professional practice of psychiatric nurses. The paper places narrative research methodologies within a nursing context before introducing narrative inquiry, specifically narrative analysis methodology. Procedures used in the study are subsequently described in sufficient detail to serve as a guide for novice researchers interested in undertaking a narrative analysis study. An exemplar of a storied outcome is provided to evidence the product of the narrative analysis research process. The paper concludes with reflections on the importance of articulating the process of narrative analysis as a means of developing interest and competence in narrative research, and using nurses' stories as a means of exploring, understanding, and communicating nursing practice. PMID:17348965

  19. A novel procedure of quantitation of virus based on microflow cytometry analysis.

    PubMed

    Vazquez, Diego; López-Vázquez, Carmen; Cutrín, Juan Manuel; Dopazo, Carlos P

    2016-03-01

    The accurate and fast titration of viruses is a critical step in research laboratories and biotechnology industries. Different approaches are commonly applied which either are time consuming (like the plaque and endpoint dilution assays) or do not ensure quantification of only infective particles (like quantitative real-time PCR). In the last decade, a methodology based on the analysis of infected cells by flow cytometry and fluorescence-activated cell sorting (FACS) has been reported as a fast and reliable test for the titration of some viruses. However, this technology needs expensive equipment and expert technicians to operate it. Recently, the "lab on a chip" integrated devices have brought about the miniaturization of this equipment, turning this technology into an affordable and easy-to-use alternative to traditional flow cytometry. In the present study, we have designed a microflow cytometry (μFC) procedure for the quantitation of viruses, using the infectious pancreatic necrosis virus (IPNV) as a model. The optimization of conditions and validation of the method are reported here.
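    The back-calculation from a fraction of fluorescent cells to an infectious titer can be sketched with a Poisson correction; this is a generic illustration, not the authors' validated protocol, and the function name and figures below are assumptions:

```python
import math

def titer_from_fraction(frac_positive, n_cells, inoculum_ml, dilution=1.0):
    """Convert the fraction of fluorescent (infected) cells into an infectious titer.

    Assumes infection events are Poisson-distributed over cells, so the
    multiplicity of infection is MOI = -ln(1 - fraction positive).
    """
    if not 0.0 <= frac_positive < 1.0:
        raise ValueError("fraction positive must lie in [0, 1)")
    moi = -math.log(1.0 - frac_positive)        # infectious units per cell
    total_iu = moi * n_cells                    # infectious units in the well
    return total_iu / (inoculum_ml * dilution)  # IU per mL of undiluted stock

# 10% positive cells, 1e5 cells, 0.1 mL inoculum of a 1:100 dilution
titer = titer_from_fraction(0.10, 1e5, 0.1, dilution=0.01)
```

    The logarithmic correction matters at high infection rates, where one cell may be hit by more than one infectious particle.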

  20. The LET Procedure for Prosthetic Myocontrol: Towards Multi-DOF Control Using Single-DOF Activations.

    PubMed

    Nowak, Markus; Castellini, Claudio

    2016-01-01

    Simultaneous and proportional myocontrol of dexterous hand prostheses is to a large extent still an open problem. With the advent of commercially and clinically available multi-fingered hand prostheses there are now more independent degrees of freedom (DOFs) in prostheses than can be effectively controlled using surface electromyography (sEMG), the current standard human-machine interface for hand amputees. In particular, it is uncertain whether several DOFs can be controlled simultaneously and proportionally by exclusively calibrating the intended activation of single DOFs. The problem is currently solved by training on all required combinations. However, as the number of available DOFs grows, this approach becomes overly long and poses a high cognitive burden on the subject. In this paper we present a novel approach to overcome this problem. Multi-DOF activations are artificially modelled from single-DOF ones using a simple linear combination of sEMG signals, which are then added to the training set. This procedure, which we named LET (Linearly Enhanced Training), provides an augmented data set to any machine-learning-based intent detection system. In two experiments involving intact subjects, one offline and one online, we trained a standard machine learning approach using the full data set containing single- and multi-DOF activations as well as using the LET-augmented data set in order to evaluate the performance of the LET procedure. The results indicate that the machine trained on the latter data set obtains worse results in the offline experiment than the machine trained on the full data set. However, the online implementation enables the user to perform multi-DOF tasks with almost the same precision as single-DOF tasks without the need of explicitly training multi-DOF activations. Moreover, the parameters involved in the system are statistically uniform across subjects. PMID:27606674
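    The data-augmentation idea described in the abstract (synthesizing multi-DOF samples as linear combinations of single-DOF ones) can be sketched as follows; the function name, toy data, and unit weight are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def let_augment(X, y, pairs, weight=1.0):
    """Augment a single-DOF training set with synthetic multi-DOF samples.

    X : (n, d) sEMG feature vectors, one row per sample
    y : (n, k) DOF activation labels, one column per DOF
    pairs : (i, j) index pairs of single-DOF samples to combine linearly
    """
    X_new = [weight * (X[i] + X[j]) for i, j in pairs]
    y_new = [y[i] + y[j] for i, j in pairs]
    return np.vstack([X] + X_new), np.vstack([y] + y_new)

# two single-DOF samples (one per DOF) combined into one synthetic multi-DOF sample
X = np.eye(2)        # toy feature vectors
y = np.eye(2)        # labels: DOF 0 active, DOF 1 active
X_aug, y_aug = let_augment(X, y, [(0, 1)])
```

    Any regressor can then be trained on the augmented set in place of explicitly recorded multi-DOF trials.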

  2. An adaptive refinement procedure for transient thermal analysis using nodeless variable finite elements

    NASA Technical Reports Server (NTRS)

    Ramakrishnan, R.; Wieting, Allan R.; Thornton, Earl A.

    1990-01-01

    An adaptive mesh refinement procedure that uses nodeless variables and quadratic interpolation functions is presented for analyzing transient thermal problems. A temperature based finite element scheme with Crank-Nicolson time marching is used to obtain the thermal solution. The strategies used for mesh adaption, computing refinement indicators, and time marching are described. Examples in one and two dimensions are presented and comparisons are made with exact solutions. The effectiveness of this procedure for transient thermal analysis is reflected in good solution accuracy, reduction in number of elements used, and computational efficiency.
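    The Crank-Nicolson time marching used for the thermal solution can be illustrated on a fixed 1-D grid; this minimal sketch omits the paper's nodeless-variable elements and adaptive refinement:

```python
import numpy as np

def crank_nicolson_step(T, alpha, dx, dt):
    """One Crank-Nicolson step for the 1-D heat equation dT/dt = alpha * d2T/dx2,
    with fixed (Dirichlet) end temperatures. T is the nodal temperature vector."""
    n = len(T)
    r = alpha * dt / (2.0 * dx * dx)
    A = np.zeros((n, n))
    B = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = [-r, 1.0 + 2.0 * r, -r]   # implicit half step
        B[i, i - 1:i + 2] = [ r, 1.0 - 2.0 * r,  r]   # explicit half step
    A[0, 0] = A[-1, -1] = 1.0                          # boundary rows hold end values
    B[0, 0] = B[-1, -1] = 1.0
    return np.linalg.solve(A, B @ T)

# cooling of a bar whose ends are held at 0
T = np.array([0.0, 100.0, 100.0, 100.0, 0.0])
for _ in range(50):
    T = crank_nicolson_step(T, alpha=1.0, dx=0.1, dt=0.001)
```

    The scheme is second-order accurate in time and unconditionally stable, which is why it pairs well with mesh adaption driven by solution error rather than by stability limits.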

  3. Monte Carlo Analysis of Airport Throughput and Traffic Delays Using Self Separation Procedures

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Sturdy, James L.

    2006-01-01

    This paper presents the results of three simulation studies of throughput and delay times of arrival and departure operations performed at non-towered, non-radar airports using self-separation procedures. The studies were conducted as part of the validation process of the Small Aircraft Transportation System Higher Volume Operations (SATS HVO) concept and include an analysis of predicted airport capacity under different traffic conditions and system constraints with increasing levels of demand. Results show that SATS HVO procedures can dramatically increase capacity at non-towered, non-radar airports and that the concept offers the potential for increasing capacity of the overall air transportation system.

  4. A rotating iterative procedure (RIP) for estimating hybrid constants in multi-compartment analysis on desk computers.

    PubMed

    von Hattingberg, H M; Brockmeier, D; Kreuter, G

    1977-01-01

    The rotating iterative procedure (RIP) is a programming concept for non-linear least squares fitting of multiexponential equations to experimental data in pharmacokinetics. The method is economical in its use of program and active register capacity and can be implemented on modern electronic desk-top computers. The algorithms necessary for obtaining primary estimates of the various logarithmic components and their subsequent correction are presented, with as little higher mathematics as appeared permissible. The procedure is described in the sequence that would actually be followed in a pharmacokinetic analysis, and an example is included, as well as a skeleton version of a program written in BASIC. Some instructions for obtaining overall statistical parameters are given.
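    The "primary estimates" step mentioned in the abstract is classically done by curve peeling; the sketch below illustrates that step for a two-exponential model. The function name and the `split` cut are assumptions for illustration, not part of the published procedure, whose iterative refinement is omitted here:

```python
import numpy as np

def peel_two_exponentials(t, c, split):
    """Primary estimates for c(t) = A1*exp(-l1*t) + A2*exp(-l2*t), with l1 >> l2.

    Curve peeling: fit a line to log c over the late points (t >= split),
    where only the slow term survives, subtract that term, then fit the fast
    term on the early residual. Such crude estimates seed a nonlinear
    least-squares refinement.
    """
    late = t >= split
    b2, a2 = np.polyfit(t[late], np.log(c[late]), 1)     # slope, intercept
    A2, l2 = np.exp(a2), -b2
    resid = c - A2 * np.exp(-l2 * t)
    early = resid > 0.01 * np.max(resid)                 # keep fast-dominated points
    b1, a1 = np.polyfit(t[early], np.log(resid[early]), 1)
    return np.exp(a1), -b1, A2, l2                       # A1, l1, A2, l2

t = np.linspace(0.0, 10.0, 60)
c = 5.0 * np.exp(-2.0 * t) + 1.0 * np.exp(-0.1 * t)      # noise-free test curve
A1, l1, A2, l2 = peel_two_exponentials(t, c, split=5.0)
```

    On noisy data these peeled estimates are only starting values; the least-squares iteration then corrects them against all data points simultaneously.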

  5. Intruder Activity Analysis under Unreliable Sensor Networks

    SciTech Connect

    Tae-Sic Yoo; Humberto E. Garcia

    2007-09-01

    This paper addresses the problem of counting intruder activities within a monitored domain by a sensor network. The deployed sensors are unreliable. We characterize imperfect sensors with misdetection and false-alarm probabilities. We model intruder activities with Markov Chains. A set of Hidden Markov Models (HMM) models the imperfect sensors and intruder activities to be monitored. A novel sequential change detection/isolation algorithm is developed to detect and isolate a change from an HMM representing no intruder activity to another HMM representing some intruder activities. Procedures for estimating the entry time and the trace of intruder activities are developed. A domain monitoring example is given to illustrate the presented concepts and computational procedures.
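    The sequential change-detection idea can be sketched for a single unreliable sensor with i.i.d. alarms; this is a generic CUSUM illustration under assumed misdetection/false-alarm rates, not the paper's HMM-based algorithm:

```python
import math

def cusum_change_detect(alarms, p_false, p_detect, threshold):
    """Flag the onset of intruder activity from a stream of binary sensor alarms.

    Before the change the (unreliable) sensor alarms with probability p_false;
    after it, with probability p_detect. The CUSUM statistic accumulates the
    log-likelihood ratio and declares a change when it crosses the threshold.
    """
    s = 0.0
    for k, a in enumerate(alarms):
        p1 = p_detect if a else 1.0 - p_detect
        p0 = p_false if a else 1.0 - p_false
        s = max(0.0, s + math.log(p1 / p0))
        if s >= threshold:
            return k              # sample index at which the change is declared
    return None                   # no change detected

# a quiet period followed by an intruder raising the alarm rate
stream = [0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1]
alarm_at = cusum_change_detect(stream, p_false=0.1, p_detect=0.8, threshold=3.0)
```

    Replacing the i.i.d. likelihoods with HMM forward probabilities turns this sketch into the Markov-chain setting the paper studies.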

  6. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for futures commission merchants and introducing... To Deter Money Laundering and Terrorist Activity § 1026.520 Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing...

  7. 31 CFR 1024.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for mutual funds. 1024.520 Section 1024.520 Money... To Deter Money Laundering and Terrorist Activity § 1024.520 Special information sharing procedures to deter money laundering and terrorist activity for mutual funds. (a) Refer to § 1010.520 of this...

  8. 31 CFR 1024.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for mutual funds. 1024.520 Section 1024.520 Money... To Deter Money Laundering and Terrorist Activity § 1024.520 Special information sharing procedures to deter money laundering and terrorist activity for mutual funds. (a) Refer to § 1010.520 of this...

  9. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for futures commission merchants and introducing... To Deter Money Laundering and Terrorist Activity § 1026.520 Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing...

  10. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for futures commission merchants and introducing... To Deter Money Laundering and Terrorist Activity § 1026.520 Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing...

  11. 31 CFR 1024.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for mutual funds. 1024.520 Section 1024.520 Money... To Deter Money Laundering and Terrorist Activity § 1024.520 Special information sharing procedures to deter money laundering and terrorist activity for mutual funds. (a) Refer to § 1010.520 of this...

  12. 40 CFR 260.41 - Procedures for case-by-case regulation of hazardous waste recycling activities.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... of hazardous waste recycling activities. 260.41 Section 260.41 Protection of Environment... Rulemaking Petitions § 260.41 Procedures for case-by-case regulation of hazardous waste recycling activities... hazardous waste recycling activities described in § 261.6(a)(2)(iii) under the provisions of § 261.6 (b)...

  13. 40 CFR 260.41 - Procedures for case-by-case regulation of hazardous waste recycling activities.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... of hazardous waste recycling activities. 260.41 Section 260.41 Protection of Environment... Rulemaking Petitions § 260.41 Procedures for case-by-case regulation of hazardous waste recycling activities... hazardous waste recycling activities described in § 261.6(a)(2)(iii) under the provisions of § 261.6 (b)...

  14. 40 CFR 260.41 - Procedures for case-by-case regulation of hazardous waste recycling activities.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... of hazardous waste recycling activities. 260.41 Section 260.41 Protection of Environment... Rulemaking Petitions § 260.41 Procedures for case-by-case regulation of hazardous waste recycling activities... hazardous waste recycling activities described in § 261.6(a)(2)(iii) under the provisions of § 261.6 (b)...

  15. 40 CFR 260.41 - Procedures for case-by-case regulation of hazardous waste recycling activities.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... of hazardous waste recycling activities. 260.41 Section 260.41 Protection of Environment... Rulemaking Petitions § 260.41 Procedures for case-by-case regulation of hazardous waste recycling activities... hazardous waste recycling activities described in § 261.6(a)(2)(iii) under the provisions of § 261.6 (b)...

  16. 40 CFR 260.41 - Procedures for case-by-case regulation of hazardous waste recycling activities.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... of hazardous waste recycling activities. 260.41 Section 260.41 Protection of Environment... Rulemaking Petitions § 260.41 Procedures for case-by-case regulation of hazardous waste recycling activities... hazardous waste recycling activities described in § 261.6(a)(2)(iii) under the provisions of § 261.6 (b)...

  17. An independent refinement and integration procedure in multiregion finite element analysis

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Raju, I. S.

    1992-01-01

    An independent refinement and integration procedure is developed to couple together independently modeled (global and local) regions in a single analysis. The models can have different levels of refinement and along the interface between them the finite element nodes need not coincide with one another. In the local model all the nodes except the nodes at the interface are statically condensed and the reduced stiffness matrix is obtained. For this static condensation a modified frontal solution technique is employed. A spline interpolation function that satisfies the linear isotropic plate bending differential equation is used to relate the local model interface nodal displacements to the global model interface displacements. The proposed independent refinement and integration procedure is evaluated by applying it to two- and three-dimensional cases involving in-plane and out-of-plane deformation. The procedure yielded very accurate results for all the examples studied.
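    The static condensation step described in the abstract reduces the local model to its interface degrees of freedom; the textbook form of that reduction can be sketched as follows (the paper performs the same operation with a modified frontal solver):

```python
import numpy as np

def condense_interior(K, interface_idx):
    """Statically condense interior DOFs, keeping only interface DOFs.

    With K partitioned into interface (b) and interior (i) blocks, the exact
    reduced stiffness for a linear static problem with unloaded interior is
        K_red = K_bb - K_bi * inv(K_ii) * K_ib.
    """
    n = K.shape[0]
    b = np.asarray(interface_idx)
    i = np.setdiff1d(np.arange(n), b)
    K_ii_inv_K_ib = np.linalg.solve(K[np.ix_(i, i)], K[np.ix_(i, b)])
    return K[np.ix_(b, b)] - K[np.ix_(b, i)] @ K_ii_inv_K_ib

# two unit springs in series; condensing the middle node gives k_eff = 0.5
K = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
K_red = condense_interior(K, [0, 2])
```

    The reduced matrix can then be assembled into the global model exactly as if the local region were a single super-element.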

  18. Test and analysis procedures for updating math models of Space Shuttle payloads

    NASA Technical Reports Server (NTRS)

    Craig, Roy R., Jr.

    1991-01-01

    Over the next decade or more, the Space Shuttle will continue to be the primary transportation system for delivering payloads to Earth orbit. Although a number of payloads have already been successfully carried by the Space Shuttle in the payload bay of the Orbiter vehicle, there continues to be a need for evaluation of the procedures used for verifying and updating the math models of the payloads. The verified payload math model is combined with an Orbiter math model for the coupled-loads analysis, which is required before any payload can fly. Several test procedures were employed for obtaining data for use in verifying payload math models and for carrying out the updating of the payload math models. Research was directed at the evaluation of test/update procedures for use in the verification of Space Shuttle payload math models. The following research tasks are summarized: (1) a study of free-interface test procedures; (2) a literature survey and evaluation of model update procedures; and (3) the design and construction of a laboratory payload simulator.

  19. Procedure for statistical analysis of one-parameter discrepant experimental data.

    PubMed

    Badikov, Sergey A; Chechev, Valery P

    2012-09-01

    A new, Mandel-Paule-type procedure for statistical processing of one-parameter discrepant experimental data is described. The procedure enables one to estimate the contribution of unrecognized experimental errors to the total experimental uncertainty, as well as to include it in the analysis. A definition of discrepant experimental data for an arbitrary number of measurements is introduced as an accompanying result. In the case of negligible unrecognized experimental errors, the procedure simply reduces to the calculation of the weighted average and its internal uncertainty. The procedure was applied to the statistical analysis of half-life experimental data. Mean half-lives for 20 actinides were calculated and the results were compared to the ENSDF and DDEP evaluations. On the whole, the calculated half-lives are consistent with the ENSDF and DDEP evaluations. However, the uncertainties calculated in this work substantially exceed those of the ENSDF and DDEP evaluations for discrepant experimental data. This effect can be explained by adequately taking into account unrecognized experimental errors.
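    The classical Mandel-Paule idea (inflate every uncertainty by a common between-experiment variance until the data become consistent) can be sketched as below; this is an illustrative fixed-point iteration, not the authors' exact implementation, and the example numbers are invented:

```python
import numpy as np

def mandel_paule(x, u, tol=1e-10, max_iter=1000):
    """Mandel-Paule weighted mean with a between-experiment variance tau^2.

    x : measured values; u : their reported standard uncertainties.
    tau^2 is increased until the weighted sum of squared residuals equals
    n - 1, i.e. the data become consistent once an unrecognized error
    component is added to every uncertainty.
    """
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    n = len(x)
    tau2 = 0.0
    for _ in range(max_iter):
        w = 1.0 / (u ** 2 + tau2)
        mean = np.sum(w * x) / np.sum(w)
        f = np.sum(w * (x - mean) ** 2) - (n - 1)
        if abs(f) < tol or (f < 0 and tau2 == 0.0):
            break                         # consistent (or already at tau2 = 0)
        tau2 = max(0.0, tau2 + f / np.sum(w))
    w = 1.0 / (u ** 2 + tau2)
    return np.sum(w * x) / np.sum(w), tau2

# three discrepant measurements with equal reported uncertainties
mean, tau2 = mandel_paule([10.0, 10.2, 11.0], [0.1, 0.1, 0.1])
```

    For consistent data the loop exits immediately with tau^2 = 0 and the result reduces to the ordinary weighted average.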

  20. Dynamic mechanical thermal analysis of maxillofacial prosthetic elastomers: the effect of different disinfecting aging procedures.

    PubMed

    Eleni, Panagiota N; Krokida, Magdalini K; Polyzois, Gregory L; Gettleman, Lawrence

    2014-05-01

    In this study, dynamic mechanical thermal analysis was used to evaluate the changes that occurred in maxillofacial elastomers subjected to different disinfecting regimens. A commercial polydimethyl siloxane (PDMS) and an experimental chlorinated polyethylene (CPE) were treated with different disinfection procedures for a period that simulates 1 year of clinical service: microwave exposure (D1), hypochlorite solution (D2), neutral soap (D3), and a commercial disinfecting solution (D4). A fifth group was kept in dark storage as control. Dynamic mechanical thermal analysis tests were run at a fixed frequency (1 Hz) over a range of temperatures (-130°C to 20°C for PDMS, -60°C to 120°C for CPE). Loss modulus (G″), storage modulus (G′), and loss factor (tanδ) were recorded as a function of temperature. The obtained glass transition temperature (Tg) values were subjected to statistical analysis. Dynamic mechanical thermal analysis revealed changes in Tg values for both materials, which reflect possible changes in their chemical and physical structure after the different disinfection procedures. The PDMS and CPE samples appear to have a less dense structure, possibly because of chain scission reactions occurring during the disinfection procedures. According to the statistical analysis, Tg values changed significantly from the control samples among the different materials and disinfecting procedures. Regarding changes in Tg, microwave exposure and hypochlorite solution affected CPE significantly, whereas PDMS exhibited significant changes after treatment with a commercial antimicrobial agent. In all cases, Tg values were decreased compared with the untreated samples, which were stiffer and presented higher Tg and G′ values. PMID:24799103

  1. [Costing nuclear medicine diagnostic procedures].

    PubMed

    Markou, Pavlos

    2005-01-01

    To the Editor: Referring to a recent special report about the cost analysis of twenty-nine nuclear medicine procedures, I would like to clarify some basic aspects of determining the costs of nuclear medicine procedures with various costing methodologies. The Activity Based Costing (ABC) method is a new approach to costing imaging services that can provide the most accurate cost data, but it is difficult to apply to nuclear medicine diagnostic procedures. That is because ABC requires determining and analyzing all direct and indirect costs of each procedure, according to all of its activities. Traditional costing methods, such as estimating income and expenses per procedure or fixed and variable costs per procedure (widely used in break-even point analysis), and the ratio-of-costs-to-charges method, may easily be performed in nuclear medicine departments to evaluate the variability of, and differences between, costs and reimbursement (charges). PMID:15886748
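    The break-even analysis mentioned in the letter reduces to simple arithmetic on fixed and variable costs; the figures in this sketch are invented for illustration:

```python
def break_even_procedures(fixed_cost, reimbursement, variable_cost):
    """Annual caseload at which revenue covers total cost.

    Fixed costs (staff, camera depreciation) are spread over the caseload;
    every procedure earns its reimbursement and consumes its variable cost
    (radiopharmaceutical, consumables).
    """
    margin = reimbursement - variable_cost      # contribution per procedure
    if margin <= 0:
        raise ValueError("reimbursement does not cover the variable cost")
    return fixed_cost / margin

# e.g. 200,000 in fixed costs, 400 reimbursed and 150 variable cost per scan
n_break_even = break_even_procedures(200_000, 400, 150)
```

    Caseloads above the break-even point generate a surplus equal to the per-procedure margin times the excess volume.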

  2. Development of a thermal and structural analysis procedure for cooled radial turbines

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Deanna, Russell G.

    1988-01-01

    A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is described. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. The inviscid, quasi-three-dimensional code computes the external free-stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous three-dimensional internal flow code solving the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for the calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure, and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results are given.

  3. The environmental analysis of helicopter operations by Federal agencies: Current procedures and research needs

    NASA Technical Reports Server (NTRS)

    Smith, C. C.; Warner, D. B.; Dajani, J. S.

    1977-01-01

    The technical, economic, and environmental problems restricting commercial helicopter passenger operations are reviewed. The key considerations for effective assessment procedures are outlined and a preliminary model for the environmental analysis of helicopters is developed. It is recommended that this model, or some similar approach, be used as a common base for the development of comprehensive environmental assessment methods for each of the federal agencies concerned with helicopters. A description of the critical environmental research issues applicable to helicopters is also presented.

  4. Analysis of procedures for sampling contaminated soil using Gy's Sampling Theory and Practice.

    PubMed

    Boudreault, Jean-Philippe; Dubé, Jean-Sébastien; Sona, Mirela; Hardy, Eric

    2012-05-15

    Soil sampling is a critical step in environmental site assessment studies. The representativeness of soil samples has a direct influence on financial, liability, environmental and public health issues associated with the outcome of remediation activities. Representativeness must be quantified for assessing and designing soil sampling procedures. Gy's Sampling Theory and Practice (STP) was used to analyze the reproducibility of two soil sampling procedures, namely a procedure based on grab sampling (GSP) and an alternative procedure (ASP) developed from STP principles. Sampling reproducibility, a component of sampling representativeness, was determined by theoretical calculations and experimental measurement of relative variances in trace metals concentrations at each stage of both sampling procedures. The ASP significantly increased the reproducibility of soil sampling compared to the GSP. Larger relative variances occurred during field sampling for the ASP and during laboratory sampling for the GSP. They were due to subsample mass reduction without control over particle size. Relative theoretical and experimental variances were in agreement. However, large discrepancies were observed for all sampling stages of both procedures between absolute theoretical and experimental relative variances. In the case of Pb, theoretical calculations were closer to experimental measurements when using a calculated value of the liberation factor (l) based on mineralogical data rather than l=1. It was shown that the b-exponent had a large influence on theoretical variances. Increasing the estimate of b from 0.5 to 1 largely improved the agreement between theory and experiment. Finally, 99% of experimental relative variance was explained by sampling errors compared to analytical errors.
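    The theoretical variances discussed in the abstract come from Gy's formula for the fundamental sampling error; it can be sketched as a one-line calculator. The default factor values are Gy's customary conventions for broken material, and the example numbers are illustrative, not the study's data:

```python
def gy_fundamental_variance(d, sample_mass, lot_mass, c, l=1.0, f=0.5, g=0.25):
    """Relative variance of Gy's fundamental sampling error:

        sigma^2 = c * l * f * g * d^3 * (1/Ms - 1/Ml)

    d: top particle size (cm); masses in g; c: mineralogical composition
    factor (g/cm^3); l: liberation factor; f: shape factor; g: granulometric
    factor.
    """
    return c * l * f * g * d ** 3 * (1.0 / sample_mass - 1.0 / lot_mass)

# subsampling without particle-size control: halving d cuts the variance 8-fold
v_coarse = gy_fundamental_variance(d=0.2, sample_mass=500, lot_mass=1e6, c=100)
v_fine = gy_fundamental_variance(d=0.1, sample_mass=500, lot_mass=1e6, c=100)
```

    The cubic dependence on particle size is why the abstract flags "subsample mass reduction without control over particle size" as the dominant source of irreproducibility.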

  5. Neutron Activated Samarium-153 Microparticles for Transarterial Radioembolization of Liver Tumour with Post-Procedure Imaging Capabilities

    PubMed Central

    Hashikin, Nurul Ab. Aziz; Yeong, Chai-Hong; Abdullah, Basri Johan Jeet; Ng, Kwan-Hoong; Chung, Lip-Yong; Dahalan, Rehir; Perkins, Alan Christopher

    2015-01-01

    Introduction: Samarium-153 (153Sm) styrene divinylbenzene microparticles were developed as a surrogate for Yttrium-90 (90Y) microspheres in liver radioembolization therapy. Unlike the pure beta emitter 90Y, 153Sm possesses both therapeutic beta and diagnostic gamma radiations, making post-procedure imaging possible following therapy. Methods: The microparticles were prepared using a commercially available cation exchange resin, Amberlite IR-120 H+ (620–830 μm), which was reduced to 20–40 μm via ball-mill grinding and sieve separation. The microparticles were labelled with 152Sm via an ion exchange process with 152SmCl3, prior to neutron activation to produce radioactive 153Sm through the 152Sm(n,γ)153Sm reaction. A therapeutic activity of 3 GBq was targeted, based on the recommended activity used in 90Y-microsphere therapy. The samples were irradiated in a 1.494 × 10^12 n·cm^-2·s^-1 neutron flux for 6 h to achieve the nominal activity of 3.1 GBq·g^-1. Physicochemical characterisation of the microparticles, gamma spectrometry, and in vitro radiolabelling studies were carried out to study the performance and stability of the microparticles. Results: Fourier Transform Infrared (FTIR) spectroscopy of the Amberlite IR-120 resins showed unaffected functional groups following size reduction of the beads. However, as shown by the electron microscope, the microparticles were irregular in shape. The radioactivity achieved after 6 h of neutron activation was 3.104 ± 0.029 GBq. The specific activity per microparticle was 53.855 ± 0.503 Bq. Gamma spectrometry and elemental analysis showed no radioactive impurities in the samples. Radiolabelling efficiencies of 153Sm-Amberlite in distilled water and blood plasma over 48 h were excellent, remaining higher than 95%. Conclusion: The laboratory work revealed that the 153Sm-Amberlite microparticles demonstrated superior characteristics for potential use in hepatic radioembolization. PMID:26382059

  6. Consequences of Decontamination Procedures in Forensic Hair Analysis Using Metal-Assisted Secondary Ion Mass Spectrometry Analysis.

    PubMed

    Cuypers, Eva; Flinders, Bryn; Boone, Carolien M; Bosman, Ingrid J; Lusthof, Klaas J; Van Asten, Arian C; Tytgat, Jan; Heeren, Ron M A

    2016-03-15

    Today, hair testing is considered to be the standard method for the detection of chronic drug abuse. Nevertheless, the differentiation between systemic exposure and external contamination remains a major challenge in the forensic interpretation of hair analysis. Nowadays, it is still impossible to directly show the difference between external contamination and use-related incorporation. Although the effects of washing procedures on the distribution of (incorporated) drugs in hair remain unknown, these decontamination procedures prior to hair analysis are considered to be indispensable in order to exclude external contamination. However, insights into the effect of decontamination protocols on levels and distribution of drugs incorporated in hair are essential to draw the correct forensic conclusions from hair analysis; we studied the consequences of these procedures on the spatial distribution of cocaine in hair using imaging mass spectrometry. Additionally, using metal-assisted secondary ion mass spectrometry, we are the first to directly show the difference between cocaine-contaminated and user hair without any prior washing procedure.

  7. Image analysis and data normalization procedures are crucial for microarray analyses.

    PubMed

    Kadanga, Ali Kpatcha; Leroux, Christine; Bonnet, Muriel; Chauvet, Stéphanie; Meunier, Bruno; Cassar-Malek, Isabelle; Hocquette, Jean-François

    2008-03-17

    This study was conducted with the aim of optimizing the experimental design of array experiments. We compared two image analysis and normalization procedures prior to data analysis using two experimental designs. For this, RNA samples from Charolais steers Longissimus thoracis muscle and subcutaneous adipose tissues were labeled and hybridized to a bovine 8,400 oligochip either in triplicate or in a dye-swap design. Image analysis and normalization were processed by either GenePix/MadScan or ImaGene/GeneSight. Statistical data analysis was then run using either the SAM method or a Student's t-test using a multiple test correction run on R 2.1 software. Our results show that image analysis and normalization procedure had an impact whereas the statistical methods much less influenced the outcome of differentially expressed genes. Image analysis and data normalization are thus an important aspect of microarray experiments, having a potentially significant impact on downstream analyses such as the identification of differentially expressed genes. This study provides indications on the choice of raw data preprocessing in microarray technology.
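    As a concrete example of why the normalization step matters, quantile normalization forces every array onto a common intensity distribution; this is a generic sketch, not the GenePix/MadScan or ImaGene/GeneSight pipelines compared in the study:

```python
import numpy as np

def quantile_normalize(X):
    """Quantile-normalize a (genes x arrays) intensity matrix.

    Each array's values are replaced by the mean, across arrays, of the
    sorted values at the same rank.
    """
    order = np.argsort(X, axis=0)               # per-array ranking of genes
    ref = np.sort(X, axis=0).mean(axis=1)       # reference distribution
    X_norm = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        X_norm[order[:, j], j] = ref            # write ref back in rank order
    return X_norm

# toy matrix: 3 genes on 2 arrays with different intensity scales
X = np.array([[5.0, 4.0],
              [2.0, 1.0],
              [3.0, 6.0]])
X_norm = quantile_normalize(X)
```

    After normalization every column holds the same set of values, so downstream tests for differential expression compare ranks rather than array-specific scales.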

  9. Proposal of a procedure for the analysis of atmospheric polycyclic aromatic hydrocarbons in mosses.

    PubMed

    Concha-Graña, Estefanía; Piñeiro-Iglesias, María; Muniategui-Lorenzo, Soledad; López-Mahía, Purificación; Prada-Rodríguez, Darío

    2015-03-01

    A useful analytical procedure for the analysis of 19 polycyclic aromatic hydrocarbons (PAHs) in moss samples using microwave assisted extraction and programmed temperature vaporization-gas chromatography-tandem mass spectrometry (PTV-GC-MS/MS) determination is proposed. The state of the art in PAHs analysis in mosses was reviewed. All the steps of the analysis were optimized with regard not only to the analytical parameters but also to the cost, the total time of analysis and the labour. The method was validated for one moss species used as a moss monitor in ambient air, obtaining high recoveries (83-108%), low quantitation limits (lower than 2 ng g(-1)), good intermediate precision (relative standard deviation lower than 10%), and uncertainties lower than 20%. Finally, the method was checked for other species, demonstrating its suitability for the analysis of different moss species. For this reason the proposed method can be helpful in air biomonitoring studies.

  10. Multi-function microsystem for cells migration analysis and evaluation of photodynamic therapy procedure in coculture

    PubMed Central

    Jastrzebska (Jedrych), Elzbieta; Grabowska-Jadach, Ilona; Chudy, Michal; Dybko, Artur; Brzozka, Zbigniew

    2012-01-01

    Cell migration is an important physiological process, which is involved in cancer metastasis. Therefore, the investigation of cell migration may lead to the development of novel therapeutic approaches. In this study, we have successfully developed a microsystem for culture of two cell types (non-malignant and carcinoma) and for analysis of how cell migration depends on the distance between them. Finally, we studied quantitatively the influence of photodynamic therapy (PDT) procedures on the viability of pairs of non-malignant (MRC5 or Balb/3T3) and carcinoma (A549) cells in coculture. The proposed geometry of the microsystem allowed for separate introduction of two cell lines and analysis of cell migration as a function of the distance between the cells. We found that the length of the connecting microchannel influences cell migration and the viability of non-malignant cells after the PDT procedure. In summary, the developed microsystem can constitute a new tool for carrying out experiments, offering several functions: cell migration analysis, coculture of carcinoma and non-malignant cells, and evaluation of the PDT procedure at the various steps of cell migration. PMID:24339849

  11. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. PMID:24872353
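As a concrete illustration of the pooling step such a meta-analysis involves, here is a standard DerSimonian-Laird random-effects estimator in Python. The study effects and variances below are invented, and this is the conventional estimator rather than the small-study alternatives the paper evaluates:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study
    effect estimates (e.g., log repeatability coefficients)."""
    e = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                          # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)     # Cochran's Q heterogeneity statistic
    df = len(e) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)        # between-study variance estimate
    w_star = 1.0 / (v + tau2)            # random-effects weights
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# hypothetical test-retest studies: effect = log repeatability coefficient
effects = [0.20, 0.35, 0.15, 0.40]
variances = [0.010, 0.030, 0.015, 0.050]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

With very small or very few studies, the Q-based tau-squared estimate is unstable, which is exactly the failure mode that motivates the alternative approaches the review discusses.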

  12. Assessment of ecological risks at former landfill site using TRIAD procedure and multicriteria analysis.

    PubMed

    Sorvari, Jaana; Schultz, Eija; Haimi, Jari

    2013-02-01

    Old industrial landfills are important sources of environmental contamination in Europe, including Finland. In this study, we demonstrated the combination of the TRIAD procedure, multicriteria decision analysis (MCDA), and statistical Monte Carlo analysis for assessing the risks to terrestrial biota in a former landfill site contaminated by petroleum hydrocarbons (PHCs) and metals. First, we generated hazard quotients by dividing the concentrations of metals and PHCs in soil by the corresponding risk-based ecological benchmarks. Then we conducted ecotoxicity tests using five plant species, earthworms, and potworms, and determined the abundance and diversity of soil invertebrates from additional samples. We aggregated the results in accordance with the methods used in the TRIAD procedure, rated the assessment methods based on their performance in terms of specific criteria, and weighted the criteria using two alternative weighting techniques to produce performance scores for each method. We faced problems in using the TRIAD procedure; for example, the results from the animal counts had to be excluded from the calculation of integrated risk estimates (IREs) because our reference soil sample showed the lowest biodiversity and abundance of soil animals. In addition, hormesis hampered the use of the results from the ecotoxicity tests. The final probabilistic IREs imply significant risks at all sampling locations. Although linking MCDA with TRIAD provided a useful means to study and consider the performance of the alternative methods in predicting ecological risks, some of the uncertainties involved remained outside the quantitative analysis. PMID:22762796
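The hazard-quotient screening step described (soil concentration divided by a risk-based ecological benchmark, with HQ > 1 flagging potential risk) reduces to a one-line computation. All concentrations and benchmarks below are hypothetical placeholders, not values from the study:

```python
# Hazard quotients: measured soil concentration / risk-based ecological
# benchmark. Values are illustrative only, not from the assessed site.
benchmarks = {"Zn": 200.0, "Pb": 100.0, "PHC": 300.0}   # mg/kg, hypothetical
sample = {"Zn": 450.0, "Pb": 60.0, "PHC": 900.0}        # mg/kg, hypothetical

hq = {k: sample[k] / benchmarks[k] for k in sample}
exceeding = [k for k, v in hq.items() if v > 1.0]        # HQ > 1: potential risk
```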

  13. Meta-analysis of the technical performance of an imaging procedure: guidelines and statistical methodology.

    PubMed

    Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun

    2015-02-01

    Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.

  14. Estimating the divergence point: a novel distributional analysis procedure for determining the onset of the influence of experimental variables.

    PubMed

    Reingold, Eyal M; Sheridan, Heather

    2014-01-01

    The divergence point analysis procedure is aimed at obtaining an estimate of the onset of the influence of an experimental variable on response latencies (e.g., fixation duration, reaction time). The procedure involves generating survival curves for two conditions, and using a bootstrapping technique to estimate the timing of the earliest discernible divergence between curves. In the present paper, several key extensions for this procedure were proposed and evaluated by conducting simulations and by reanalyzing data from previous studies. Our findings indicate that the modified versions of the procedure performed substantially better than the original procedure under conditions of low experimental power. Furthermore, unlike the original procedure, the modified procedures provided divergence point estimates for individual participants and permitted testing the significance of the difference between estimates across conditions. The advantages of the modified procedures are illustrated, the theoretical and methodological implications are discussed, and promising future directions are outlined.
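The core idea (survival curves per condition, plus a bootstrap over trials to locate the earliest reliable divergence) can be sketched as follows. This is a simplified toy version on simulated latencies, not the authors' full procedure or their modified estimators; the distributions, thresholds, and grid are invented:

```python
import numpy as np

def survival(latencies, grid):
    """Fraction of trials with latency >= t, for each t in grid."""
    lat = np.asarray(latencies)
    return np.array([(lat >= t).mean() for t in grid])

def divergence_point(cond_a, cond_b, grid, n_boot=1000, seed=0):
    """Bootstrap estimate of the earliest grid point at which the two
    survival curves reliably differ (simplified sketch of the idea)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(cond_a), np.asarray(cond_b)
    diverged = np.zeros(len(grid))
    for _ in range(n_boot):
        sa = survival(rng.choice(a, a.size), grid)   # resample with replacement
        sb = survival(rng.choice(b, b.size), grid)
        diverged += sb > sa                          # condition B slower here
    frac = diverged / n_boot
    hits = np.where(frac >= 0.95)[0]   # diverges in >= 95% of resamples
    return grid[hits[0]] if hits.size else None

rng = np.random.default_rng(1)
fast = rng.gamma(9.0, 25.0, 400)               # e.g. fixation durations, ~225 ms
slow = rng.gamma(9.0, 25.0, 400) + np.where(rng.random(400) < 0.5, 80.0, 0.0)
grid = np.arange(100, 501, 10)                 # ms
dp = divergence_point(fast, slow, grid)
```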

  15. Human Factors Process Task Analysis Liquid Oxygen Pump Acceptance Test Procedure for the Advanced Technology Development Center

    NASA Technical Reports Server (NTRS)

    Diorio, Kimberly A.

    2002-01-01

    A process task analysis effort was undertaken by Dynacs Inc. commencing in June 2002 under contract from NASA YA-D6. Funding was provided through NASA's Ames Research Center (ARC), Code M/HQ, and Industrial Engineering and Safety (IES). The John F. Kennedy Space Center (KSC) Engineering Development Contract (EDC) Task Order was 5SMA768. The scope of the effort was to conduct a Human Factors Process Failure Modes and Effects Analysis (HF PFMEA) of a hazardous activity and provide recommendations to eliminate or reduce the effects of errors caused by human factors. The Liquid Oxygen (LOX) Pump Acceptance Test Procedure (ATP) was selected for this analysis. The HF PFMEA table (see appendix A) provides an analysis of six major categories evaluated for this study. These categories include Personnel Certification, Test Procedure Format, Test Procedure Safety Controls, Test Article Data, Instrumentation, and Voice Communication. For each specific requirement listed in appendix A, the following topics were addressed: Requirement, Potential Human Error, Performance-Shaping Factors, Potential Effects of the Error, Barriers and Controls, Risk Priority Numbers, and Recommended Actions. This report summarizes findings and gives recommendations as determined by the data contained in appendix A. It also includes a discussion of technology barriers and challenges to performing task analyses, as well as lessons learned. The HF PFMEA table in appendix A recommends the use of accepted and required safety criteria in order to reduce the risk of human error. The items with the highest risk priority numbers should receive the greatest amount of consideration. Implementation of the recommendations will result in a safer operation for all personnel.

  16. Ravitch versus Nuss procedure for pectus excavatum: systematic review and meta-analysis

    PubMed Central

    Kanagaratnam, Aran; Phan, Steven; Tchantchaleishvilli, Vakhtang

    2016-01-01

    Background Pectus excavatum is the most common congenital chest wall deformity. The two most common surgical techniques for its correction are the modified Ravitch technique and the minimally invasive Nuss technique. Despite both procedures being used widely, data comparing them are scarce. Methods We conducted a systematic review and meta-analysis of comparative studies to evaluate these procedures. A systematic search of the literature was performed from six electronic databases. Pooled meta-analysis was conducted using odds ratio (OR) and weighted mean difference (WMD). Results A total of 13 studies comprising 1,432 pediatric (79.3%) and adult (20.7%) patients were identified, including 912 patients undergoing the Nuss procedure compared to 520 patients undergoing the Ravitch procedure. There was no significant difference between the Nuss and Ravitch groups in pediatric patients with regard to overall complications (OR =1.16; 95% CI: 0.61–2.19; I2=56%; P=0.65), reoperations (6.1% vs. 6.4%; OR =1.00; 95% CI: 0.40–2.50; I2=0%; P=1.00), wound infections (OR =0.58; 95% CI: 0.23–1.46; I2=0%; P=0.25), hemothorax (1.6% vs. 1.3%; OR =0.74; 95% CI: 0.21–2.65; I2=12%; P=0.64), pneumothorax (3.4% vs. 1.5%; OR =1.11; 95% CI: 0.42–2.93; I2=0%; P=0.83) or pneumonia (OR =0.15; 95% CI: 0.02–1.48; I2=0%; P=0.10). Adult patients undergoing the Nuss procedure had a higher incidence of overall complications (OR =3.26; 95% CI: 1.01–10.46; I2=0%; P=0.05), though far fewer studies reported adult data. Conclusions These results suggest no difference between the Nuss and Ravitch procedures for pediatric patients, while in adults the Ravitch procedure resulted in fewer complications. PMID:27747174

  17. The Impact of Active Consent Procedures on Nonresponse and Nonresponse Error in Youth Survey Data: Evidence from a New Experiment

    PubMed Central

    Courser, Matthew W.; Shamblen, Stephen R.; Lavrakas, Paul J.; Collins, David; Ditterline, Paul

    2009-01-01

    This paper reports results from a student survey fielded using an experimental design with 14 Kentucky school districts. Seven of the fourteen districts were randomly assigned to implement the survey with active consent procedures; the other seven districts implemented the survey with passive consent procedures. We utilized our experimental design to investigate the impact of consent procedures on (a) participation rates, (b) demographic characteristics of the survey samples, and (c) estimates of ATOD use. We found that the use of active consent procedures resulted in reduced response rates, under-representation of male students and older students, and lower lifetime and past-30-day prevalence rates for most drugs and for most antisocial behaviors. Methodological implications of these findings are discussed, along with directions for further research. PMID:19506295

  18. 45 CFR 100.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 1 2010-10-01 2010-10-01 false What procedures apply to the selection of programs and activities under these regulations? 100.6 Section 100.6 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION INTERGOVERNMENTAL REVIEW OF DEPARTMENT OF HEALTH AND HUMAN SERVICES PROGRAMS AND ACTIVITIES § 100.6...

  19. 31 CFR 1021.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... to deter money laundering and terrorist activity for casinos and card clubs. 1021.520 Section 1021... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR CASINOS AND CARD CLUBS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity for Casinos and Card Clubs § 1021.520...

  20. 31 CFR 1021.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... to deter money laundering and terrorist activity for casinos and card clubs. 1021.520 Section 1021... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR CASINOS AND CARD CLUBS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity for Casinos and Card Clubs § 1021.520...

  1. 31 CFR 1021.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... to deter money laundering and terrorist activity for casinos and card clubs. 1021.520 Section 1021... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR CASINOS AND CARD CLUBS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity for Casinos and Card Clubs § 1021.520...

  2. 31 CFR 1021.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... to deter money laundering and terrorist activity for casinos and card clubs. 1021.520 Section 1021... ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES FOR CASINOS AND CARD CLUBS Special Information Sharing Procedures To Deter Money Laundering and Terrorist Activity for Casinos and Card Clubs § 1021.520...

  3. Strategy for choosing extraction procedures for NMR-based metabolomic analysis of mammalian cells.

    PubMed

    Martineau, Estelle; Tea, Illa; Loaëc, Gregory; Giraudeau, Patrick; Akoka, Serge

    2011-10-01

    Metabolomic analysis of mammalian cells can be applied across multiple fields including medicine and toxicology. It requires the acquisition of reproducible, robust, reliable, and homogeneous biological data sets. Particular attention must be paid to the efficiency and reliability of the extraction procedure. Even though a number of recent studies have dealt with optimizing a particular protocol for specific matrices and analytical techniques, there is no universal method to allow the detection of the entire cellular metabolome. Here, we present a strategy for choosing extraction procedures from adherent mammalian cells for the global NMR analysis of the metabolome. After the quenching of cells, intracellular metabolites are extracted from the cells using one of the following solvent systems of varying polarities: perchloric acid, acetonitrile/water, methanol, methanol/water, and methanol/chloroform/water. The hydrophilic metabolite profiles are analysed using (1)H nuclear magnetic resonance (NMR) spectroscopy. We propose an original geometric representation of metabolites reflecting the efficiency of extraction methods. In the case of NMR-based analysis of mammalian cells, this methodology demonstrates that a higher portion of intracellular metabolites are extracted by using methanol or methanol/chloroform/water. The preferred method is evaluated in terms of biological variability for studying metabolic changes caused by the phenotype of four different human breast cancer cell lines, showing that the selected extraction procedure is a promising tool for metabolomic and metabonomic studies of mammalian cells. The strategy proposed in this paper to compare extraction procedures is applicable to NMR-based metabolomic studies of various systems.

  4. Extending simulation modeling to activity-based costing for clinical procedures.

    PubMed

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements. PMID:10895422
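A minimal Monte Carlo sketch of simulation-driven activity-based costing: draw random durations for each activity in a patient pathway and cost them at an activity rate, so that process variability propagates into the cost estimate. The activities, rates, and distributions below are invented placeholders, not the study's Emergency Department model:

```python
import random

# Hypothetical pathway: (mean minutes, spread, staff rate $/min).
# These parameters are illustrative, not from the cited study.
ACTIVITIES = {
    "triage":         (10, 3, 1.5),
    "physician_exam": (15, 5, 4.0),
    "imaging":        (30, 10, 2.5),
}

def simulate_patient(rng):
    """One patient's activity-based cost under random activity durations."""
    cost = 0.0
    for mean, spread, rate in ACTIVITIES.values():
        duration = max(1.0, rng.gauss(mean, spread))  # truncated normal draw
        cost += duration * rate
    return cost

rng = random.Random(42)
costs = [simulate_patient(rng) for _ in range(5000)]
mean_cost = sum(costs) / len(costs)
```

Unlike a deterministic ABC spreadsheet, the simulated cost distribution also yields percentiles and tail costs, the extra information the abstract credits to the simulation approach.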

  5. Extending simulation modeling to activity-based costing for clinical procedures.

    PubMed

    Glick, N D; Blackmore, C C; Zelman, W N

    2000-04-01

    A simulation model was developed to measure costs in an Emergency Department setting for patients presenting with possible cervical-spine injury who needed radiological imaging. Simulation, a tool widely used to account for process variability but typically focused on utilization and throughput analysis, is being introduced here as a realistic means to perform an activity-based-costing (ABC) analysis, because traditional ABC methods have difficulty coping with process variation in healthcare. Though the study model has a very specific application, it can be generalized to other settings simply by changing the input parameters. In essence, simulation was found to be an accurate and viable means to conduct an ABC analysis; in fact, the output provides more complete information than could be achieved through other conventional analyses, which gives management more leverage with which to negotiate contractual reimbursements.

  6. Minimum-mass design of filamentary composite panels under combined loads: Design procedure based on a rigorous buckling analysis

    NASA Technical Reports Server (NTRS)

    Stroud, W. J.; Agranoff, N.; Anderson, M. S.

    1977-01-01

    A procedure is presented for designing uniaxially stiffened panels made of composite material and subjected to combined inplane loads. The procedure uses a rigorous buckling analysis and nonlinear mathematical programming techniques. Design studies carried out with the procedure consider hat-stiffened and corrugated panels made of graphite-epoxy material. Combined longitudinal compression and shear and combined longitudinal and transverse compression are the loadings used in the studies. The capability to tailor the buckling response of a panel is also explored. Finally, the adequacy of another, simpler, analysis-design procedure is examined.

  7. USING VIDEO TECHNOLOGY TO DISSEMINATE BEHAVIORAL PROCEDURES: A REVIEW OF FUNCTIONAL ANALYSIS: A GUIDE FOR UNDERSTANDING CHALLENGING BEHAVIOR (DVD)

    PubMed Central

    Carr, James E; Fox, Eric J

    2009-01-01

    Although applied behavior analysis has generated many highly effective behavior-change procedures, the procedures have not always been effectively disseminated. One solution to this problem is the use of video technology, which has been facilitated by the ready availability of video production equipment and software and multiple distribution methods (e.g., DVD, online streaming). We review a recent DVD that was produced to disseminate the successful experimental functional analysis procedure. The review is followed by general recommendations for disseminating behavior-analytic procedures via video technology. PMID:20514204

  8. A Procedure for Modeling Structural Component/Attachment Failure Using Transient Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Lovejoy, Andrew E.; Jegley, Dawn C. (Technical Monitor)

    2007-01-01

    Structures often comprise smaller substructures that are connected to each other or attached to the ground by a set of finite connections. Under static loading one or more of these connections may exceed allowable limits and be deemed to fail. Of particular interest is the structural response when a connection is severed (failed) while the structure is under static load. A transient failure analysis procedure was developed by which it is possible to examine the dynamic effects that result from introducing a discrete failure while a structure is under static load. The failure is introduced by replacing a connection load history by a time-dependent load set that removes the connection load at the time of failure. The subsequent transient response is examined to determine the importance of the dynamic effects by comparing the structural response with the appropriate allowables. Additionally, this procedure utilizes a standard finite element transient analysis that is readily available in most commercial software, permitting the study of dynamic failures without the need to purchase software specifically for this purpose. The procedure is developed and explained, demonstrated on a simple cantilever box example, and finally demonstrated on a real-world example, the American Airlines Flight 587 (AA587) vertical tail plane (VTP).

  9. Development of SRC-I product analysis. Volume 3. Documentation of procedures

    SciTech Connect

    Schweighardt, F.K.; Kingsley, I.S.; Cooper, F.E.; Kamzelski, A.Z.; Parees, D.M.

    1983-09-01

    This section documents the BASIC computer program written to simulate Wilsonville's GC-simulated distillation (GCSD) results at APCI-CRSD Trexlertown. The GC conditions used at APCI for the Wilsonville GCSD analysis of coal-derived liquid samples were described in the SRC-I Quarterly Technical Report, April-June 1981. The approach used to simulate the Wilsonville GCSD results is also from an SRC-I Quarterly Technical Report and is reproduced in Appendix VII-A. The BASIC computer program is described in the attached Appendix VII-B. Analysis of gases produced during coal liquefaction generates key information needed to determine product yields for material balance and process control. Gas samples from the coal process development unit (CPDU) and tubing bombs are the primary samples analyzed. A Carle gas chromatographic system was used to analyze coal liquefaction gas samples. A BASIC computer program was written to convert the gas chromatographic peak areas into mole percent results. ICRC has employed several analytical workup procedures to determine the amount of distillate, oils, asphaltenes, preasphaltenes, and residue in SRC-I process streams. The ASE procedure was developed using Conoco's liquid column fractionation (LC/F) method as a model. In developing the ASE procedure, ICRC was able to eliminate distillation, and therefore quantify the oils fraction in one extraction step. ASE results were shown to be reproducible within ±2 wt %, and to yield acceptable material balances. Finally, the ASE method proved to be the least affected by sample composition.

  10. Instability analysis procedure for 3-level multi-bearing rotor-foundation systems

    NASA Technical Reports Server (NTRS)

    Zhou, S.; Rieger, N. F.

    1985-01-01

    A procedure for the instability analysis of a three-level multispan rotor system is described. This procedure is based on a distributed mass elastic representation of the rotor system in several eight-coefficient bearings. Each bearing is supported from an elastic foundation on damped, elastic pedestals. The foundation is represented as a general distributed mass elastic structure on discrete supports, which may have different stiffness and damping properties in the horizontal and vertical directions. This system model is suited to studies of instability threshold conditions for multirotor turbomachines on either massive or flexible foundations. The instability condition is found by obtaining the eigenvalues of the system determinant, which is obtained by the transfer matrix method from the three-level system model. The stability determinant is solved for the lowest rotational speed at which the system damping becomes zero in the complex eigenvalue, and for the whirl frequency corresponding to the natural frequency of the unstable mode. An efficient algorithm for achieving this is described. Application of this procedure to a rigid rotor in two damped-elastic bearings and flexible supports is described. A second example discusses a flexible rotor with four damped-elastic bearings. The third case compares the stability of a six-bearing 300 Mw turbine generator unit, using two different bearing types. These applications validate the computer program and various aspects of the analysis.
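The threshold search the procedure performs (find the lowest rotational speed at which system damping vanishes, i.e. an eigenvalue's real part reaches zero, and read the whirl frequency from its imaginary part) can be illustrated on a toy two-degree-of-freedom rotor with speed-dependent cross-coupled bearing stiffness. The model and all numbers are illustrative assumptions, not the paper's transfer-matrix formulation:

```python
import numpy as np

def instability_threshold(system_matrix, speeds):
    """Scan rotor speeds for the first at which some eigenvalue's real
    part reaches zero; also return the whirl frequency at that speed."""
    for omega in speeds:
        eig = np.linalg.eigvals(system_matrix(omega))
        i = int(np.argmax(eig.real))
        if eig.real[i] >= 0.0:
            return omega, abs(eig[i].imag)   # threshold speed, whirl frequency
    return None, None

def rotor(omega, m=1.0, k=1.0e4, c=5.0, cross=0.08):
    """Toy Jeffcott-style rotor: cross-coupled bearing stiffness grows
    with speed and eventually overcomes the direct damping."""
    kxy = cross * omega * c                  # destabilizing cross-coupling
    K = np.array([[k, kxy], [-kxy, k]])
    C = np.array([[c, 0.0], [0.0, c]])
    # first-order state matrix [[0, I], [-M^-1 K, -M^-1 C]] with M = m*I
    return np.block([[np.zeros((2, 2)), np.eye(2)],
                     [-K / m, -C / m]])

speeds = np.linspace(10.0, 2000.0, 200)
threshold, whirl = instability_threshold(rotor, speeds)
```

For this toy model the analytical onset is where the cross-coupling equals c times the natural frequency sqrt(k/m), and the whirl frequency at onset is that natural frequency, so the scan result can be checked by hand.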

  11. Activation Likelihood Estimation meta-analysis revisited

    PubMed Central

    Eickhoff, Simon B.; Bzdok, Danilo; Laird, Angela R.; Kurth, Florian; Fox, Peter T.

    2011-01-01

    A widely used technique for coordinate-based meta-analysis of neuroimaging data is activation likelihood estimation (ALE), which determines the convergence of foci reported from different experiments. ALE analysis involves modelling these foci as probability distributions whose width is based on empirical estimates of the spatial uncertainty due to the between-subject and between-template variability of neuroimaging data. ALE results are assessed against a null-distribution of random spatial association between experiments, resulting in random-effects inference. In the present revision of this algorithm, we address two remaining drawbacks of the previous algorithm. First, the assessment of spatial association between experiments was based on a highly time-consuming permutation test, which nevertheless entailed the danger of underestimating the right tail of the null-distribution. In this report, we outline how this previous approach may be replaced by a faster and more precise analytical method. Second, the previously applied correction procedure, i.e. controlling the false discovery rate (FDR), is supplemented by new approaches for correcting the family-wise error rate and the cluster-level significance. The different alternatives for drawing inference on meta-analytic results are evaluated on an exemplary dataset on face perception as well as discussed with respect to their methodological limitations and advantages. In summary, we thus replaced the previous permutation algorithm with a faster and more rigorous analytical solution for the null-distribution and comprehensively address the issue of multiple-comparison corrections. The proposed revision of the ALE-algorithm should provide an improved tool for conducting coordinate-based meta-analyses on functional imaging data. PMID:21963913
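A toy sketch of the ALE computation itself: foci are smoothed into per-experiment modelled-activation maps, which are then combined voxelwise as a probabilistic union. Grid size, smoothing width, and foci below are invented, and the null-distribution and multiple-comparison machinery that the paper revises is omitted entirely:

```python
import numpy as np

def ale_map(experiments, shape=(20, 20, 20), sigma=2.0):
    """Toy ALE: smooth each experiment's foci into a modelled-activation
    (MA) map, then combine maps voxelwise as 1 - prod(1 - MA_i)."""
    grid = np.indices(shape).reshape(3, -1).T    # (n_voxels, 3) coordinates
    union = np.zeros(grid.shape[0])
    for foci in experiments:
        ma = np.zeros(grid.shape[0])
        for focus in foci:
            d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
            # Gaussian probability of activation around this focus,
            # taking the max over foci within one experiment
            ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
        union = 1.0 - (1.0 - union) * (1.0 - ma)  # probabilistic union
    return union.reshape(shape)

# two hypothetical experiments reporting nearby foci around voxel (10, 10, 10)
experiments = [[(10, 10, 10), (3, 3, 3)], [(10, 11, 10)]]
ale = ale_map(experiments)
```

Voxels where experiments converge (here around (10, 10, 10)) receive higher ALE values than voxels covered by a single experiment's focus, which is the convergence signal the inference step then tests.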

  12. Development of a thermal and structural analysis procedure for cooled radial turbines

    NASA Astrophysics Data System (ADS)

    Kumar, Ganesh N.; Deanna, Russell G.

    1988-06-01

    A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code for the momentum and energy equation. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.

  13. Development of a thermal and structural analysis procedure for cooled radial turbines

    NASA Technical Reports Server (NTRS)

    Kumar, Ganesh N.; Deanna, Russell G.

    1988-01-01

    A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code for the momentum and energy equation. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.

  14. Local strain redistribution corrections for a simplified inelastic analysis procedure based on an elastic finite-element analysis

    NASA Technical Reports Server (NTRS)

    Kaufman, A.; Hwang, S. Y.

    1985-01-01

    Strain redistribution corrections were developed for a simplified inelastic analysis procedure to economically calculate material cyclic response at the critical location of a structure for life prediction purposes. The method was based on the assumption that the plastic region in the structure is local and that the total strain history required for input can be defined from elastic finite-element analyses. Cyclic stress-strain behavior was represented by a bilinear kinematic hardening model. The simplified procedure predicts stress-strain response with reasonable accuracy for thermally cycled problems but needs improvement for mechanically load-cycled problems. Neuber-type corrections were derived and incorporated in the simplified procedure to account for local total strain redistribution under cyclic mechanical loading. The corrected simplified method was used on a mechanically load-cycled benchmark notched-plate problem. The predicted material response agrees well with the nonlinear finite-element solutions for the problem. The simplified analysis computer program required only 0.3% of the central processor unit time needed for a nonlinear finite-element analysis.
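
    A Neuber-type correction of the kind referenced here equates the local elastic-plastic stress-strain product at the notch with the value from the elastic solution. Below is a minimal sketch for a bilinear kinematic-hardening curve as mentioned in the abstract; the material constants are invented for illustration.

```python
# Neuber's rule: sigma * eps = sigma_elastic**2 / E, solved against a
# bilinear stress-strain curve (illustrative material constants, in MPa).
import math

E, H, sy = 200e3, 20e3, 400.0    # Young's modulus, hardening modulus, yield

def bilinear_strain(sigma):
    """Monotonic strain from the bilinear kinematic-hardening curve."""
    if sigma <= sy:
        return sigma / E
    return sy / E + (sigma - sy) / H

def neuber_correct(sigma_elastic):
    """Local stress/strain satisfying sigma * eps = sigma_e**2 / E."""
    target = sigma_elastic**2 / E
    if sigma_elastic <= sy:                  # stays elastic: no redistribution
        return sigma_elastic, sigma_elastic / E
    # substitute the bilinear curve: sigma^2/H + sigma*sy*(1/E - 1/H) = target
    a, b, c = 1.0 / H, sy * (1.0 / E - 1.0 / H), -target
    sigma = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
    return sigma, bilinear_strain(sigma)

s, e = neuber_correct(600.0)     # elastic solution overshoots yield
print(s < 600.0 and e > 600.0 / E)   # local stress drops, local strain grows
```

    The quadratic arises only above yield; below yield the elastic solution is returned unchanged.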

  15. Total body nitrogen analysis. [neutron activation analysis

    NASA Technical Reports Server (NTRS)

    Palmer, H. E.

    1975-01-01

    Studies of two potential in vivo neutron activation methods for determining total and partial body nitrogen in animals and humans are described. A method using the CO-11 in the expired air as a measure of nitrogen content was found to be adequate for small animals such as rats, but inadequate for human measurements due to a slow excretion rate. Studies on the method of measuring the induced N-13 in the body show that with further development, this method should be adequate for measuring muscle mass changes occurring in animals or humans during space flight.

  16. Processes and Procedures for Application of CFD to Nuclear Reactor Safety Analysis

    SciTech Connect

    Richard W. Johnson; Richard R. Schultz; Patrick J. Roache; Ismail B. Celik; William D. Pointer; Yassin A. Hassan

    2006-09-01

    Traditionally, nuclear reactor safety analysis has been performed using systems analysis codes such as RELAP5, which was developed at the INL. However, goals established by the Generation IV program, especially the desire to increase efficiency, have led to higher operating temperatures for the reactors. This increase pushes reactor materials toward their upper temperature limits for structural integrity. Because there will be some finite variation of the power density in the reactor core, there will be a potential for local hot spots to occur in the reactor vessel. Hence, it has become apparent that detailed analysis will be required to ensure that local ‘hot spots’ do not exceed safety limits. It is generally accepted that computational fluid dynamics (CFD) codes are intrinsically capable of simulating fluid dynamics and heat transport locally because they are based on ‘first principles.’ Indeed, CFD analysis has reached a fairly mature level of development, including the commercial level. However, CFD experts are aware that even though commercial codes are capable of simulating local fluid and thermal physics, great care must be taken in their application to avoid errors caused by such things as inappropriate grid meshing, low-order discretization schemes, lack of iterative convergence and inaccurate time-stepping. Just as important is the choice of a turbulence model for turbulent flow simulation. Turbulence models represent the effects of turbulent transport of mass, momentum and energy, but are not necessarily applicable for wide ranges of flow types. Therefore, there is a well-recognized need to establish practices and procedures for the proper application of CFD to simulate flow physics accurately and establish the level of uncertainty of such computations. The present document represents contributions of CFD experts on what the basic practices, procedures and guidelines should be to aid CFD analysts to obtain accurate
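
    The discretization-error concerns raised above are commonly quantified with Richardson-extrapolation-based measures such as Roache's Grid Convergence Index (GCI). A minimal sketch on three systematically refined grids follows; the solution values and refinement ratio are invented for illustration.

```python
# Observed order of accuracy and fine-grid GCI from solutions on three grids
# (f1 = finest). Fs = 1.25 is the usual safety factor for three-grid studies.
import math

def observed_order(f1, f2, f3, r):
    """Observed order of accuracy from fine (f1) to coarse (f3) solutions."""
    return math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)

def gci_fine(f1, f2, r, p, Fs=1.25):
    """Fine-grid GCI: a conservative relative-error band."""
    return Fs * abs((f2 - f1) / f1) / (r**p - 1)

f1, f2, f3, r = 0.9713, 0.9704, 0.9668, 2.0   # e.g. a drag coefficient
p = observed_order(f1, f2, f3, r)
print(round(p, 2), round(100 * gci_fine(f1, f2, r, p), 3))  # order, GCI in %
```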

  17. Energy Consumption Analysis Procedure for Robotic Applications in different task motion

    NASA Astrophysics Data System (ADS)

    Ahmed, Iman; Aris, Ishak b.; Hamiruce Marhaban, Mohammad; Juraiza Ishak, Asnor

    2015-11-01

    This work proposes an energy analysis method for a humanoid robot, spanning simple to complex motion tasks in the energy chain. The research developed a procedure suitable for the analysis, saving and modelling of energy consumption, applicable not only to this type of robot but to most robots that use electrical power as an energy source. The method was validated by accurate integration, in Matlab, of the power consumption curve to calculate the energy of individual and multiple servo motors. This study can therefore be considered a procedure for energy analysis that uses laboratory instruments to measure the energy parameters. We performed various task motions at different angular speeds to find the speed limits in terms of robot stability and control strategy. A battery capacity investigation covering several battery types was carried out to extract the power modelling equation and the energy density parameter for each type. Matlab software was built to implement the algorithm and to evaluate the experimentally measured energy, represented by the area under the power curves. This provides a robust estimate of the energy required in different task motions, to be considered in energy saving (i.e., motion planning and real-time scheduling).
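
    The integration step at the heart of this procedure, energy as the area under the measured power curve, can be sketched with the trapezoidal rule; the sampled power trace below is invented, standing in for servo measurements.

```python
# Energy = area under P(t), via the composite trapezoidal rule.
import numpy as np

t = np.linspace(0.0, 2.0, 201)                 # seconds
power = 5.0 + 3.0 * np.sin(np.pi * t)          # watts, invented servo trace

# sum of (average power per interval) * (time step)
energy_joules = float((((power[1:] + power[:-1]) / 2.0) * np.diff(t)).sum())
print(round(energy_joules, 2))   # the sine averages out: ~10 J from the 5 W base
```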

  18. A procedure for the supercritical fluid extraction of coal samples, with subsequent analysis of extracted hydrocarbons

    SciTech Connect

    Jonathan J. Kolak

    2006-07-01

    This report provides a detailed, step-by-step procedure for conducting extractions with supercritical carbon dioxide (CO{sub 2}) using the ISCO SFX220 supercritical fluid extraction system. Protocols for the subsequent separation and analysis of extracted hydrocarbons are also included in this report. These procedures were developed under the auspices of the project 'Assessment of geologic reservoirs for carbon dioxide sequestration', to investigate possible environmental ramifications associated with CO{sub 2} storage (sequestration) in geologic reservoirs, such as deep coal beds. Supercritical CO{sub 2} has been used previously to extract contaminants from geologic matrices. Pressure-temperature conditions within deep coal beds may render CO{sub 2} supercritical. In this context, the ability of supercritical CO{sub 2} to extract contaminants from geologic materials may serve to mobilize noxious compounds from coal, possibly complicating storage efforts. There currently exists little information on the physicochemical interactions between supercritical CO{sub 2} and coal in this setting. The procedures described were developed to improve the understanding of these interactions and provide insight into the fate of CO{sub 2} and contaminants during simulated CO{sub 2} injections. 4 figs., 3 tabs., 1 app.

  19. A Procedure for the supercritical fluid extraction of coal samples, with subsequent analysis of extracted hydrocarbons

    USGS Publications Warehouse

    Kolak, Jonathan J.

    2006-01-01

    Introduction: This report provides a detailed, step-by-step procedure for conducting extractions with supercritical carbon dioxide (CO2) using the ISCO SFX220 supercritical fluid extraction system. Protocols for the subsequent separation and analysis of extracted hydrocarbons are also included in this report. These procedures were developed under the auspices of the project 'Assessment of Geologic Reservoirs for Carbon Dioxide Sequestration' (see http://pubs.usgs.gov/fs/fs026-03/fs026-03.pdf) to investigate possible environmental ramifications associated with CO2 storage (sequestration) in geologic reservoirs, such as deep (~1 km below land surface) coal beds. Supercritical CO2 has been used previously to extract contaminants from geologic matrices. Pressure-temperature conditions within deep coal beds may render CO2 supercritical. In this context, the ability of supercritical CO2 to extract contaminants from geologic materials may serve to mobilize noxious compounds from coal, possibly complicating storage efforts. There currently exists little information on the physicochemical interactions between supercritical CO2 and coal in this setting. The procedures described herein were developed to improve the understanding of these interactions and provide insight into the fate of CO2 and contaminants during simulated CO2 injections.
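
    The report's premise that deep coal beds may hold CO2 above its critical point (about 31.0 °C and 7.38 MPa) can be checked with back-of-the-envelope arithmetic, assuming hydrostatic pressure and a typical geothermal gradient; both gradients are assumptions for illustration, not values from the report.

```python
# Rough P-T at depth: hydrostatic (fresh-water) pressure plus a linear
# geothermal gradient, compared against CO2's critical point.
def deep_bed_conditions(depth_m, surface_temp_c=15.0, grad_c_per_km=25.0):
    pressure_mpa = 1000.0 * 9.81 * depth_m / 1e6   # rho * g * h, in MPa
    temp_c = surface_temp_c + grad_c_per_km * depth_m / 1000.0
    return pressure_mpa, temp_c

p, t = deep_bed_conditions(1000.0)          # ~1 km, as in the abstract
print(p > 7.38 and t > 31.0)                # both exceed the critical point
```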

  20. Automatic Method of Supernovae Classification by Modeling Human Procedure of Spectrum Analysis

    NASA Astrophysics Data System (ADS)

    Módolo, Marcelo; Rosa, Reinaldo; Guimaraes, Lamartine N. F.

    2016-07-01

    The classification of a recently discovered supernova must be done as quickly as possible in order to define what information will be captured and analyzed in the following days. This classification is not trivial and only a few expert astronomers are able to perform it. This paper proposes an automatic method that models the human procedure of classification. It uses multilayer perceptron neural networks to analyze the supernova spectra. Experiments were performed using different pre-processing steps and multiple neural network configurations to identify the classic types of supernovae. Significant results were obtained, indicating the viability of using this method in places that have no specialist or that require an automatic analysis.
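
    The approach can be sketched in miniature: a multilayer perceptron trained on labelled spectra. Real inputs would be preprocessed supernova spectra; here the two toy classes are Gaussian emission bumps at different wavelengths, and the network is a single hidden layer trained by gradient descent. All sizes, rates and data are illustrative.

```python
# Tiny one-hidden-layer MLP (sigmoid units, cross-entropy gradient) trained
# on synthetic two-class "spectra".
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

def spectrum(center):
    """Toy spectrum: a Gaussian emission bump plus noise."""
    return np.exp(-((x - center) ** 2) / 0.005) + 0.05 * rng.normal(size=x.size)

X = np.array([spectrum(0.3) for _ in range(40)] + [spectrum(0.7) for _ in range(40)])
y = np.array([0.0] * 40 + [1.0] * 40)

W1 = rng.normal(scale=0.5, size=(50, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1));  b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for _ in range(1000):
    h = sig(X @ W1 + b1)                    # hidden layer
    out = sig(h @ W2 + b2).ravel()          # class probability
    d_out = (out - y)[:, None] / len(X)     # cross-entropy gradient at output
    d_h = (d_out @ W2.T) * h * (1 - h)      # backprop through hidden sigmoid
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

accuracy = float(((out > 0.5) == (y > 0.5)).mean())
print(accuracy)
```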

  1. Review of data analysis procedures for the ATS-6 millimeter wave experiment

    NASA Technical Reports Server (NTRS)

    Meneghini, R.

    1975-01-01

    Predictions of satellite downlink attenuation through the use of ground based measurements form a substantial part of the ATS-6 millimeter wave experiment (MWE). At the downlink frequencies (20 and 30 GHz), the major causes of attenuation are the density and the size distribution of rain drops along the propagation path. Ground station data, which include radar and rain gauge records, measure quantities related to the meteorological parameters of interest and thereby provide a prediction of downlink attenuation with which the measured attenuation can be compared. The calibration and data analysis procedures used in the MWE are reviewed with the object of improving the accuracy of such ground based predictions.
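
    In modern practice, the ground-based prediction step reduces to a power-law relation between rain rate and specific attenuation, gamma = k * R**alpha, as in ITU-R P.838-style models. The coefficients below are illustrative only, not the tabulated 20/30 GHz values.

```python
# Path attenuation from a rain-rate power law times an effective path length.
def path_attenuation_db(rain_rate_mm_h, k, alpha, path_km):
    """Total attenuation (dB) over an effective path length."""
    return k * rain_rate_mm_h**alpha * path_km

# illustrative coefficients: k grows with frequency
a20 = path_attenuation_db(25.0, k=0.09, alpha=1.0, path_km=4.0)
a30 = path_attenuation_db(25.0, k=0.19, alpha=1.0, path_km=4.0)
print(a30 > a20)   # higher frequency attenuates more, as at 20 vs 30 GHz
```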

  2. Procedures for the Analysis of Band-recovery Data and User Instructions for Program MULT

    USGS Publications Warehouse

    Conroy, M.J.; Hines, J.E.; Williams, B.K.

    1989-01-01

    We briefly review methods for inference from band-recovery data and introduce a new, flexible procedure (MULT) for analysis of data from bird-banding studies. We compare our computing method to program SURIV and discuss the relative advantages of each. We present several basic model structures that can be analyzed using program MULT and, for each model structure, describe estimation and hypothesis testing and give a data example. We provide a complete description of program MULT, which is IBM-PC compatible and may be run as either an interactive or a batch-mode program.

  3. Diagnosing Crohn's disease: an economic analysis comparing wireless capsule endoscopy with traditional diagnostic procedures.

    PubMed

    Goldfarb, Neil I; Pizzi, Laura T; Fuhr, Joseph P; Salvador, Christopher; Sikirica, Vanja; Kornbluth, Asher; Lewis, Blair

    2004-01-01

    work-up for Crohn's. Sensitivity analysis varying diagnostic yields of colonoscopy and SBFT vs. WCE demonstrates that WCE is still less costly than SBFT and colonoscopy even at their highest reported yields, as long as the diagnostic yield of WCE is 64.10% or better. Employing WCE as a first-line diagnostic procedure appears to be less costly, from a payor perspective, than current common procedures for diagnosing suspected Crohn's disease in the small bowel. Although not addressed in this model, earlier diagnosis with WCE (due to higher diagnostic yield) also could lead to earlier management, improved quality of life and workplace productivity for people with Crohn's disease.

  4. Analysis of active renin heterogeneity.

    PubMed

    Katz, S A; Malvin, R L; Lee, J; Kim, S H; Murray, R D; Opsahl, J A; Abraham, P A

    1991-09-01

    Active renin is a heterogeneous enzyme that can be separated into multiple forms with high-resolution isoelectric focusing. The isoelectric heterogeneity may result from differences in glycosylation between the different forms. In order to determine the relationship between active renin heterogeneity and differences in composition or attachment of oligosaccharides, two separate experiments were performed: (i) Tunicamycin, which interferes with normal glycosylation processing, increased the proportion of relatively basic renin forms secreted into the incubation media by rat renal cortical slices. (ii) Endoglycosidase F, which enzymatically removes carbohydrate from some classes of glycoprotein, similarly increased the proportion of relatively basic forms when incubated with active human recombinant renin. In addition, further studies with inhibitors of human renin activity revealed that the heterogeneous renin forms were similarly inhibited by two separate renin inhibitors. These results are consistent with the hypothesis that renin isoelectric heterogeneity is due in part to differences in carbohydrate moiety attachment and that the heterogeneity of renin does not influence access of direct renin inhibitors to the active site of renin.

  6. Neutron activation analysis of a penny

    NASA Astrophysics Data System (ADS)

    Stevens, Richard E.

    2000-04-01

    Neutron activation analysis has been used for many years as an analysis tool and as an educational tool to teach students about nuclear properties. This article presents an exercise in the neutron activation analysis of a penny which, due to the simplicity of the resulting gamma-ray spectra, is appropriate for general physics classes. Students express a great deal of interest both in seeing the reactor in use and in determining the composition of something that is familiar to them.
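
    In an exercise like this, nuclide identification rests on half-life as well as gamma energy. Below is a sketch of the decay arithmetic, using the accepted ~12.7 h half-life of Cu-64, one nuclide produced when the copper in a penny is irradiated; the initial count rate is invented.

```python
# Radioactive decay: A(t) = A0 * exp(-lambda * t), lambda = ln2 / T_half.
import math

def activity(a0, t_hours, half_life_hours):
    """Decay-corrected activity after t_hours."""
    lam = math.log(2) / half_life_hours
    return a0 * math.exp(-lam * t_hours)

a = activity(1000.0, t_hours=12.7, half_life_hours=12.7)   # one Cu-64 half-life
print(round(a, 1))   # counts drop to half
```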

  7. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    SciTech Connect

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L.

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for the critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population actually failed, and found a statistically significant factor-of-two bias on average.
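
    The kind of bias check described, comparing the mean ASEP-predicted HEP with the observed failure fraction, can be sketched as a binomial test. The observed counts (45 of 4071) are from the abstract; the predicted mean HEP is invented here to illustrate a factor-of-two bias.

```python
# Normal approximation to the binomial: is the observed failure rate
# consistent with a (hypothetical) predicted mean HEP?
import math

n, failures = 4071, 45
observed = failures / n                  # ~0.011
predicted_mean_hep = 0.022               # invented ASEP average, 2x observed

se = math.sqrt(predicted_mean_hep * (1 - predicted_mean_hep) / n)
z = (observed - predicted_mean_hep) / se
print(round(observed / predicted_mean_hep, 2), round(z, 1))  # ratio, z-score
```

    A strongly negative z-score would indicate the predictions are conservative (biased high) relative to observed performance.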

  8. A tiered procedure for assessing the formation of biotransformation products of pharmaceuticals and biocides during activated sludge treatment.

    PubMed

    Kern, Susanne; Baumgartner, Rebekka; Helbling, Damian E; Hollender, Juliane; Singer, Heinz; Loos, Martin J; Schwarzenbach, René P; Fenner, Kathrin

    2010-11-01

    Upon partial degradation of polar organic micropollutants during activated sludge treatment, transformation products (TPs) may be formed that enter the aquatic environment in the treated effluent. However, TPs are rarely considered in prospective environmental risk assessments of wastewater-relevant compound classes such as pharmaceuticals and biocides. Here, we suggest and evaluate a tiered procedure, which includes a fast initial screening step based on high resolution tandem mass spectrometry (HR-MS/MS) and a subsequent confirmatory quantitative analysis, that should facilitate consideration of TPs formed during activated sludge treatment in the exposure assessment of micropollutants. At the first tier, potential biotransformation product structures of seven pharmaceuticals (atenolol, bezafibrate, ketoprofen, metoprolol, ranitidine, valsartan, and venlafaxine) and one biocide (carbendazim) were assembled using computer-based biotransformation pathway prediction and known human metabolites. These target structures were screened for in sludge-seeded batch reactors using HR-MS/MS. The 12 TPs found to form in the batch experiments were then searched for in the effluents of two full-scale, municipal wastewater treatment plants (WWTPs) to confirm the environmental representativeness of this first tier. At the second tier, experiments with the same sludge-seeded batch reactors were carried out to acquire kinetic data for major TPs that were then used as input parameters into a cascaded steady-state completely-stirred tank reactor (CSTR) model for predicting TP effluent concentrations. Predicted effluent concentrations of four parent compounds and their three major TPs were corroborated by comparison to 3-day average influent and secondary effluent mass flows from one municipal WWTP. CSTR model-predicted secondary effluent mass flows agreed within a factor of two with measured mass flows and confidence intervals of predicted and measured mass flows overlapped in all
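
    The cascaded steady-state CSTR model mentioned above can be sketched with first-order kinetics: in each tank the parent compound degrades with rate constant k_p, forming the transformation product (TP), which itself degrades with k_tp. Rate constants, residence time and formation yield below are illustrative, not the paper's fitted values.

```python
# Steady-state mass balances through a cascade of identical CSTRs:
#   parent: C_out = C_in / (1 + k_p * tau)
#   TP:     C_out = (C_in + yield * k_p * tau * C_parent_out) / (1 + k_tp * tau)
def cstr_cascade(c_parent_in, n_tanks, tau_h, k_p, k_tp, yield_tp=1.0):
    cp, ctp = c_parent_in, 0.0
    for _ in range(n_tanks):
        cp_out = cp / (1 + k_p * tau_h)                          # parent balance
        ctp = (ctp + yield_tp * k_p * tau_h * cp_out) / (1 + k_tp * tau_h)
        cp = cp_out
    return cp, ctp

cp, ctp = cstr_cascade(1.0, n_tanks=3, tau_h=4.0, k_p=0.5, k_tp=0.05)
print(round(cp, 3), round(ctp, 3))   # parent nearly gone; TP accumulates
```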

  9. LACBWR primary shield activation analysis

    SciTech Connect

    Nelson, L.L.; Lahti, G.P.; Johnson, W.J.

    1996-11-01

    Nuclear power plants in the US are required to estimate the costs of decommissioning to ensure that adequate funds are accumulated during the useful life of the plant. A major component of the decommissioning cost is the disposal of radioactive material, including material near the reactor created by neutron activation. An accurate assessment of the residual radioactivity in the reactor's primary shield is necessary to determine this portion of the decommissioning demolition and disposal cost. This paper describes the efforts used to determine the activation levels remaining in the primary shield of the LaCrosse boiling water reactor (LACBWR), owned and operated by Dairyland Power Cooperative.

  10. Conference on Instrumental Activation Analysis: IAA 89

    NASA Astrophysics Data System (ADS)

    Vobecky, M.; Obrusnik, I.

    1989-05-01

    The proceedings contain 40 abstracts of papers, all of which have been incorporated in INIS. The papers were centred on the applications of radioanalytical methods, especially neutron activation analysis, X-ray fluorescence analysis, PIXE analysis and tracer techniques in biology, medicine and metallurgy; on measuring instruments, including microcomputers; and on data processing methods.

  11. Strategies for selecting optimal sampling and work-up procedures for analysing alkylphenol polyethoxylates in effluents from non-activated sludge biofilm reactors.

    PubMed

    Stenholm, Ake; Holmström, Sara; Hjärthag, Sandra; Lind, Ola

    2012-01-01

    Trace-level analysis of alkylphenol polyethoxylates (APEOs) in wastewater containing sludge requires the prior removal of contaminants and preconcentration. In this study, the effects on optimal work-up procedures of the types of alkylphenols present, their degree of ethoxylation, the biofilm wastewater treatment and the sample matrix were investigated for these purposes. The sampling spot for APEO-containing specimens from an industrial wastewater treatment plant was optimized, including a box that surrounded the tubing outlet carrying the wastewater, to prevent sedimented sludge from contaminating the collected samples. Following these changes, the sampling precision (in terms of dry matter content) at a point just under the tubing leading from the biofilm reactors was 0.7% RSD. The findings were applied to develop a work-up procedure for use prior to a high-performance liquid chromatography-fluorescence detection analysis method capable of quantifying nonylphenol polyethoxylates (NPEOs) and poorly investigated dinonylphenol polyethoxylates (DNPEOs) at low microg L(-1) concentrations in effluents from non-activated sludge biofilm reactors. The selected multi-step work-up procedure includes lyophilization and pressurized fluid extraction (PFE) followed by strong ion exchange solid phase extraction (SPE). The yields of the combined procedure, according to tests with NP10EO-spiked effluent from a wastewater treatment plant, were in the 62-78% range. PMID:22519096

  12. Procedure for implementation of temperature-dependent mechanical property capability in the Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Glass, David E.; Robinson, James C.

    1990-01-01

    A procedure is presented to allow the use of temperature dependent mechanical properties in the Engineering Analysis Language (EAL) System for solid structural elements. This is accomplished by including a modular runstream in the main EAL runstream. The procedure is applicable for models with multiple materials and with anisotropic properties, and can easily be incorporated into an existing EAL runstream. The procedure (which is applicable for EAL elastic solid elements) is described in detail, followed by a description of the validation of the routine. A listing of the EAL runstream used to validate the procedure is included in the Appendix.

  13. Improved enteral tolerance following step procedure: systematic literature review and meta-analysis.

    PubMed

    Fernandes, Melissa A; Usatin, Danielle; Allen, Isabel E; Rhee, Sue; Vu, Lan

    2016-10-01

    Surgical management of children with short bowel syndrome (SBS) changed with the introduction of the serial transverse enteroplasty procedure (STEP). We conducted a systematic review and meta-analysis using MEDLINE and SCOPUS to determine if children with SBS had improved enteral tolerance following STEP. Studies were included if information about a child's pre- and post-STEP enteral tolerance was provided. A random effects meta-analysis provided a summary estimate of the proportion of children with enteral tolerance increase following STEP. From 766 abstracts, seven case series involving 86 children were included. Mean percent tolerance of enteral nutrition improved from 35.1 to 69.5. Sixteen children had no enteral improvement following STEP. A summary estimate showed that 87 % (95 % CI 77-95 %) of children who underwent STEP had an increase in enteral tolerance. Compilation of the literature supports the belief that SBS subjects' enteral tolerance improves following STEP. Enteral nutritional tolerance is a measure of efficacy of STEP and should be presented as a primary or secondary outcome. By standardizing data collection on children undergoing STEP procedure, better determination of nutritional benefit from STEP can be ascertained. PMID:27461428
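
    The pooling step behind a summary estimate like 87% (95% CI 77-95%) can be sketched as a random-effects meta-analysis of proportions: logit-transformed study proportions combined with DerSimonian-Laird weights. The (events, n) pairs below are invented stand-ins, not the seven case series from the review.

```python
# DerSimonian-Laird random-effects pooling of proportions on the logit scale.
import math

studies = [(9, 10), (12, 15), (8, 9), (10, 12), (14, 15), (11, 14), (9, 11)]

def logit_prop(e, n):
    """Logit-transformed proportion and its variance (0.5 continuity correction)."""
    e, n = e + 0.5, n + 1.0
    p = e / n
    return math.log(p / (1 - p)), 1 / e + 1 / (n - e)

ests, vars_ = zip(*(logit_prop(e, n) for e, n in studies))
w = [1 / v for v in vars_]                                  # fixed-effect weights
fixed = sum(wi * yi for wi, yi in zip(w, ests)) / sum(w)
q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, ests))  # heterogeneity Q
tau2 = max(0.0, (q - (len(studies) - 1)) /
           (sum(w) - sum(wi**2 for wi in w) / sum(w)))      # between-study variance
w_re = [1 / (v + tau2) for v in vars_]                      # random-effects weights
pooled_logit = sum(wi * yi for wi, yi in zip(w_re, ests)) / sum(w_re)
pooled = 1 / (1 + math.exp(-pooled_logit))                  # back-transform
print(round(pooled, 2))
```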

  15. Teaching Core Content Embedded in a Functional Activity to Students with Moderate Intellectual Disability Using a Simultaneous Prompting Procedure

    ERIC Educational Resources Information Center

    Karl, Jennifer; Collins, Belva C.; Hager, Karen D.; Ault, Melinda Jones

    2013-01-01

    The purpose of this study was to investigate the effects of a simultaneous prompting procedure in teaching four secondary students with moderate intellectual disability to acquire and generalize core content embedded in a functional activity. Data gathered within the context of a multiple probe design revealed that all participants learned the…

  16. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 45 Public Welfare 3 2012-10-01 2012-10-01 false What procedures apply to the selection of programs and activities under these regulations? 660.6 Section 660.6 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL...

  17. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 3 2013-10-01 2013-10-01 false What procedures apply to the selection of programs and activities under these regulations? 660.6 Section 660.6 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL...

  18. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 45 Public Welfare 3 2011-10-01 2011-10-01 false What procedures apply to the selection of programs and activities under these regulations? 660.6 Section 660.6 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL...

  19. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 3 2014-10-01 2014-10-01 false What procedures apply to the selection of programs and activities under these regulations? 660.6 Section 660.6 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL...

  20. 31 CFR 1023.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for brokers or dealers in securities. 1023.520 Section 1023.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT...

  1. 31 CFR 1026.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for futures commission merchants and introducing brokers in commodities. 1026.520 Section 1026.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL...

  2. 31 CFR 1029.520 - Special information sharing procedures to deter money laundering and terrorist activity for loan...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for loan or finance companies. 1029.520 Section 1029.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF...

  3. 31 CFR 1028.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for operators of credit card systems. 1028.520 Section 1028.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT...

  4. 31 CFR 1027.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for dealers in precious metals, precious stones, or jewels. 1027.520 Section 1027.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES...

  5. 31 CFR 1024.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for mutual funds. 1024.520 Section 1024.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULES...

  6. 31 CFR 1025.520 - Special information sharing procedures to deter money laundering and terrorist activity for...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for insurance companies. 1025.520 Section 1025.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE...

  7. 45 CFR 660.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 45 Public Welfare 3 2010-10-01 2010-10-01 false What procedures apply to the selection of programs and activities under these regulations? 660.6 Section 660.6 Public Welfare Regulations Relating to Public Welfare (Continued) NATIONAL SCIENCE FOUNDATION INTERGOVERNMENTAL REVIEW OF THE NATIONAL...

  8. 44 CFR 4.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 44 Emergency Management and Assistance 1 2013-10-01 2013-10-01 false What procedures apply to the selection of programs and activities under these regulations? 4.6 Section 4.6 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL...

  9. 44 CFR 4.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 44 Emergency Management and Assistance 1 2014-10-01 2014-10-01 false What procedures apply to the selection of programs and activities under these regulations? 4.6 Section 4.6 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL...

  10. 44 CFR 4.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 44 Emergency Management and Assistance 1 2012-10-01 2011-10-01 true What procedures apply to the selection of programs and activities under these regulations? 4.6 Section 4.6 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL...

  11. 44 CFR 4.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 44 Emergency Management and Assistance 1 2011-10-01 2011-10-01 false What procedures apply to the selection of programs and activities under these regulations? 4.6 Section 4.6 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL...

  12. Small Schools Mathematics Curriculum, 4-6: Reading, Language Arts, Mathematics, Science, Social Studies. Scope, Objectives, Activities, Resources, Monitoring Procedures.

    ERIC Educational Resources Information Center

    Hartl, David, Ed.; And Others

    The Washington grade 4-6 mathematics curriculum is organized according to the Small Schools Materials format which lists the sequence of learning objectives related to a specific curriculum area, recommends a teaching and mastery grade placement, and identifies activities, monitoring procedures and possible resources used in teaching to the…

  13. 40 CFR 29.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 1 2012-07-01 2012-07-01 false What procedures apply to the selection of programs and activities under these regulations? 29.6 Section 29.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL INTERGOVERNMENTAL REVIEW OF ENVIRONMENTAL PROTECTION AGENCY PROGRAMS...

  14. 40 CFR 29.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 1 2011-07-01 2011-07-01 false What procedures apply to the selection of programs and activities under these regulations? 29.6 Section 29.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL INTERGOVERNMENTAL REVIEW OF ENVIRONMENTAL PROTECTION AGENCY PROGRAMS...

  15. 40 CFR 29.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false What procedures apply to the selection of programs and activities under these regulations? 29.6 Section 29.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL INTERGOVERNMENTAL REVIEW OF ENVIRONMENTAL PROTECTION AGENCY PROGRAMS...

  16. 40 CFR 29.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 1 2013-07-01 2013-07-01 false What procedures apply to the selection of programs and activities under these regulations? 29.6 Section 29.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL INTERGOVERNMENTAL REVIEW OF ENVIRONMENTAL PROTECTION AGENCY PROGRAMS...

  17. 40 CFR 29.6 - What procedures apply to the selection of programs and activities under these regulations?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 1 2014-07-01 2014-07-01 false What procedures apply to the selection of programs and activities under these regulations? 29.6 Section 29.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL INTERGOVERNMENTAL REVIEW OF ENVIRONMENTAL PROTECTION AGENCY PROGRAMS...

  18. 31 CFR 1022.520 - Special information sharing procedures to deter money laundering and terrorist activity for money...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 31 Money and Finance:Treasury 3 2014-07-01 2014-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for money services businesses. 1022.520 Section 1022.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL...

  19. 31 CFR 1022.520 - Special information sharing procedures to deter money laundering and terrorist activity for money...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 31 Money and Finance:Treasury 3 2011-07-01 2011-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for money services businesses. 1022.520 Section 1022.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL...

  20. 31 CFR 1022.520 - Special information sharing procedures to deter money laundering and terrorist activity for money...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 31 Money and Finance:Treasury 3 2013-07-01 2013-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for money services businesses. 1022.520 Section 1022.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL...

  1. 31 CFR 1022.520 - Special information sharing procedures to deter money laundering and terrorist activity for money...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 31 Money and Finance:Treasury 3 2012-07-01 2012-07-01 false Special information sharing procedures to deter money laundering and terrorist activity for money services businesses. 1022.520 Section 1022.520 Money and Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL...

  2. Ultrasonic dispersion of soils for routine particle size analysis: recommended procedures

    SciTech Connect

    Heller, P.R.; Hayden, R.E.; Gee, G.W.

    1984-11-01

Ultrasonic techniques were found to be more effective than standard mechanical techniques (i.e., using a dispersing agent and mechanical mixing) for dispersing soils for routine particle-size analysis. Soil samples were tested using an ultrasonic homogenizer at various power outputs. The samples varied widely in texture and mineralogy, and included sands, silts, clays, volcanic soils, and soils high in organic matter. A combination of chemical and ultrasonic dispersion techniques was used in all tests. Hydrometer techniques were used for particle-size analysis. For most materials tested, clay percentage values indicated that ultrasonic dispersion was more complete than mechanical dispersion. Soils high in volcanic ash or iron oxides showed 10 to 20 wt % more clay when using ultrasonic mixing rather than mechanical mixing. The recommended procedure requires ultrasonic dispersion of a 20- to 40-g sample for 15 min at 300 W with a 1.9-cm-diameter ultrasonic homogenizer. 12 references, 5 figures, 1 table.
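The hydrometer readings mentioned above rest on Stokes' law for settling spheres. A minimal sketch of the underlying arithmetic, assuming a typical mineral particle density of 2650 kg/m³ and water at 20 °C (these constants are illustrative, not taken from the report):

```python
G = 9.81          # gravitational acceleration, m/s^2
RHO_S = 2650.0    # assumed particle density for mineral soil, kg/m^3
RHO_W = 998.2     # water density at 20 degC, kg/m^3
ETA = 1.002e-3    # water viscosity at 20 degC, Pa*s

def stokes_velocity(d_m):
    """Terminal settling velocity (m/s) of a sphere of diameter d_m (m)."""
    return d_m**2 * G * (RHO_S - RHO_W) / (18.0 * ETA)

# Clay-size cutoff (2 micrometres): velocity and time to settle 10 cm.
v = stokes_velocity(2e-6)         # ~3.6e-6 m/s
t_hours = 0.10 / v / 3600.0       # roughly 7.7 h, the familiar overnight clay reading
```

This is why the clay fraction is read many hours after mixing: a 2 µm particle takes most of a working day to clear the top 10 cm of the suspension.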

  3. Critical review of some multivariate procedures in the analysis of geochemical data

    USGS Publications Warehouse

    Miesch, A.T.

    1969-01-01

Simulation experiments have been conducted to examine the potential usefulness of R-mode and Q-mode factor methods in the analysis and interpretation of geochemical data. The R-mode factor analysis experiment consisted of constructing a factor model, using the model to generate a correlation matrix, and attempting to recover the model by R-mode techniques. The techniques were successful in determining the number of factors in the model, but the factor loadings could not be estimated even approximately on the basis of mathematical procedures alone. Q-mode factor methods were successful in recovering all of the properties of a model used to generate hypothetical chemical data on olivine samples, but it was necessary to use a correction previously regarded as unimportant. © 1969 Plenum Publishing Corporation.
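The R-mode simulation described (build a factor model, generate its correlation matrix, try to recover it) can be sketched in a few lines. The loadings and the Kaiser eigenvalue-greater-than-one rule below are illustrative choices, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical 2-factor model for 4 variables.
L = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.1, 0.9],
              [0.2, 0.8]])
uniq = 1.0 - (L**2).sum(axis=1)     # uniquenesses so that diag(R) = 1
R = L @ L.T + np.diag(uniq)         # model-implied correlation matrix

# Attempt recovery: eigen-decomposition of R (principal-factor style).
eigvals, eigvecs = np.linalg.eigh(R)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]   # sort descending

n_factors = int((eigvals > 1.0).sum())   # Kaiser criterion
recovered = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
# n_factors comes back as 2, matching the model; but `recovered` is only
# determined up to rotation/sign, which is the paper's point about loadings.
```

The factor *count* is recoverable from the eigenvalue spectrum, while the loadings themselves are indeterminate without extra (geological) constraints, consistent with the finding above.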

  4. Development of a Quality Assurance Procedure for Dose Volume Histogram Analysis

    NASA Astrophysics Data System (ADS)

    Davenport, David A.

The role of the dose-volume histogram (DVH) is rapidly expanding in radiation oncology treatment planning. DVHs are already relied upon to differentiate between two similar plans and evaluate organ-at-risk dosage. Their role will become even more important as progress continues towards implementing biologically based treatment planning systems. Therefore it is imperative that the accuracy of DVHs is evaluated and reappraised after any major software or hardware upgrades affecting a treatment planning system (TPS). The purpose of this work is to create and implement a comprehensive quality assurance procedure evaluating dose volume histograms to ensure their accuracy while satisfying American College of Radiology guidelines. Virtual phantoms of known volumes were created in Pinnacle TPS and exposed to different beam arrangements. Variables including grid size and slice thickness were varied and their effects were analyzed. The resulting DVHs were evaluated by comparison to the commissioned percent depth dose values using a custom Excel spreadsheet. After determining the uncertainty of the DVH based on these variables, multiple second check calculations were performed using MIM Maestro and Matlab software packages. The uncertainties of the DVHs were shown to be less than +/- 3%. The average uncertainty was shown to be less than +/- 1%. The second check procedures resulted in mean percent differences less than 1% which confirms the accuracy of DVH calculation in Pinnacle and the effectiveness of the quality assurance template. The importance of knowing the limits of accuracy of the DVHs, which are routinely used to assess the quality of clinical treatment plans, cannot be overestimated. The developed comprehensive QA procedure evaluating the accuracy of the DVH statistical analysis will become a part of our clinical arsenal for periodic tests of the treatment planning system. It will also be performed at the time of commissioning and after any major software
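The quantity being second-checked here, the cumulative DVH, is simple to state: the fractional structure volume receiving at least each dose level. A hypothetical independent check might look like the following sketch (this is not the Excel/MIM Maestro/Matlab tooling used in the work; the function name and toy doses are ours):

```python
import numpy as np

def cumulative_dvh(doses_gy, bin_width_gy=0.1):
    """Cumulative DVH: fraction of voxels receiving at least each dose level."""
    doses = np.asarray(doses_gy, dtype=float)
    edges = np.arange(0.0, doses.max() + 2 * bin_width_gy, bin_width_gy)
    hist, _ = np.histogram(doses, bins=edges)              # differential DVH
    vol_at_least = hist[::-1].cumsum()[::-1] / doses.size  # cumulative, as fraction
    return edges[:-1], vol_at_least

# Toy phantom: 2 Gy to half the voxels, 1 Gy to the rest.
doses = np.array([1.0] * 50 + [2.0] * 50)
d, v = cumulative_dvh(doses)
# 100% of the volume receives >= 0 Gy; 50% receives >= 1.5 Gy.
```

Comparing such an independently computed curve against the TPS output on a phantom of known volume is the essence of the second-check procedure described.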

  5. The Relationship Between Relative Value Units and Outcomes: A Multivariate Analysis of Plastic Surgery Procedures

    PubMed Central

    Nguyen, Khang T.; Gart, Michael S.; Smetona, John T.; Aggarwal, Apas; Bilimoria, Karl Y.; Kim, John Y. S.

    2012-01-01

    Introduction: Relative value units (RVUs) were developed as a quantifier of requisite training, knowledge, and technical expertise for performing various procedures. In select procedures, increasing RVUs have been shown to substitute well for increasing surgical complexity and have been linked to greater risk of complications. The relationship of RVU to outcomes has yet to be examined in the plastic surgery population. Methods: This study analyzed nearly 15,000 patients from a standardized, multicenter database to better define the link between RVUs and outcomes in this surgical population. The American College of Surgeons’ National Surgical Quality Improvement Program was retrospectively reviewed from 2006 to 2010. Results: A total of 14,936 patients undergoing primary procedures of plastic surgery were identified. Independent risk factors for complications were analyzed using multivariable logistic regression. A unit increase in RVUs was associated with a 1.7% increase in the odds of overall complications and 1.0% increase in the odds of surgical site complications but did not predict mortality or reoperation. A unit increase in RVUs was also associated with a prolongation of operative time by 0.41 minutes, but RVUs only accounted for 15.6% of variability in operative times. Conclusions: In the plastic surgery population, increasing RVUs correlates with increased risks of overall complications and surgical site complications. While increasing RVUs may independently prolong operative times, they only accounted for 15.6% of observed variance, indicating that other factors are clearly involved. These findings must be weighed against the benefits of performing more complex surgeries, including time and cost savings, and considered in each patient's risk-benefit analysis. PMID:23308307
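The reported per-RVU odds increase maps directly back to a logistic-regression coefficient, and compounds multiplicatively over larger RVU differences. A sketch of that arithmetic (the 1.7% figure is from the abstract; the 5% baseline complication probability is purely illustrative):

```python
import math

# exp(beta) = 1.017 per the reported 1.7% increase in odds per unit RVU.
odds_ratio_per_rvu = 1.017
beta = math.log(odds_ratio_per_rvu)        # the underlying regression coefficient

# Odds multiplier for a 10-RVU more complex procedure: ~1.18, i.e. ~18% higher odds.
multiplier_10 = odds_ratio_per_rvu ** 10

def odds_to_prob(odds):
    """Convert odds back to a probability."""
    return odds / (1.0 + odds)

# Illustrative baseline: 5% complication probability -> odds 0.05/0.95.
base_odds = 0.05 / 0.95
new_prob = odds_to_prob(base_odds * multiplier_10)   # ~5.9% at +10 RVUs
```

Note that odds ratios compound geometrically, not additively, which is why small per-unit effects matter over wide RVU ranges.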

  6. A comparative analysis of British and Taiwanese students' conceptual and procedural knowledge of fraction addition

    NASA Astrophysics Data System (ADS)

    Li, Hui-Chuan

    2014-10-01

This study examines students' procedural and conceptual achievement in fraction addition in England and Taiwan. A total of 1209 participants (561 British students and 648 Taiwanese students) at ages 12 and 13 were recruited from England and Taiwan to take part in the study. A quantitative design based on a self-designed written test was adopted. The test has two major parts: the concept part and the skill part. The former is concerned with students' conceptual knowledge of fraction addition and the latter with students' procedural competence in adding fractions. There were statistically significant differences between the British and Taiwanese groups in both the concept and skill parts, with the Taiwanese group scoring higher. The analysis of the students' responses to the skill section indicates that the superiority of Taiwanese students' procedural achievement over that of their British peers arises because most of the former can apply algorithms to adding fractions far more successfully than the latter. Earlier, Hart [1] reported that around 30% of the British students in their study used an erroneous strategy (adding tops and bottoms, for example, 2/3 + 1/7 = 3/10) while adding fractions. This study finds that nearly the same percentage of the British group continued to use this erroneous strategy to add fractions as Hart found in 1981. The study also provides evidence that students' understanding of fractions is confused and incomplete, even among those who can successfully perform the operations. More research is needed to help students make sense of the operations and eventually attain computational competence with meaningful grounding in the domain of fractions.

  7. Sensitivity analysis aimed at blood vessels detection using interstitial optical tomography during brain needle biopsy procedures.

    PubMed

    Pichette, Julien; Goyette, Andréanne; Picot, Fabien; Tremblay, Marie-Andrée; Soulez, Gilles; Wilson, Brian C; Leblond, Frédéric

    2015-11-01

A brain needle biopsy procedure is performed for suspected brain lesions in order to sample tissue that is subsequently analysed using standard histopathology techniques. A common complication resulting from this procedure is brain hemorrhaging from blood vessels clipped off during tissue extraction. Interstitial optical tomography (iOT) has recently been introduced by our group as a means of assessing the presence of blood vessels in the vicinity of the needle. The clinical need to improve safety requires the detection of blood vessels within 2 mm from the outer surface of the needle, since this distance is representative of the volume of tissue that is aspirated during tissue extraction. Here, a sensitivity analysis is presented to establish the intrinsic detection limits of iOT based on simulations and experiments using brain tissue phantoms. It is demonstrated that absorbers can be detected with diameters >300 μm located up to >2 mm from the biopsy needle core for bulk optical properties consistent with brain tissue.

  9. Data acquisition and analysis procedures for high-resolution atomic force microscopy in three dimensions.

    PubMed

    Albers, Boris J; Schwendemann, Todd C; Baykara, Mehmet Z; Pilet, Nicolas; Liebmann, Marcus; Altman, Eric I; Schwarz, Udo D

    2009-07-01

Data acquisition and analysis procedures for noncontact atomic force microscopy that allow the recording of dense three-dimensional (3D) surface force and energy fields with atomic resolution are presented. The main obstacles for producing high-quality 3D force maps are long acquisition times that lead to data sets being distorted by drift and tip changes. Both problems are reduced but not eliminated by low-temperature operation. The procedures presented here employ an image-by-image data acquisition scheme that cuts measurement times by avoiding repeated recording of redundant information, while allowing post-acquisition drift correction. All steps are detailed with the example of measurements performed on highly oriented pyrolytic graphite in ultrahigh vacuum at a temperature of 6 K. The area covered spans several unit cells laterally and vertically from the attractive region to where no force could be measured. The resulting fine data mesh maps piconewton forces with <7 pm lateral and <2 pm vertical resolution. From this 3D data set, two-dimensional cuts along any plane can be plotted. Cuts in a plane parallel to the sample surface show atomic resolution, while cuts along the surface normal visualize how the attractive atomic force fields extend into vacuum. At the same time, maps of the tip-sample potential energy, the lateral tip-sample forces, and the energy dissipated during cantilever oscillation can be produced with identical resolution.
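Once a dense 3D force map F(x, y, z) exists, the 2D cuts and the potential-energy map described above are straightforward array operations: cuts are slices, and the tip-sample potential energy follows from integrating the force along z. A sketch on a synthetic grid (the exponential force model and grid sizes are illustrative, not the measured graphite data):

```python
import numpy as np

nx, ny, nz = 8, 8, 100
z = np.linspace(0.2, 2.0, nz)                            # tip-sample distance, nm
# Synthetic attractive force field, identical over the lateral grid.
F = -np.exp(-z)[None, None, :] * np.ones((nx, ny, 1))    # shape (nx, ny, nz)

lateral_cut = F[:, :, 0]     # plane parallel to the surface (atomic contrast)
vertical_cut = F[3, :, :]    # plane containing the surface normal

# U(z) = integral from z to the far edge of F dz' (taking U -> 0 far away),
# so that F = -dU/dz; here approximated by a reversed cumulative sum.
dz = z[1] - z[0]
U = np.cumsum(F[3, 3, ::-1])[::-1] * dz
```

With an attractive (negative) force, U is negative and deepest close to the surface, exactly the behavior a potential-energy map derived from such data should show.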

  10. Sensitivity analysis aimed at blood vessels detection using interstitial optical tomography during brain needle biopsy procedures

    PubMed Central

    Pichette, Julien; Goyette, Andréanne; Picot, Fabien; Tremblay, Marie-Andrée; Soulez, Gilles; Wilson, Brian C.; Leblond, Frédéric

    2015-01-01

A brain needle biopsy procedure is performed for suspected brain lesions in order to sample tissue that is subsequently analysed using standard histopathology techniques. A common complication resulting from this procedure is brain hemorrhaging from blood vessels clipped off during tissue extraction. Interstitial optical tomography (iOT) has recently been introduced by our group as a means of assessing the presence of blood vessels in the vicinity of the needle. The clinical need to improve safety requires the detection of blood vessels within 2 mm from the outer surface of the needle, since this distance is representative of the volume of tissue that is aspirated during tissue extraction. Here, a sensitivity analysis is presented to establish the intrinsic detection limits of iOT based on simulations and experiments using brain tissue phantoms. It is demonstrated that absorbers can be detected with diameters >300 μm located up to >2 mm from the biopsy needle core for bulk optical properties consistent with brain tissue. PMID:26600990

  11. Light Water Reactor Sustainability Program: Computer-Based Procedures for Field Activities: Results from Three Evaluations at Nuclear Power Plants

    SciTech Connect

    Oxstrand, Johanna; Le Blanc, Katya; Bly, Aaron

    2014-09-01

The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, a research and development (R&D) program sponsored by the Department of Energy (DOE) and performed in close collaboration with industry R&D programs; it provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous gains in efficiency and safety is improved procedure use. Nearly all activities in the nuclear power industry are guided by procedures, which today are printed and executed on paper. This paper-based process has proven to ensure safety; however, there are improvements to be gained. Due to its inherent dynamic nature, a CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. Compared to the static state of paper-based procedures (PBPs), the presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing the time the field worker spends evaluating plant conditions and deciding whether each step applies. This dynamic presentation also minimizes the risk of conducting steps out of order or incorrectly assessing a step's applicability.

  12. Neutron Activation Analysis of Water - A Review

    NASA Technical Reports Server (NTRS)

    Buchanan, John D.

    1971-01-01

    Recent developments in this field are emphasized. After a brief review of basic principles, topics discussed include sources of neutrons, pre-irradiation physical and chemical treatment of samples, neutron capture and gamma-ray analysis, and selected applications. Applications of neutron activation analysis of water have increased rapidly within the last few years and may be expected to increase in the future.

  13. Calculation procedures for the analysis of integral experiments for fusion-reactor design

    NASA Astrophysics Data System (ADS)

    Santoro, R. T.; Barnes, J. M.; Alsmiller, R. G., Jr.; Oblow, E. M.

    1981-07-01

The calculational models, nuclear data, and radiation transport codes used in the analysis of integral measurements of the transport of approximately 14 MeV neutrons through laminated slabs of materials typical of those found in fusion-reactor shields are described. The two-dimensional discrete-ordinates calculations performed to optimize the experimental configuration for reduced neutron and gamma-ray background levels, and to obtain an equivalent, reduced geometry of the calculational model that lowers computer core storage and running times, are also presented. The equations and data used to determine the energy-angle relations of neutrons produced in the reactions of 250 keV deuterons in a titanium tritide target are given. The procedures used to collapse the 171n-36γ VITAMIN-C cross-section library to a 53n-21γ broad-group library are described.
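Group collapse of the kind mentioned in the last sentence is flux-weighted averaging: each broad-group cross section is the sum of the fine-group cross sections weighted by the fine-group flux, divided by the total flux in that broad group. A sketch with illustrative numbers (not the VITAMIN-C data or group structure):

```python
import numpy as np

def collapse(sigma_fine, phi_fine, broad_map):
    """Flux-weighted collapse: broad_map[g] is the broad-group index of fine group g."""
    sigma_fine = np.asarray(sigma_fine, dtype=float)
    phi_fine = np.asarray(phi_fine, dtype=float)
    broad_map = np.asarray(broad_map)
    n_broad = int(broad_map.max()) + 1
    sigma_broad = np.empty(n_broad)
    for b in range(n_broad):
        sel = broad_map == b
        sigma_broad[b] = (sigma_fine[sel] * phi_fine[sel]).sum() / phi_fine[sel].sum()
    return sigma_broad

# Four fine groups collapsed into two broad groups:
sb = collapse([1.0, 2.0, 3.0, 4.0], [4.0, 1.0, 1.0, 1.0], [0, 0, 1, 1])
# First broad group: (1*4 + 2*1)/5 = 1.2; second: (3 + 4)/2 = 3.5
```

The weighting flux is the key input: a collapse is only as good as the representativeness of the spectrum used to generate it, which is why the library is collapsed with a problem-appropriate flux rather than uniform weights.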

  14. Simplified inelastic analysis procedure to evaluate a butt-welded elbow end

    SciTech Connect

    Dhalla, A.K.

    1981-01-01

In a thin-walled piping network, the end of an elbow welded to a straight pipe constitutes one of the highly stressed cross-sections that require structural evaluation. Explicit rules are not provided in the ASME Code for structural evaluation of the elbow ovalization and fabrication effects at the welded end. This paper presents a conservative semi-analytical procedure that can be used with simplified inelastic analysis to evaluate the elbow cross section welded to the straight pipe. The concept of carry-over factors is used to obtain ovalization stresses or strains at the elbow end. The stresses introduced by material and geometric nonuniformities in the fabrication process are then added to the ovalization stresses to complete the structural evaluation of the girth butt-welded elbow joint.

  15. Monitoring and analysis of gravel-packing procedures to explain well performance

    SciTech Connect

    McLeod, H.O. Jr. ); Minarovic, M.J. )

    1994-10-01

    Gravel-packed gas wells completed in the Gulf of Mexico since 1980 were reviewed to build a selective database for a completion-effectiveness study. Gas wells with clean, uniform sands were selected for analysis. Significant monitoring data identified were injectivity tests at different points during the completion and fluid loss rates (barrels per hour). Injectivity before gravel packing and productivity after gravel packing were classified according to sidewall-core permeabilities. Different gravel-pack preparation and execution techniques were reviewed. Fluid-loss-control pills were identified as the greatest source of damage restricting gravel-packed well productivity. Injectivity tests and sidewall-core permeabilities provide valuable information for monitoring well completion procedures.

  16. An automated procedure for the analysis of time-resolved Schottky spectra

    NASA Astrophysics Data System (ADS)

    Bühler, Paul

    2016-05-01

    The unique combination of facilities and instrumentation available at the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, Germany allows us to investigate the decay modes of highly charged ions by Schottky Mass Spectrometry. In single-ion decay spectrometry the fate of single ions cruising in the cooler-storage ring ESR can be followed and their exact decay time is determined. For a fast and repeated analysis of such data sets a highly automated procedure has been developed. The method is demonstrated with a measurement of the He-like 142Pm59+ which decays by electron-capture and β+ decay to 142Nd. For the total decay constant we find a value of λ=0.0164±0.0010 s-1 in the rest frame of the ions and the branching ratio λβ+ /λEC = 3.68 ± 0.014.
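The quoted total decay constant and branching ratio determine the two partial decay constants, since λ = λ_EC + λ_β+ and r = λ_β+/λ_EC. A sketch of that algebra using the central values reported above (uncertainties omitted):

```python
import math

lam_total = 0.0164    # s^-1, total decay constant in the ion rest frame
r = 3.68              # branching ratio lambda_beta+ / lambda_EC

lam_ec = lam_total / (1.0 + r)          # electron-capture partial decay constant
lam_beta = lam_total - lam_ec           # beta+ partial decay constant
half_life = math.log(2) / lam_total     # rest-frame half-life, ~42 s
```

So roughly a fifth of the decays proceed by electron capture, the rest by β+ emission, with a rest-frame half-life of about 42 seconds.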

  17. A Procedure for 3-D Contact Stress Analysis of Spiral Bevel Gears

    NASA Technical Reports Server (NTRS)

    Kumar, A.; Bibel, G.

    1994-01-01

    Contact stress distribution of spiral bevel gears using nonlinear finite element static analysis is presented. Procedures have been developed to solve the nonlinear equations that identify the gear and pinion surface coordinates based on the kinematics of the cutting process and orientate the pinion and the gear in space to mesh with each other. Contact is simulated by connecting GAP elements along the intersection of a line from each pinion point (parallel to the normal at the contact point) with the gear surface. A three dimensional model with four gear teeth and three pinion teeth is used to determine the contact stresses at two different contact positions in a spiral bevel gearset. A summary of the elliptical contact stress distribution is given. This information will be helpful to helicopter and aircraft transmission designers who need to minimize weight of the transmission and maximize reliability.

  18. [Discussion of the validation of thin-layer chromatographic procedure in pharmaceutical analysis].

    PubMed

    Lin, L; Zhang, J

    1997-07-01

TLC has become a convenient, fast, robust, and cost efficient technique in pharmaceutical analysis because of its inherent features. The important analytical performance parameters are summarized, such as accuracy, precision, reproducibility, specificity, detection limit, quantitation limit, linearity and range, etc. The validation methods and the acceptance criteria are discussed for sample preparation, sample stability in solution and on plate, sample exposure time, robustness and quality of stationary and mobile phases, application of sample to plate, spot dimension and shape, temperature, humidity and chamber saturation, detection, and quantitation, etc. It is recommended that the definition of the validation parameters and the validation procedures for quantitative TLC should be described whenever the results are reported. PMID:15739462

  19. Analysis of Convair 990 rejected-takeoff accident with emphasis on decision making, training and procedures

    NASA Technical Reports Server (NTRS)

    Batthauer, Byron E.

    1987-01-01

    This paper analyzes a NASA Convair 990 (CV-990) accident with emphasis on rejected-takeoff (RTO) decision making, training, procedures, and accident statistics. The NASA Aircraft Accident Investigation Board was somewhat perplexed that an aircraft could be destroyed as a result of blown tires during the takeoff roll. To provide a better understanding of tire-failure RTOs, the Board obtained accident reports, Federal Aviation Administration (FAA) studies, and other pertinent information related to the elements of this accident. This material enhanced the analysis process and convinced the Accident Board that high-speed RTOs in transport aircraft should be given more emphasis during pilot training. Pilots should be made aware of various RTO situations and statistics, with emphasis on failed-tire RTOs. This background information could enhance the split-second decision making that is required prior to initiating an RTO.

  20. A single extraction and HPLC procedure for simultaneous analysis of phytosterols, tocopherols and lutein in soybeans.

    PubMed

    Slavin, Margaret; Yu, Liangli Lucy

    2012-12-15

    A saponification/extraction procedure and high performance liquid chromatography (HPLC) analysis method were developed and validated for simultaneous analysis of phytosterols, tocopherols and lutein (a carotenoid) in soybeans. Separation was achieved on a phenyl column with a ternary, isocratic solvent system of acetonitrile, methanol and water (48:22.5:29.5, v/v/v). Evaporative light scattering detection (ELSD) was used to quantify β-sitosterol, stigmasterol, campesterol, and α-, δ- and γ-tocopherols, while lutein was quantified with visible light absorption at 450 nm. Peak identification was verified by retention times and spikes with external standards. Standard curves were constructed (R(2)>0.99) to allow for sample quantification. Recovery of the saponification and extraction procedure was demonstrated via analysis of spiked samples. Also, the accuracy of results for four soybean samples obtained with the described saponification and HPLC method was validated against existing methods. This method offers a more efficient alternative to individual methods for quantifying lutein, tocopherols and sterols in soybeans.
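    The standard-curve step described above (fit a calibration line with R² > 0.99, then invert it to quantify samples) can be sketched as follows; the concentrations and peak areas are hypothetical illustration values, not data from the paper:

```python
def fit_standard_curve(conc, signal):
    """Least-squares calibration line signal = m*conc + c, plus R^2,
    computed from scratch (no external dependencies)."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    m = sxy / sxx
    c = my - m * mx
    ss_res = sum((y - (m * x + c)) ** 2 for x, y in zip(conc, signal))
    ss_tot = sum((y - my) ** 2 for y in signal)
    return m, c, 1.0 - ss_res / ss_tot

def quantify(signal_obs, m, c):
    """Invert the calibration line to estimate analyte concentration."""
    return (signal_obs - c) / m

# Hypothetical lutein standards (ug/mL) and A450 peak areas
conc = [1.0, 2.0, 5.0, 10.0, 20.0]
area = [12.1, 24.3, 60.0, 121.5, 240.2]
m, c, r2 = fit_standard_curve(conc, area)
print(round(r2, 4), round(quantify(90.0, m, c), 2))
```

    A curve failing the R² > 0.99 acceptance criterion would be rejected before any sample quantification.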

  1. Maori heads (mokomokai): the usefulness of a complete forensic analysis procedure.

    PubMed

    Charlier, Philippe; Huynh-Charlier, Isabelle; Brun, Luc; Champagnat, Julie; Laquay, Laetitia; Hervé, Christian

    2014-09-01

    Based on an analysis of 19 mummified Maori heads (mokomokai) referred to our forensic laboratory for anthropological analysis prior to their official repatriation from France to New Zealand, and data from the anthropological and medical literature, we propose a complete forensic procedure for the analysis of such pieces. A list of 12 original morphological criteria was developed. Items included the sex, age at death, destruction of the skull base, the presence of argil deposits in the inner part of the skull, nostrils closed with exogenous material, sewing of eyelids and lips, pierced earlobes, ante-mortem and/or post-mortem tattoos, the presence of vegetal fibers within nasal cavities, and other pathological or anthropological anomalies. These criteria were tested for all 19 mokomokai repatriated to New Zealand by the French authorities. Further complementary analyses were limited to fiberscopic examination of the intracranial cavities because of the taboo on any sampling requested by the Maori authorities. In the context of global repatriation of human artifacts to native communities, this type of anthropological expertise is increasingly frequently requested of forensic anthropologists and other practitioners. We discuss the reasons for and against repatriating non-authentic artifacts to such communities and the role played by forensic anthropologists during the authentication process.

  2. A procedure to find thermodynamic equilibrium constants for CO2 and CH4 adsorption on activated carbon.

    PubMed

    Trinh, T T; van Erp, T S; Bedeaux, D; Kjelstrup, S; Grande, C A

    2015-03-28

    Thermodynamic equilibrium for adsorption means that the chemical potential of gas and adsorbed phase are equal. A precise knowledge of the chemical potential is, however, often lacking, because the activity coefficient of the adsorbate is not known. Adsorption isotherms are therefore commonly fitted to ideal models such as the Langmuir, Sips or Henry models. We propose here a new procedure to find the activity coefficient and the equilibrium constant for adsorption which uses the thermodynamic factor. Instead of fitting the data to a model, we calculate the thermodynamic factor and use this to find first the activity coefficient. We show, using published molecular simulation data, how this procedure gives the thermodynamic equilibrium constant and enthalpies of adsorption for CO2(g) on graphite. We also use published experimental data to find similar thermodynamic properties of CO2(g) and of CH4(g) adsorbed on activated carbon. The procedure gives a higher accuracy in the determination of enthalpies of adsorption than ideal models do.
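    The central quantity in the procedure above is the thermodynamic factor, Gamma = d ln p / d ln q, evaluated along the isotherm. A minimal numerical sketch (this is an illustration of the quantity, not the authors' full procedure; the Langmuir parameters are hypothetical): for an ideal Langmuir isotherm, Gamma = 1/(1 - theta) analytically, so a finite-difference estimate from tabulated (p, q) points can be checked against that limit.

```python
import math

def langmuir_loading(p, qmax, K):
    """Ideal Langmuir isotherm q(p) = qmax*K*p / (1 + K*p)."""
    return qmax * K * p / (1.0 + K * p)

def thermodynamic_factor(p_vals, q_vals):
    """Central-difference estimate of Gamma = d ln p / d ln q from
    tabulated (p, q) isotherm points."""
    gammas = []
    for i in range(1, len(p_vals) - 1):
        dlnp = math.log(p_vals[i + 1]) - math.log(p_vals[i - 1])
        dlnq = math.log(q_vals[i + 1]) - math.log(q_vals[i - 1])
        gammas.append(dlnp / dlnq)
    return gammas

# Hypothetical isotherm: qmax = 5 mmol/g, K = 0.1 1/bar, geometric pressure grid
qmax, K = 5.0, 0.1
p = [0.5 * 1.2 ** i for i in range(30)]
q = [langmuir_loading(pi, qmax, K) for pi in p]
G = thermodynamic_factor(p, q)
# For Langmuir, Gamma should equal 1/(1 - theta) with theta = q/qmax
theta_mid = q[1] / qmax
print(G[0], 1.0 / (1.0 - theta_mid))
```

    For a real adsorbate the numerical Gamma deviates from the ideal-model value, and it is that deviation which the authors exploit to extract the activity coefficient instead of forcing the data into a Langmuir, Sips or Henry fit.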

  3. New advancements in the analysis procedures of the electrochemical hydrogen permeation experimental data

    NASA Astrophysics Data System (ADS)

    Al-Faqeer, Faisal M.

    This thesis presents two major breakthroughs in the analysis procedures for hydrogen permeation data from the electrochemical hydrogen permeation technique, to determine all relevant parameters of the hydrogen evolution reaction (HER) and hydrogen absorption reaction (HAR). These include major modifications to the original Iyer-Pickering-Zamanzadeh (IPZ) analysis. The first advancement was modifying the original IPZ analysis for competitive adsorption by including the surface coverage of a second adsorbate. This modification was applied to experimental data from the literature, where the effect of iodide ions on the HER and HAR was studied and qualitatively evaluated using the original IPZ analysis (which ignores the surface coverage of iodide ions), and to experimental data obtained in this research on the effect of hexamethylenetetramine (HMTA) on the HER and HAR. The new analysis was able to evaluate all relevant parameters, which include the exchange current density of the HER, io; the discharge rate constant, k1; the recombination rate constant, k2; the hydrogen surface coverage, thetaH; and the kinetic-diffusion constant, k, which comprises the absorption rate constant, kabs, the desorption rate constant, kdes, the hydrogen diffusivity, DH, and the membrane thickness, L; in addition to the surface coverages of iodide ions, thetaI, and HMTA, thetaHMTA. The thetaI and thetaHMTA values were also determined using EQCM and polarization data and showed reasonable agreement with those determined by the new IPZ analysis. The second advancement was modifying the IPZ analysis to include the thickness effect, so that the analysis can evaluate all the parameters, including kabs and kdes, instead of determining k from one membrane thickness. The original IPZ analysis can evaluate kabs and kdes only if at least three thicknesses are used to evaluate k. This modification still retains the competitive adsorption conditions and will be able to determine the surface

  4. Heavy Metal and Trace Metal Analysis in Soil by Sequential Extraction: A Review of Procedures

    PubMed Central

    Zimmerman, Amanda Jo; Weindorf, David C.

    2010-01-01

    Quantification of heavy and trace metal contamination in soil can be arduous, requiring the use of lengthy and intricate extraction procedures which may or may not give reliable results. Of the many procedures in publication, some are designed to operate within specific parameters while others are designed for broader application. Most procedures have been modified since their inception, which creates ambiguity as to which procedure is most acceptable in a given situation. For this study, the Tessier, Community Bureau of Reference (BCR), Short, Galán, and Geological Survey of Canada (GSC) procedures were examined to clarify the benefits and limitations of each. Modifications of the Tessier, BCR, and GSC procedures were also examined. The efficacy of these procedures is addressed by looking at the soils used in each procedure, the limitations, applications, and future of sequential extraction. PMID:20414344

  5. Effects of an Activity-Based Anorexia Procedure on Within-Session Changes in Nose-Poke Responding

    ERIC Educational Resources Information Center

    Aoyama, Kenjiro

    2012-01-01

    This study tested the effects of an activity-based anorexia (ABA) procedure on within-session changes in responding. In the ABA group (N = 8), rats were given a 60-min feeding session and allowed to run in a running wheel for the remainder of each day. During the daily 60-min feeding session, each nose-poke response was reinforced by a food…

  6. Supervised pattern recognition procedures for discrimination of whiskeys from gas chromatography/mass spectrometry congener analysis.

    PubMed

    González-Arjona, Domingo; López-Pérez, Germán; González-Gallero, Víctor; González, A Gustavo

    2006-03-22

    The volatile congener analysis of 52 commercialized whiskeys (24 samples of single malt Scotch whiskey, 18 samples of bourbon whiskey, and 10 samples of Irish whiskey) was carried out by gas chromatography/mass spectrometry after liquid-liquid extraction with dichloromethane. Pattern recognition procedures were applied for discrimination of different whiskey categories. Multivariate data analysis includes linear discriminant analysis (LDA), k nearest neighbors (KNN), soft independent modeling of class analogy (SIMCA), procrustes discriminant analysis (PDA), and artificial neural network techniques involving multilayer perceptrons (MLP) and probabilistic neural networks (PNN). Classification rules were validated by considering the number of false positives (FPs) and false negatives (FNs) of each class associated with the prediction set. Artificial neural networks led to the best results because of their intrinsic nonlinear features. Both techniques, MLP and PNN, gave zero FPs and zero FNs for all of the categories. KNN is a nonparametric method that also provides zero FPs and FNs for every class, but only when selecting K = 3 neighbors. PDA also produced good results (zero FPs and FNs in all cases), but only by selecting nine principal components for class modeling. LDA showed poorer classification performance because it builds linear frontiers between classes, which does not hold in many real situations. LDA led to one FP for bourbons and one FN for scotches. The worst results were obtained with SIMCA, which gave a higher number of FPs (five for both scotches and bourbons) and FNs (six for scotches and two for bourbons). The possible cause of these findings is the strong influence of class inhomogeneities on SIMCA performance. It is remarkable that in every case, all of the methodologies led to zero FPs and FNs for the Irish whiskeys.
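    The KNN validation scheme described above (classify with K = 3 neighbors, then count false positives and false negatives per class against a prediction set) can be sketched in a few lines; the two-feature "congener" data below are hypothetical, not the paper's GC/MS measurements:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = sorted((math.dist(x, tx), ty) for tx, ty in zip(train_X, train_y))
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def fp_fn_counts(true_labels, pred_labels, cls):
    """False positives / false negatives for one class over a prediction set."""
    fp = sum(1 for t, p in zip(true_labels, pred_labels) if p == cls and t != cls)
    fn = sum(1 for t, p in zip(true_labels, pred_labels) if t == cls and p != cls)
    return fp, fn

# Hypothetical 2-D congener features (e.g. two ester concentrations)
train_X = [(1.0, 1.1), (1.2, 0.9), (0.9, 1.0), (3.0, 3.2), (3.1, 2.9), (2.8, 3.0)]
train_y = ["scotch", "scotch", "scotch", "bourbon", "bourbon", "bourbon"]
test_X = [(1.1, 1.0), (3.0, 3.0)]
test_y = ["scotch", "bourbon"]
preds = [knn_predict(train_X, train_y, x, k=3) for x in test_X]
print(preds, fp_fn_counts(test_y, preds, "scotch"))
```

    Zero FPs and zero FNs for a class, as reported for K = 3 in the study, means every prediction set member of that class was recovered and no member of another class was misassigned to it.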

  8. [Analysis of the efficiency and influence factors of PBSC collection with AutoPBSC and MNC procedure of cell separator].

    PubMed

    Zeng, Feng; Wei, Shi-Jing; Huang, Hao-Bo; Huang, Qing-Hua; Lin, Qiu-Yan; Fan, Li-Ping; Huang, Hui-Wen; Fu, Dan-Hui

    2014-12-01

    This study aimed to analyze the efficiency of, and the factors influencing, PBSC collection by an automatic (AutoPBSC) and a semi-automatic (MNC) apheresis procedure of the COBE Spectra cell separator. According to the collection objects, a total of 109 apheresis cases were divided into an autologous cohort (patients) and an allogeneic cohort (donors). The quantity and quality of the collections and the characteristics of the apheresis procedures were compared, and the yields and influence factors of the two cohorts with the two procedures were analyzed. The results showed that collections by the two procedures, which processed similar blood volumes, did not differ significantly in MNC%, CD34⁺%, CD34⁺ cell counts or Hb concentration in either patients or donors (P > 0.05); collections by the AutoPBSC procedure contained fewer platelets and smaller product volumes, but required more ACD-A and longer apheresis time, than those by the MNC procedure (P < 0.05). Correlation analysis indicated that in the autologous cohort, MNC (r = 0.314, P = 0.015) and CD34⁺ cell counts (r = 0.922, P = 0.000) in collections were positively correlated with pre-apheresis values for both procedures, and CD34⁺ cell counts were correlated with WBC (r = 0.369, P = 0.004) and MNC (r = 0.495, P = 0.000) in collections; in the allogeneic cohort, MNC (r = 0.896, P = 0.000) was positively correlated with pre-apheresis values for the AutoPBSC procedure, as were CD34⁺ cell counts (r = 0.666, P = 0.000) for the MNC procedure. Among patients collected with the AutoPBSC procedure, males yielded more MNC and CD34⁺ cells than females (P < 0.05), and those aged ≤ 40 yielded more MNC and CD34⁺ cells than those aged > 40 (P < 0.05); with the MNC procedure, patients aged > 40 yielded more CD34⁺ cells than those aged ≤ 40 (P < 0.05). Among donors, only males yielded more MNC and CD34⁺ cells than females with the MNC procedure (P < 0.05). It is concluded that, with the same amount of blood processed, the PBSC collections from autologous patients and allogeneic donors achieved a high degree

  9. Development of the thermal behavior analysis code DIRAD and the fuel design procedure for LMFBR

    NASA Astrophysics Data System (ADS)

    Nakae, N.; Tanaka, K.; Nakajima, H.; Matsumoto, M.

    1992-06-01

    It is very important to increase the fuel linear heat rating to improve the economy of an LMFBR without any degradation in safety. A reduction of the design margin is helpful in achieving high-power operation, and the development of a fuel design code and a design procedure is effective in reducing the design margin. The thermal behavior analysis code DIRAD has been developed with respect to fuel restructuring and gap conductance models. These models have been calibrated and revised using irradiation data for fresh fuel; the code is therefore found to be applicable to thermal analysis of fresh fuel. The uncertainties in the fuel irradiation condition and fuel fabrication tolerance, together with the uncertainty of the code prediction, are the major contributions to the design margin. In the current fuel design, the first two uncertainties contribute independently to the temperature increment. An alternative method, which can rationally account for the effect of the uncertainties on the temperature increment, is adopted here; the design margin may then be rationally reduced.

  10. Multifractal detrended fluctuation analysis of optogenetic modulation of neural activity

    NASA Astrophysics Data System (ADS)

    Kumar, S.; Gu, L.; Ghosh, N.; Mohanty, S. K.

    2013-02-01

    Here, we introduce a computational procedure to examine whether optogenetically activated neuronal firing recordings could be characterized as multifractal series. Optogenetics is emerging as a valuable experimental tool and a promising approach for studying a variety of neurological disorders in animal models. The spiking patterns from cortical region of the brain of optogenetically-stimulated transgenic mice were analyzed using a sophisticated fluctuation analysis method known as multifractal detrended fluctuation analysis (MFDFA). We observed that the optogenetically-stimulated neural firings are consistent with a multifractal process. Further, we used MFDFA to monitor the effect of chemically induced pain (formalin injection) and optogenetic treatment used to relieve the pain. In this case, dramatic changes in parameters characterizing a multifractal series were observed. Both the generalized Hurst exponent and width of singularity spectrum effectively differentiates the neural activities during control and pain induction phases. The quantitative nature of the analysis equips us with better measures to quantify pain. Further, it provided a measure for effectiveness of the optogenetic stimulation in inhibiting pain. MFDFA-analysis of spiking data from other deep regions of the brain also turned out to be multifractal in nature, with subtle differences in the parameters during pain-induction by formalin injection and inhibition by optogenetic stimulation. Characterization of neuronal firing patterns using MFDFA will lead to better understanding of neuronal response to optogenetic activation and overall circuitry involved in the process.
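    The analysis pipeline named above (MFDFA: integrate the series into a profile, detrend it locally in windows of scale s, form the q-th order fluctuation function F_q(s), and read the generalized Hurst exponent h(q) off the log-log slope) can be sketched compactly. This is a minimal illustration run on synthetic white noise, not the authors' neural recordings:

```python
import numpy as np

def mfdfa(signal, scales, q_list, order=1):
    """Minimal multifractal detrended fluctuation analysis: returns the
    generalized Hurst exponents h(q), the log-log slopes of F_q(s) vs s."""
    profile = np.cumsum(signal - np.mean(signal))
    n = len(profile)
    Fq = np.zeros((len(q_list), len(scales)))
    for j, s in enumerate(scales):
        nseg = n // s
        f2 = []
        for v in range(nseg):
            seg = profile[v * s:(v + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)          # local polynomial trend
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        f2 = np.array(f2)
        for i, q in enumerate(q_list):
            if q == 0:
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[i, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    return [np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
            for i in range(len(q_list))]

# Gaussian white noise is monofractal: h(q) should stay near 0.5 for all q
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
h = mfdfa(x, scales=[16, 32, 64, 128, 256], q_list=[-2, 0, 2])
print([round(v, 2) for v in h])
```

    A genuinely multifractal series, such as the optogenetically stimulated firing patterns described above, shows h(q) varying markedly with q, and the width of the singularity spectrum derived from h(q) is the second discriminating parameter the authors use.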

  11. Human hair neutron activation analysis: Analysis on population level, mapping

    NASA Astrophysics Data System (ADS)

    Zhuk, L. I.; Kist, A. A.

    1999-01-01

    Neutron activation analysis is an outstanding analytical method with very wide applications in various fields. The analysis of human hair over recent decades, based mostly on neutron activation analysis, is an attractive illustration of the application of nuclear analytical techniques. A very interesting question is how elemental composition differs among areas and cities. In this connection, the present paper gives average data and maps for various localities in the vicinity of the drying-out Aral Sea and for various industrial cities in Central Asia.

  12. Analysis of the cold-water restraint procedure in gastric ulceration and body temperature.

    PubMed

    Landeira-Fernandez, J

    2004-10-15

    Gastric mucosal injury induced by body restraint can be enhanced when combined with cold-water immersion. Based on this fact, the present study had two main purposes: (i) to examine the contribution of each of these two forms of stress to the development of gastric ulceration and the regulation of body temperature, and (ii) to investigate the importance of the animal's consciousness in gastric ulceration induced by cold-water restraint. Independent groups of animals were exposed for 3 h to one of the following stressful treatments: body restraint plus cold-water (20 ± 1 degrees C) immersion, body restraint alone, or cold-water immersion alone. Control animals were not exposed to any form of stress. Half of the animals submitted to each of the four treatments were anesthetized with thionembutal (35 mg/kg), whereas the other half were injected with saline. Results indicated that body restraint alone was not sufficient to induce gastric ulceration or changes in body temperature. On the other hand, cold-water exposure, either alone or in conjunction with body restraint, induced the same amount of stomach erosions and hypothermia. Therefore, it appears that body restraint does not play an important role in gastric ulceration induced by the cold-water restraint procedure. Present results also indicated that conscious and anesthetized animals immersed in cold water presented robust gastric ulceration and a marked drop in body temperature. However, conscious animals developed more severe gastric damage than anesthetized animals, although both groups presented the same degree of hypothermia. These findings suggest that hypothermia resulting from cold-water exposure has a deleterious effect on gastric ulceration, but the animal's conscious activity during the cold-water immersion increases the severity of gastric mucosal damage. It is concluded that cold-water restraint is a useful procedure for the study of the underlying mechanisms involved in stress

  13. Function Allocation in Complex Socio-Technical Systems: Procedure usage in nuclear power and the Context Analysis Method for Identifying Design Solutions (CAMIDS) Model

    NASA Astrophysics Data System (ADS)

    Schmitt, Kara Anne

    This research aims to prove that strict adherence to procedures and rigid compliance to process in the US nuclear industry may not prevent incidents or increase safety. According to the Institute of Nuclear Power Operations, the nuclear power industry has seen a recent rise in events, and this research claims that a contributing factor to this rise is organizational and cultural, based on people's overreliance on procedures and policy. Understanding the proper balance of function allocation, automation, and human decision-making is imperative to creating a nuclear power plant that is safe, efficient, and reliable. This research claims that new generations of operators are less engaged in active thinking because they have been instructed to follow procedures to a fault. According to operators, they were once expected to know the plant and its interrelations, but organizationally more importance is now placed on following procedure and policy. Literature reviews were performed, experts were questioned, and a model for context analysis was developed. The Context Analysis Method for Identifying Design Solutions (CAMIDS) Model was created, verified, and validated through both peer review and application to real-world scenarios in active nuclear power plant simulators. These experiments supported the claim that strict adherence and rigid compliance to procedures may not increase safety, by studying the industry's propensity for following incorrect procedures and the cases in which this directly affects the safety or security of the plant. The findings of this research indicate that the younger generations of operators rely heavily on procedures, and that the organizational pressure of required compliance with procedures may lead to incidents within the plant, because operators feel pressured into following the rules and policy above performing the correct actions in a timely manner. The findings support computer-based procedures, efficient alarm systems, and skill-of-the-craft matrices. The solution to

  14. Determination of sodium in biological materials by instrumental neutron activation analysis.

    PubMed

    Cunningham, W C; Capar, S G; Anderson, D L

    1997-01-01

    A formalized method for determining sodium in biological materials by instrumental neutron activation analysis is presented. The method includes common procedures from the numerous options available to this historically nonformalized analytical technique. The number of procedural options is restricted to minimize the method's complexity, yet the method is still applicable to a variety of neutron activation facilities. High accuracy and precision are achieved by placing bounds on allowed uncertainty at critical stages of the analysis. Analytical results from the U.S. Food and Drug Administration laboratory and 4 other laboratories demonstrate the method's performance.

  15. [Ocra Method: development of a new procedure for analysis of multiple tasks subject to infrequent rotation].

    PubMed

    Occhipinti, E; Colombini, Daniela; Occhipinti, M

    2008-01-01

    In the Ocra methods (Ocra index and Ocra Checklist), when computing the final indices (Ocra index or checklist score) in the case of more than one repetitive task, a "traditional" procedure was already proposed, the results of which can be defined as a "time-weighted average". This approach appears appropriate for rotations among tasks that are performed very frequently, for instance almost once every hour (or over shorter periods). However, when rotation among repetitive tasks is less frequent (i.e. once every 1 1/2 hours or more), the time-weighted average approach can underestimate the exposure level (as it practically flattens peaks of high exposure). For those scenarios an alternative approach based on the "most stressful task as minimum" may be more realistic. This latter approach has already been included in the NIOSH approach for multiple sequential lifting tasks, and given the recent availability in the Ocra method of more detailed duration multipliers (practically a different DuM for each one-hour step of duration of the repetitive task), it is now possible to define a procedure to compute the complex Ocra Multitask Index (cOCRA) and the complex Checklist Score (cCHESCO) for the analysis of two or more repetitive tasks when rotations are infrequent (every 1 1/2 hours or more). The result of this approach will be at least equal to the index of the most stressful task considered for its individual daily duration, and at most equal to the index of the most stressful task when it is (only theoretically) considered as lasting for the overall daily duration of all examined repetitive tasks. The procedure is based on the following formula: Complex Ocra Multitask Index = ocra1(Dum1) + (Delta ocra1 x K), where 1, 2, 3, ..., N = repetitive tasks ordered by ocra index values (1 = highest; N = lowest), computed considering the respective real duration multipliers (Dumi). ocra1 = ocra index of

  16. Sensitivity analysis of standardization procedures in drought indices to varied input data selections

    NASA Astrophysics Data System (ADS)

    Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu

    2016-07-01

    Reasonable input data selection is of great significance for accurate computation of drought indices. In this study, a comprehensive comparison is conducted of the sensitivity of two commonly used standardization procedures (SPs) in drought indices to datasets, namely the probability-distribution-based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are first generated with the moving window method. The whole time series and the 23 sub-datasets are then used to compute the two indices separately, comparing their spatiotemporal differences as well as their performance in capturing drought areas. Finally, a systematic investigation in terms of changing climatic conditions and varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated and consistent in drought area characterization. Sensitivity analysis shows that among the three parameters in the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behavior among different datasets is primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for duration factors further aggravates the dependence of the drought index on input datasets compared with the original empirical algorithm that Palmer used, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impact of dataset selection on the sensitivity of drought index computation, which has significant implications for proper usage of drought
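    The probability-distribution-based standardization named above maps each observation to a cumulative probability under a fitted distribution (GEV for SPDI) and then through the inverse standard-normal CDF. A minimal sketch of the idea, substituting an empirical (Weibull plotting position) probability for the parametric GEV fit and using hypothetical data:

```python
from statistics import NormalDist

def standardize_empirical(values):
    """Probability-based standardization: map each value to its empirical
    non-exceedance probability (Weibull plotting position, rank/(n+1)),
    then through the inverse standard-normal CDF. This replaces the
    parametric GEV fit used by SPDI with an empirical estimate."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    z = [0.0] * n
    nd = NormalDist()
    for rank, i in enumerate(order, start=1):
        p = rank / (n + 1)
        z[i] = nd.inv_cdf(p)
    return z

# Hypothetical monthly moisture-anomaly series
series = [12.0, 35.0, 7.0, 50.0, 22.0, 18.0, 41.0, 3.0, 28.0]
z = standardize_empirical(series)
print([round(v, 2) for v in z])
```

    The driest value maps to the most negative standardized score. The sensitivity question the study addresses is how much these scores change when the distribution parameters are re-estimated from a different 30-year sub-dataset.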

  17. Radioactivity analysis in niobium activation foils

    SciTech Connect

    Mueller, G.E.

    1995-06-01

    The motivation for this study was to measure and analyze the activity of six (6) niobium (Nb) foils (via the x-rays from an internal transition in Nb-93m) and to apply this information together with previously obtained activation foil data. The niobium data were used to determine the epithermal-to-MeV range of the neutron spectrum and fluence. The foil activation data were re-evaluated in a spectrum analysis code (STAY'SL) to provide new estimates of the exposure at the Los Alamos Spallation Radiation Effects Facility (LASREF). The activity of the niobium foils was measured and analyzed at the University of Missouri-Columbia (UMC) under the direction of Professor William Miller. The spectrum analysis was performed at the University of Missouri-Rolla (UMR) by Professor Gary Mueller.

  18. IFCC primary reference procedures for the measurement of catalytic activity concentrations of enzymes at 37 degrees C. International Federation of Clinical Chemistry and Laboratory Medicine. Part 4. Reference procedure for the measurement of catalytic concentration of alanine aminotransferase.

    PubMed

    Schumann, Gerhard; Bonora, Roberto; Ceriotti, Ferruccio; Férard, Georges; Ferrero, Carlo A; Franck, Paul F H; Gella, F Javier; Hoelzel, Wieland; Jørgensen, Poul Jørgen; Kanno, Takashi; Kessner, Art; Klauke, Rainer; Kristiansen, Nina; Lessinger, Jean-Marc; Linsinger, Thomas P J; Misaki, Hideo; Panteghini, Mauro; Pauwels, Jean; Schiele, Françoise; Schimmel, Heinz G; Weidemann, Gerhard; Siekmann, Lothar

    2002-07-01

    This paper is the fourth in a series dealing with reference procedures for the measurement of catalytic activity concentrations of enzymes at 37 degrees C and the certification of reference preparations. Other parts deal with: Part 1. The Concept of Reference Procedures for the Measurement of Catalytic Activity Concentrations of Enzymes; Part 2. Reference Procedure for the Measurement of Catalytic Concentration of Creatine Kinase; Part 3. Reference Procedure for the Measurement of Catalytic Concentration of Lactate Dehydrogenase; Part 5. Reference Procedure for the Measurement of Catalytic Concentration of Aspartate Aminotransferase; Part 6. Reference Procedure for the Measurement of Catalytic Concentration of Gamma-Glutamyltransferase; Part 7. Certification of Four Reference Materials for the Determination of Enzymatic Activity of Gamma-Glutamyltransferase, Lactate Dehydrogenase, Alanine Aminotransferase and Creatine Kinase at 37 degrees C. A document describing the determination of preliminary upper reference limits is also in preparation. The procedure described here is deduced from the previously described 30 degrees C IFCC reference method. Differences are tabulated and commented on in Appendix 2.
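    Reference procedures of this kind report a catalytic activity concentration (U/L) derived from the measured rate of absorbance change via the Beer-Lambert law. A generic sketch of that calculation (the molar absorptivity and volumes below are illustrative clinical-chemistry values, not the certified constants of the IFCC 37 degrees C procedure):

```python
def catalytic_activity_u_per_l(dA_per_min, eps_l_per_mol_cm, path_cm,
                               v_total_ml, v_sample_ml):
    """Catalytic activity concentration (U/L, i.e. umol/min/L) from the
    rate of absorbance change:
        b = (dA/dt) * 1e6 * Vt / (eps * d * Vs)
    with eps in L/(mol*cm); the 1e6 factor converts mol to umol."""
    return dA_per_min * 1e6 * v_total_ml / (eps_l_per_mol_cm * path_cm * v_sample_ml)

# Hypothetical run: NADH monitored near 339 nm (eps ~ 6300 L/(mol*cm)),
# 1 cm light path, 3.0 mL total reaction volume, 0.3 mL sample
print(round(catalytic_activity_u_per_l(0.050, 6300.0, 1.0, 3.0, 0.3), 1))
```

    In the ALT reference procedure the monitored species is NADH consumed by the indicator reaction; the certified constants, measurement conditions and uncertainty bounds are specified in the IFCC document itself.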

  19. IFCC primary reference procedures for the measurement of catalytic activity concentrations of enzymes at 37 degrees C. International Federation of Clinical Chemistry and Laboratory Medicine. Part 6. Reference procedure for the measurement of catalytic concentration of gamma-glutamyltransferase.

    PubMed

    Schumann, Gerhard; Bonora, Roberto; Ceriotti, Ferruccio; Férard, Georges; Ferrero, Carlo A; Franck, Paul F H; Gella, F Javier; Hoelzel, Wieland; Jørgensen, Poul Jørgen; Kanno, Takashi; Kessner, Art; Klauke, Rainer; Kristiansen, Nina; Lessinger, Jean-Marc; Linsinger, Thomas P J; Misaki, Hideo; Panteghini, Mauro; Pauwels, Jean; Schiele, Françoise; Schimmel, Heinz G; Weidemann, Gerhard; Siekmann, Lothar

    2002-07-01

    This paper is the sixth in a series dealing with reference procedures for the measurement of catalytic activity concentrations of enzymes at 37 degrees C and the certification of reference preparations. Other parts deal with: Part 1. The Concept of Reference Procedures for the Measurement of Catalytic Activity Concentrations of Enzymes; Part 2. Reference Procedure for the Measurement of Catalytic Concentration of Creatine Kinase; Part 3. Reference Procedure for the Measurement of Catalytic Concentration of Lactate Dehydrogenase; Part 4. Reference Procedure for the Measurement of Catalytic Concentration of Alanine Aminotransferase; Part 5. Reference Procedure for the Measurement of Catalytic Concentration of Aspartate Aminotransferase; Part 7. Certification of Four Reference Materials for the Determination of Enzymatic Activity of Gamma-Glutamyltransferase, Lactate Dehydrogenase, Alanine Aminotransferase and Creatine Kinase at 37 degrees C. A document describing the determination of preliminary upper reference limits is also in preparation. The procedure described here is deduced from the previously described 30 degrees C IFCC reference method. Differences are tabulated and commented on in Appendix 1.

  20. The relationship between ram sperm head morphometry and fertility depends on the procedures of acquisition and analysis used.

    PubMed

    de Paz, Paulino; Mata-Campuzano, María; Tizado, E Jorge; Alvarez, Mercedes; Alvarez-Rodríguez, Manuel; Herraez, Paz; Anel, Luis

    2011-10-15

    Sperm head morphometry is a parameter in the evaluation of semen that has been associated with fertility in two ways: comparing morphometric measures between predefined fertility groups, or analyzing morphometric data by multivariate techniques to identify cell populations. We analyzed the morphometry of the ram sperm head by three procedures and examined its relationship with male fertility. A Computer-Aided Sperm Morphometric Assessment procedure (CASMA), image analysis software (NIS-Elements) in combination with an optical microscope (MO-NIS), and the same image analysis software in combination with a scanning electron microscope (SEM-NIS) were used. Eight morphometric parameters were assessed: length, width, area, perimeter, ellipticity, form factor, elongation and regularity. We observed significant differences between the morphometric data of the sperm head obtained with the three study procedures: the CASMA procedure gave the highest values for all parameters and the SEM-NIS procedure the lowest. The analysis of a semen sample, when only the means of the morphometric parameters are used to describe the cell population, is too limited to interpret its fertilizing capacity; it is essential to analyze the complex structure of the samples by defining subpopulations with multivariate methods. With few exceptions, the means of each morphometric parameter differed between the three subpopulations identified in each procedure. Only the subpopulations obtained with the MO-NIS procedure showed a significant correlation with male fertility. In short, an instrumental standard for the analysis of sperm morphometry is needed to obtain reliable results, and we believe that the MO-NIS system meets these basic requirements. PMID:21798583

  1. Analysis of a guided-response procedure in visual discriminations by rats.

    PubMed Central

    Aronsohn, S; Pinto-Hamuy, T; Toledo, P; Asenjo, P

    1987-01-01

    A guided-response procedure was used to train a visual pattern discrimination by rats in a modified Sutherland box. The method consisted of guiding the animal to the correct choice by means of a retractable bridge that led to reinforcers, followed by gradually removing this prompt. This method was compared to a stimulus-fading procedure, in which the initial differences between discriminative stimuli were gradually faded until they differed only with respect to the critical dimension for discrimination, and to a trial-and-error procedure. Both gradual procedures resulted in fewer errors compared to the trial-and-error procedure. The higher efficiency of the fading procedures was attributed to less aversiveness derived from performance with few errors and to the use of step-by-step requirements relative to the criterion performance. PMID:3612020

  2. Comparison of metabolic and biomechanic responses to active vs. passive warm-up procedures before physical exercise.

    PubMed

    Brunner-Ziegler, Sophie; Strasser, Barbara; Haber, Paul

    2011-04-01

    Active warm-up before physical exercise is a widely accepted practice to enhance physical performance, whereas data on modalities that passively raise tissue temperature are rare. The study compared the effect of active vs. passive warm-up procedures before exercise on energy supply and muscle strength performance. Twenty young, male volunteers performed 3 spiroergometer-test series without prior warm-up and after either an active or a passive warm-up procedure. Oxygen uptake (VO2), heart rate (HR), pH value, and lactate were determined at 80% of individual VO2max values and during recovery. Comparing no prior warm-up with passive warm-up, pH values were lower at the fourth test minute (p < 0.004), and lactate values were higher at the sixth and third minutes of recovery (p < 0.01 and p < 0.010, respectively), after no prior warm-up. Comparing active with passive warm-up, HR was lower, and VO2 values were higher at the fourth and sixth test minutes (p < 0.033 and p < 0.011, respectively, and p < 0.015 and p < 0.022, respectively) after active warm-up. The differentiation between active and passive warm-up was more pronounced than that between either warm-up procedure and no warm-up. Conditions that may promote improved performance were more evident after active than after passive warm-up. Thus, athletes may reach the metabolic steady state faster after active warm-up. PMID:20733525

  3. Aeroelastic Analysis of the NASA/ARMY/MIT Active Twist Rotor

    NASA Technical Reports Server (NTRS)

    Wilkie, W. Keats; Wilbur, Matthew L.; Mirick, Paul H.; Cesnik, Carlos E. S.; Shin, SangJoon

    1999-01-01

    Aeroelastic modeling procedures used in the design of a piezoelectric controllable twist helicopter rotor wind tunnel model are described. Two aeroelastic analysis methods developed for active twist rotor studies, and used in the design of the model blade, are described in this paper. The first procedure uses a simple flap-torsion dynamic representation of the active twist blade, and is intended for rapid and efficient control law and design optimization studies. The second technique employs a commercially available comprehensive rotor analysis package, and is used for more detailed analytical studies. Analytical predictions of hovering flight twist actuation frequency responses are presented for both techniques. Forward flight fixed system nP vibration suppression capabilities of the model active twist rotor system are also presented. Frequency responses predicted using both analytical procedures agree qualitatively for all design cases considered, with best correlation for cases where uniform blade properties are assumed.

  4. Affection Activities: Procedures for Encouraging Young Children with Handicaps to Interact with Their Peers.

    ERIC Educational Resources Information Center

    McEvoy, Mary A.; And Others

    1990-01-01

    Affection activities (such as hugging, smiling, and saying positive things) can be added to typical preschool games and songs to encourage interaction between handicapped children and nonhandicapped peers. The intervention can be adapted for use with children with diverse handicapping conditions. Typical activities, modified directions for…

  5. 15 CFR 400.37 - Procedure for notification of proposed production activity.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... production activity. 400.37 Section 400.37 Commerce and Foreign Trade Regulations Relating to Commerce and... of proposed production activity. (a) Submission of notification. A notification for production... Secretary and to CBP (as well as to the grantee of the zone, if the grantee is not the party making...

  6. 15 CFR 400.37 - Procedure for notification of proposed production activity.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... production activity. 400.37 Section 400.37 Commerce and Foreign Trade Regulations Relating to Commerce and... of proposed production activity. (a) Submission of notification. A notification for production... Secretary and to CBP (as well as to the grantee of the zone, if the grantee is not the party making...

  7. The effects of activation procedures on regional cerebral blood flow in humans

    SciTech Connect

    Rozenfeld, D.; Wolfson, L.I.

    1981-07-01

    Regional cerebral blood flow (r-CBF) can be measured using 133Xe and collimated detectors. The radionuclide can be administered either by inhalation or by intracarotid injection. Comparison of blood flow determinations at rest and during performance of an activity identifies those brain regions that become active during the performance of the activity. Relatively specific patterns of r-CBF are observed during hand movements, sensory stimulation, eye movements, speech, listening, and reading. Regional CBF changes during reasoning and memorization are less specific and less well characterized. It is clear that brain lesions affect r-CBF responses to various activities, but this effect has not been well correlated with functional deficits or recovery of function. Regional CBF measurement gives information about brain activity and the functional response to experimental manipulation. This approach may well add to our understanding of normal, as well as pathologic, brain functioning.

  8. A targeted change-detection procedure by combining change vector analysis and post-classification approach

    NASA Astrophysics Data System (ADS)

    Ye, Su; Chen, Dongmei; Yu, Jie

    2016-04-01

    In remote sensing, conventional supervised change-detection methods usually require effective training data for multiple change types. This paper introduces a more flexible and efficient procedure that seeks to identify only the changes that users are interested in, hereafter referred to as "targeted change detection". Based on the one-class classifier "Support Vector Domain Description" (SVDD), a novel algorithm named "Three-layer SVDD Fusion" (TLSF) is developed specifically for targeted change detection. The proposed algorithm combines one-class classification generated from change vector maps, as well as before- and after-change images, to obtain a more reliable detection result. In addition, this paper introduces a detailed workflow for implementing this algorithm. This workflow has been applied to two case studies with different practical monitoring objectives: urban expansion and forest fire assessment. The experimental results of these two case studies show that the overall accuracy of the proposed algorithm (Kappa statistics of 86.3% and 87.8% for Cases 1 and 2, respectively) is superior to applying SVDD to change vector analysis and to post-classification comparison.
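
    SVDD describes the no-change class by enclosing its training samples in a minimal (kernelised) hypersphere and flags points that fall outside it. As a rough illustration only, here is a naive centroid-plus-radius-quantile stand-in for that idea on synthetic pixel features; the real TLSF algorithm solves a quadratic program and fuses three such classifiers:

```python
import numpy as np

# Naive "spherical data description" for targeted change detection: enclose the
# no-change training pixels in a sphere (centroid plus a radius quantile) and
# flag anything outside as change. Simplified stand-in for kernelised SVDD;
# the pixel features are synthetic.
rng = np.random.default_rng(1)
no_change = rng.normal(0.0, 1.0, size=(200, 2))   # training: stable pixels
changed = rng.normal(6.0, 1.0, size=(20, 2))      # test: genuinely changed pixels

center = no_change.mean(axis=0)
radius = np.quantile(np.linalg.norm(no_change - center, axis=1), 0.95)

def is_change(x):
    return np.linalg.norm(x - center, axis=-1) > radius

detection_rate = is_change(changed).mean()
false_alarm_rate = is_change(no_change).mean()
print(detection_rate, false_alarm_rate)
```

    The 0.95 quantile trades a small training false-alarm rate for robustness to outliers in the no-change set.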

  9. Modular approach to customise sample preparation procedures for viral metagenomics: a reproducible protocol for virome analysis

    PubMed Central

    Conceição-Neto, Nádia; Zeller, Mark; Lefrère, Hanne; De Bruyn, Pieter; Beller, Leen; Deboutte, Ward; Yinda, Claude Kwe; Lavigne, Rob; Maes, Piet; Ranst, Marc Van; Heylen, Elisabeth; Matthijnssens, Jelle

    2015-01-01

    A major limitation for better understanding the role of the human gut virome in health and disease is the lack of validated methods that allow high throughput virome analysis. To overcome this, we evaluated the quantitative effect of homogenisation, centrifugation, filtration, chloroform treatment and random amplification on a mock-virome (containing nine highly diverse viruses) and a bacterial mock-community (containing four faecal bacterial species) using quantitative PCR and next-generation sequencing. This resulted in an optimised protocol that was able to recover all viruses present in the mock-virome and strongly altered the ratio of viral versus bacterial and 16S rRNA genetic material in favour of viruses (from 43.2% to 96.7% viral reads and from 47.6% to 0.19% bacterial reads). Furthermore, our study indicated that most of the currently used virome protocols, using small filter pores and/or stringent centrifugation conditions, may have largely overlooked large viruses present in viromes. We propose NetoVIR (Novel enrichment technique of VIRomes), which allows for a fast, reproducible and high throughput sample preparation for viral metagenomics studies, introducing minimal bias. This procedure is optimised mainly for faecal samples, but with appropriate concentration steps can also be used for other sample types with lower initial viral loads. PMID:26559140

  10. Assessing short summaries with human judgments procedure and latent semantic analysis in narrative and expository texts.

    PubMed

    León, José A; Olmos, Ricardo; Escudero, Inmaculada; Cañas, José J; Salmerón, Lalo

    2006-11-01

    In the present study, we tested a computer-based procedure for assessing very concise summaries (50 words long) of two types of text (narrative and expository) using latent semantic analysis (LSA) in comparison with the judgments of four human experts. LSA was used to estimate semantic similarity using six different methods: four holistic (summary-text, summary-summaries, summary-expert summaries, and pregraded-ungraded summary) and two componential (summary-sentence text and summary-main sentence text). A total of 390 Spanish middle and high school students (14-16 years old) and six experts read a narrative or expository text and later summarized it. The results support the viability of developing a computerized assessment tool using human judgments and LSA, although the correlation between human judgments and LSA was higher in the narrative text than in the expository, and LSA correlated more with human content ratings than with human coherence ratings. Finally, the holistic methods were found to be more reliable than the componential methods analyzed in this study.
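
    The LSA scoring idea reduces a term-document matrix with a truncated SVD and compares summary and source by cosine similarity in the latent space. A toy sketch with a three-document "corpus" (real LSA spaces are trained on large corpora):

```python
import numpy as np

# Minimal LSA similarity scoring: build a term-document count matrix, reduce it
# with a truncated SVD, and compare a summary to its source text by cosine
# similarity in the latent space. The three-document "corpus" is a toy example.
docs = [
    "the knight rode to the castle and fought the dragon",  # source text
    "the knight fought the dragon at the castle",           # on-topic summary
    "stock prices fell sharply on tuesday morning",         # off-topic summary
]
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2] * s[:2]                        # documents in a 2-D latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

good = cosine(Z[0], Z[1])                   # source vs on-topic summary
bad = cosine(Z[0], Z[2])                    # source vs off-topic summary
print(good, bad)
```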

  11. CT scan-based finite element analysis of premolar cuspal deflection following operative procedures.

    PubMed

    Magne, Pascal; Oganesyan, Tevan

    2009-08-01

    The objective of this investigation was to present a novel method to facilitate and accelerate geometry acquisition/modification during the fabrication of finite element models of tooth restorations. Microcomputed tomographic data, stereolithography, and surface-driven automatic meshing were used to generate premolar finite element models with different occlusoproximal cavity preparations and corresponding composite resin restorations. Occlusal loading was simulated by nonlinear contact analysis. Cuspal widening was measured and correlated with existing experimental data for model validation. Cuspal widening during application of a 150-N load ranged from 2.7 μm for the unrestored tooth to 5 to 179 μm for the different preparations and 3.5 to 6.9 μm for the different restorations. The described method was efficient and generated detailed and valid three-dimensional finite element models. These models were used to study the effect of restorative procedures on cuspal deflection and revealed high cuspal strains associated with mesio-occlusodistal preparations and restorations compared to individual two-surface preparations. This study confirmed that, whenever possible during removal of interdental decay, an intact marginal ridge should be maintained to avoid three-surface preparations such as the mesio-occlusodistal and the high cuspal strain associated with this design. PMID:19639057

  12. Analysis of the slider force calibration procedure for the British Pendulum Skid Resistance Tester

    NASA Astrophysics Data System (ADS)

    Hiti, Miha; Ducman, Vilma

    2014-02-01

    British Pendulum Skid Resistance Testers are used by laboratories worldwide, across many fields, to determine the slip/skid resistance of surfaces. The instrument itself can give reproducible results; however, the comparison of results obtained by different instruments can show large deviations. This paper presents a comparison of the performance of four pendulum testers, an investigation of the requirements in international standards, and an analysis of the calibration procedure for the determination of the slider force/deflection characteristics. The slider force/deflection characteristics were measured manually and also automatically with a uniaxial tensile/compression testing machine using different techniques. The results highlight the importance of the shape of the slider force/deflection characteristic and its influence on the results indicated by the pendulum tester, and outline inconsistencies between different international standards describing the same device and its requirements. The presented results show good reproducibility and comparability of the pendulum test results when calibration is performed with the assembled pendulum, either manually or automatically, provided the stricter slider force characteristic envelope requirements are taken into consideration. The actual slider force should be stable from 1.5 mm deflection onwards.

  13. Modular approach to customise sample preparation procedures for viral metagenomics: a reproducible protocol for virome analysis.

    PubMed

    Conceição-Neto, Nádia; Zeller, Mark; Lefrère, Hanne; De Bruyn, Pieter; Beller, Leen; Deboutte, Ward; Yinda, Claude Kwe; Lavigne, Rob; Maes, Piet; Van Ranst, Marc; Heylen, Elisabeth; Matthijnssens, Jelle

    2015-11-12

    A major limitation for better understanding the role of the human gut virome in health and disease is the lack of validated methods that allow high throughput virome analysis. To overcome this, we evaluated the quantitative effect of homogenisation, centrifugation, filtration, chloroform treatment and random amplification on a mock-virome (containing nine highly diverse viruses) and a bacterial mock-community (containing four faecal bacterial species) using quantitative PCR and next-generation sequencing. This resulted in an optimised protocol that was able to recover all viruses present in the mock-virome and strongly altered the ratio of viral versus bacterial and 16S rRNA genetic material in favour of viruses (from 43.2% to 96.7% viral reads and from 47.6% to 0.19% bacterial reads). Furthermore, our study indicated that most of the currently used virome protocols, using small filter pores and/or stringent centrifugation conditions, may have largely overlooked large viruses present in viromes. We propose NetoVIR (Novel enrichment technique of VIRomes), which allows for a fast, reproducible and high throughput sample preparation for viral metagenomics studies, introducing minimal bias. This procedure is optimised mainly for faecal samples, but with appropriate concentration steps can also be used for other sample types with lower initial viral loads.

  14. Adaptive kernel independent component analysis and UV spectrometry applied to characterize the procedure for processing prepared rhubarb roots.

    PubMed

    Wang, Guoqing; Hou, Zhenyu; Peng, Yang; Wang, Yanjun; Sun, Xiaoli; Sun, Yu-an

    2011-11-01

    By determination of the number of absorptive chemical components (ACCs) in mixtures using median absolute deviation (MAD) analysis and extraction of spectral profiles of ACCs using kernel independent component analysis (KICA), an adaptive KICA (AKICA) algorithm was proposed. The proposed AKICA algorithm was used to characterize the procedure for processing prepared rhubarb roots by resolution of the measured mixed raw UV spectra of the rhubarb samples that were collected at different steaming intervals. The results show that the spectral features of ACCs in the mixtures can be directly estimated without chemical and physical pre-separation and other prior information. The estimated three independent components (ICs) represent different chemical components in the mixtures, which are mainly polysaccharides (IC1), tannin (IC2), and anthraquinone glycosides (IC3). The variations of the relative concentrations of the ICs can account for the chemical and physical changes during the processing procedure: IC1 increases significantly before the first 5 h, and is nearly invariant after 6 h; IC2 has no significant changes or is slightly decreased during the processing procedure; IC3 decreases significantly before the first 5 h and decreases slightly after 6 h. The changes of IC1 can explain why the colour became black and darkened during the processing procedure, and the changes of IC3 can explain why the processing procedure can reduce the bitter and dry taste of the rhubarb roots. The endpoint of the processing procedure can be determined as 5-6 h, when the increasing or decreasing trends of the estimated ICs are insignificant. The AKICA-UV method provides an alternative approach for the characterization of the processing procedure of rhubarb roots preparation, and provides a novel way for determination of the endpoint of the traditional Chinese medicine (TCM) processing procedure by inspection of the change trends of the ICs.
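
    The first step of the procedure, estimating the number of absorptive chemical components, can be illustrated by counting the singular values of the mixed-spectra matrix that rise above a robust noise threshold. The spectra and the MAD rule below are illustrative, not the exact algorithm of the paper:

```python
import numpy as np

# Sketch of the component-counting step: with several mixed UV spectra, the
# number of absorbing chemical components appears as the number of singular
# values standing clearly above the noise floor, thresholded here with a robust
# median-absolute-deviation (MAD) rule. Spectra and threshold are illustrative.
rng = np.random.default_rng(2)
wavelengths = np.linspace(0.0, 1.0, 200)

# Three "pure" component spectra (Gaussian bands) mixed into 12 measurements
pure = np.stack([np.exp(-((wavelengths - c) / 0.05) ** 2) for c in (0.2, 0.5, 0.8)])
mixing = rng.uniform(0.2, 1.0, size=(12, 3))
spectra = mixing @ pure + rng.normal(0.0, 1e-3, size=(12, 200))  # add noise

s = np.linalg.svd(spectra, compute_uv=False)
med = np.median(s)
mad = np.median(np.abs(s - med))
n_components = int(np.sum(s > med + 5 * mad))  # singular values above the noise floor
print(n_components)
```

    With the number of components fixed, an ICA step (kernel ICA in the paper) would then extract the component spectra themselves.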

  15. Transforming Teacher Education, An Activity Theory Analysis

    ERIC Educational Resources Information Center

    McNicholl, Jane; Blake, Allan

    2013-01-01

    This paper explores the work of teacher education in England and Scotland. It seeks to locate this work within conflicting sociocultural views of professional practice and academic work. Drawing on an activity theory framework that integrates the analysis of these seemingly contradictory discourses with a study of teacher educators' practical…

  16. Computer-automated neutron activation analysis system

    SciTech Connect

    Minor, M.M.; Garcia, S.R.

    1983-01-01

    An automated delayed neutron counting and instrumental neutron activation analysis system has been developed at Los Alamos National Laboratory's Omega West Reactor (OWR) to analyze samples for uranium and 31 additional elements with a maximum throughput of 400 samples per day. 5 references.

  17. Destruction-free procedure for the isolation of bacteria from sputum samples for Raman spectroscopic analysis.

    PubMed

    Kloß, Sandra; Lorenz, Björn; Dees, Stefan; Labugger, Ines; Rösch, Petra; Popp, Jürgen

    2015-11-01

    Lower respiratory tract infections are the fourth leading cause of death worldwide. Here, a timely identification of the causative pathogens is crucial to the success of the treatment. Raman spectroscopy allows for quick identification of bacterial cells without the need for the time-consuming cultivation steps that are the current gold standard for detecting pathogens. However, before Raman spectroscopy can be used to identify pathogens, they have to be isolated from the sample matrix, i.e., sputum in the case of lower respiratory tract infections. In this study, we report an isolation protocol for single bacterial cells from sputum samples for Raman spectroscopic identification. Prior to the isolation, a liquefaction step using the proteolytic enzyme mixture Pronase E is required in order to deal with the high viscosity of sputum. The extraction of the bacteria was subsequently performed via different filtration and centrifugation steps, whereby isolation ratios between 46 and 57% were achieved for sputa spiked with 6·10⁷ to 6·10⁴ CFU/mL of Staphylococcus aureus. The compatibility of such a liquefaction and isolation procedure with a Raman spectroscopic classification was shown for five different model species, namely S. aureus, Staphylococcus epidermidis, Streptococcus pneumoniae, Klebsiella pneumoniae, and Pseudomonas aeruginosa. A classification of single-cell Raman spectra of these five species with an accuracy of 98.5% could be achieved on the basis of a principal component analysis (PCA) followed by a linear discriminant analysis (LDA). These classification results could be validated with an independent test dataset, where 97.4% of all spectra were identified correctly. Graphical Abstract: Development of an isolation protocol for bacterial cells from sputum samples, followed by Raman spectroscopic measurement and species identification using chemometric models. PMID:26041453
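
    The PCA-LDA classification chain can be sketched on synthetic "spectra": project the mean-centred data onto a few principal components, then separate two classes with a Fisher discriminant. All signals below are made up; a real model would be trained on measured single-cell spectra:

```python
import numpy as np

# PCA followed by a two-class Fisher linear discriminant, mirroring the PCA-LDA
# chain used to classify single-cell Raman spectra. The "spectra" are synthetic:
# class B adds one extra band to a shared baseline signal.
rng = np.random.default_rng(3)
n, p = 40, 100                                   # spectra per class, spectral points
axis = np.linspace(0.0, 6.0, p)
base = np.sin(axis)
class_a = base + rng.normal(0.0, 0.3, (n, p))
class_b = base + 0.8 * np.exp(-((axis - 3.0) / 0.4) ** 2) + rng.normal(0.0, 0.3, (n, p))
X = np.vstack([class_a, class_b])
y = np.array([0] * n + [1] * n)

# PCA via SVD on the mean-centred data; keep the first 5 scores
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T

# Fisher LDA on the scores: w = Sw^-1 (m1 - m0), threshold midway between means
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0], rowvar=False) + np.cov(scores[y == 1], rowvar=False)
w = np.linalg.solve(Sw, m1 - m0)
threshold = (m0 @ w + m1 @ w) / 2.0
pred = (scores @ w > threshold).astype(int)
accuracy = float((pred == y).mean())
print(accuracy)
```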

  18. Evaluation of integration procedures for PNA analysis by C-13 NMR

    SciTech Connect

    Yamashita, G.T.; Saetre, R.; Somogyvari, A. )

    1989-04-01

    Carbon-13 NMR spectroscopy has become a useful tool in the analysis of crude oils and petroleum products. The determination of the relative percentages of paraffinic, naphthenic and aromatic carbon in crude oils and oil fractions by carbon-13 nuclear magnetic resonance spectroscopy has been shown to compare well with results from ASTM D2140 procedures. In the ASTM method, the PNA analysis is determined from the refractive index, the density and the molecular weight of the material. In carbon-13 NMR spectroscopy, the carbon types are observed directly and, under appropriate conditions, can be readily quantified. Aromatic carbon resonances are observed in the 110-170 ppm region of the spectrum, and naphthenic carbon resonances in the 25-60 ppm region. The paraffinic carbon signals are seen as a band of overlapping resonances in the 0-25 ppm region and as sharp resonances in the 25-60 ppm region. The relative percentages of these three carbon types are calculated from the ratio of the area of the resonances for each carbon type to the total area, excluding the solvent and reference peak areas. To study the effects of the baseline subtraction and the baseline correction subroutines on the PNA results, a typical crude oil and its asphaltene fraction were selected as model compounds. Generally, asphaltene samples display poor sensitivity when analyzed by carbon-13 NMR, so this fraction was chosen to examine the abilities of the two subroutines under non-ideal conditions. As well, the effect of line broadening factors on the PNA results was studied. Line broadening increases the signal-to-noise ratio, but also leads to a decrease in resolution.
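
    The PNA calculation itself is a ratio of integrated regions. A minimal sketch on a synthetic spectrum (a real analysis would first apply the baseline subroutines discussed above and exclude solvent and reference peaks):

```python
import numpy as np

# Carbon-type fractions from integrated spectral regions: sum the intensity over
# the aromatic (110-170 ppm) and aliphatic (0-60 ppm) windows and express each
# as a share of the total area. The spectrum and peak positions are synthetic.
ppm = np.linspace(0.0, 200.0, 4000)
dppm = ppm[1] - ppm[0]

def band(center, width, height):
    return height * np.exp(-((ppm - center) / width) ** 2)

spectrum = band(20, 5, 1.0) + band(40, 8, 0.8) + band(128, 6, 0.6)  # toy peaks

def region_area(lo, hi):
    mask = (ppm >= lo) & (ppm < hi)
    return spectrum[mask].sum() * dppm   # simple rectangle-rule integration

aromatic = region_area(110, 170)
aliphatic = region_area(0, 60)
pct_aromatic = 100.0 * aromatic / (aromatic + aliphatic)
print(round(pct_aromatic, 1))
```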

  19. Effects of different reconstitution procedures on membrane protein activities in proteopolymersomes

    NASA Astrophysics Data System (ADS)

    Choi, Hyo-Jick; Germain, Jeffrey; Montemagno, Carlo D.

    2006-04-01

    The development of membrane protein reconstitution methods in polymersomes is regarded as a major challenge in replicating cellular functions in engineered cellular mimetic systems. We present a solvent-free membrane protein reconstitution method which can be used in polymersomes. To test our method, we reconstructed in vitro proton-powered ATP synthesis using engineered artificial organelles (proteopolymersomes with reconstituted bacteriorhodopsin (BR) and F0F1-ATP synthase). We compared the functionality of BR and ATP synthase between the two preparation methods. From the difference in the direction of proton pumping and the ATP production profile, it is evident that the relative orientation of BR can be determined by the condition of the proteins (BR monomer, purple membrane), together with the incorporation techniques. As the new procedure eliminates the problem of protein denaturation during incorporation, this research is expected to enhance the potential applications of synthetic membranes in the future fabrication of hybrid protein/polymer systems.

  20. A New Variable Weighting and Selection Procedure for K-Means Cluster Analysis

    ERIC Educational Resources Information Center

    Steinley, Douglas; Brusco, Michael J.

    2008-01-01

    A variance-to-range ratio variable weighting procedure is proposed. We show how this weighting method is theoretically grounded in the inherent variability found in data exhibiting cluster structure. In addition, a variable selection procedure is proposed to operate in conjunction with the variable weighting technique. The performances of these…
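
    One plausible reading of the weighting idea, sketched on synthetic data: a variable whose variance is high relative to its squared range tends to concentrate its spread in separated groups and is up-weighted before k-means, while a uniformly spread masking variable is down-weighted. The exact normalisation used in the paper may differ:

```python
import numpy as np

# Sketch of variance-to-range variable weighting before k-means. A clustered
# variable (two well-separated groups) gets a higher variance-to-squared-range
# ratio than a uniformly spread masking variable. Data and the exact
# normalisation are illustrative.
rng = np.random.default_rng(4)
n = 100
cluster_var = np.concatenate([rng.normal(-3.0, 0.5, n), rng.normal(3.0, 0.5, n)])
masking_var = rng.uniform(-5.0, 5.0, 2 * n)       # no cluster structure
X = np.column_stack([cluster_var, masking_var])

variances = X.var(axis=0)
ranges = X.max(axis=0) - X.min(axis=0)
weights = variances / ranges ** 2                 # variance-to-range-type ratio
X_weighted = X * weights                          # then fed to a standard k-means
print(weights)
```

    The weighted data `X_weighted` would then be clustered with an ordinary k-means, optionally after dropping variables whose weight falls below a selection cutoff.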

  1. Decreasing Inappropriate Vocalizations Using Classwide Group Contingencies and Color Wheel Procedures: A Component Analysis

    ERIC Educational Resources Information Center

    Kirk, Emily R.; Becker, Jennifer A.; Skinner, Christopher H.; Fearrington, Jamie Yarbrough; McCane-Bowling, Sara J.; Amburn, Christie; Luna, Elisa; Greear, Corinne

    2010-01-01

    Teacher referrals for consultation resulted in two independent teams collecting evidence that allowed for a treatment component evaluation of color wheel (CW) procedures and/or interdependent group-oriented reward (IGOR) procedures on inappropriate vocalizations in one third- and one first-grade classroom. Both studies involved the application of…

  2. A Component Analysis of Toilet-Training Procedures Recommended for Young Children

    ERIC Educational Resources Information Center

    Greer, Brian D.; Neidert, Pamela L.; Dozier, Claudia L.

    2016-01-01

    We evaluated the combined and sequential effects of 3 toilet-training procedures recommended for use with young children: (a) underwear, (b) a dense sit schedule, and (c) differential reinforcement. A total of 20 children participated. Classroom teachers implemented a toilet-training package consisting of all 3 procedures with 6 children. Of the 6…

  3. Detailed analysis of CAMS procedures for phase 3 using ground truth inventories

    NASA Technical Reports Server (NTRS)

    Carnes, J. G.

    1979-01-01

    The results of a study of Procedure 1 as used during LACIE Phase 3 are presented. The study was performed by comparing the Procedure 1 classification results with digitized ground-truth inventories. The proportion estimation accuracy, dot labeling accuracy, and clustering effectiveness are discussed.

  4. Rapid screening and analysis of alpha- and gamma-emitting radionuclides in liquids using a single sample preparation procedure.

    PubMed

    Parsa, Bahman; Henitz, James B; Carter, Jennifer A

    2011-02-01

    A multifaceted radiochemical testing procedure has been developed to analyze a large number of liquid samples and measure a wide range of radionuclides in a short period of time. This method involves a single, unique and fast sample preparation procedure and allows sequential/concurrent determination of analytes with accuracy and precision. The same prepared sample can be selectively analyzed by gross alpha counting, gamma-ray spectroscopy, and alpha spectroscopy. This method is especially attractive in radiological emergency events where analytical data will be needed urgently as a basis for protective action. Given the simplicity and rapidity of the method, it may be suitable for field portable laboratories, which could save time and the cost associated with the transit of samples to a fixed laboratory. A 100 mL aliquot of sample was spiked with ¹³³Ba and ⁵⁹Fe tracers and subjected to a chemical separation procedure using a combined BaSO4 and Fe(OH)3 co-precipitation scheme. Then, the gross alpha-particle activity of the prepared sample was measured with a low-background gas-proportional counter, followed by the analysis of its photon-emitters using a gamma-ray spectroscopy system with high-purity intrinsic Ge detectors. Gamma-ray determination of ¹³³Ba and ⁵⁹Fe tracers was used to assess the chemical recoveries of BaSO4 and Fe(OH)3 fractions, respectively. Selectivity of the radionuclides for co-precipitation with either BaSO4 or Fe(OH)3 components was also investigated. Alpha mass-efficiency curves were derived using ²³⁰Th and ²⁴¹Am standards as alpha-calibration sources. Various mixtures of radionuclides, including ⁵⁴Mn, ⁵⁷Co, ⁶⁰Co, ⁸⁵Sr, ⁸⁸Y, ¹⁰⁹Cd, ¹¹³Sn, ¹³⁷Cs, ¹³⁹Ce, ²⁰³Hg, ²⁰⁹Po, ²²⁶Ra, ²²⁸Ra, ²³⁰Th, ²⁴¹Am, and natural uranium were used in this study. Most were quantitatively assayed with high chemical recoveries. Alpha-isotope identification and assessment of the prepared
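
    The tracer-based yield correction at the heart of the procedure is straightforward: the recovered fraction of the spiked tracer gives the chemical yield, which scales the measured analyte activity. The activities below are illustrative:

```python
# Tracer-based chemical-recovery correction: the fraction of the spiked 133Ba
# tracer found in the BaSO4 fraction gives the chemical yield, which scales the
# measured analyte activity back to 100% recovery. Activities are illustrative.
def recovery_corrected_bq(measured_bq, tracer_added_bq, tracer_found_bq):
    chemical_yield = tracer_found_bq / tracer_added_bq
    return measured_bq / chemical_yield

corrected = recovery_corrected_bq(measured_bq=4.2, tracer_added_bq=50.0,
                                  tracer_found_bq=42.0)
print(corrected)  # 4.2 Bq measured at 84% yield -> 5.0 Bq
```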

  5. Influence of the isolation procedure on coriander leaf volatiles with some correlation to the enzymatic activity.

    PubMed

    To Quynh, Cung Thi; Iijima, Yoko; Kubota, Kikue

    2010-01-27

Coriander leaves (Coriandrum sativum L.) have become popular worldwide because of their pleasant and delicate aroma. With a hot-water extraction method, in which coriander leaves were cut before being suspended in boiling water for 2 min, the contents of the main volatile compounds, such as the alkanals and 2-alkenals from C10 to C14, decreased, while the levels of the corresponding alcohols increased in comparison to those obtained by solvent extraction. To investigate the reasons for this variation, enzyme activities were assayed. Using an aliphatic aldehyde as the substrate and NADPH as the coenzyme, strong aliphatic aldehyde reductase activity was found for the first time in this herb over the relatively wide pH range of 5.0-9.0, with maximum activity at pH 8.5. Additionally, the aliphatic aldehyde dehydrogenase responsible for acid formation was found to have relatively weak activity compared to that of the reductase.
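The NADPH-linked assay mentioned in this abstract is conventionally quantified via the Beer-Lambert law from the absorbance change at 340 nm. The sketch below shows that standard calculation; the molar absorption coefficient of NADPH at 340 nm (6220 M⁻¹ cm⁻¹) is a textbook value, while the rate and volume are invented examples, not data from the paper.

```python
# Standard NADPH-consumption rate calculation (Beer-Lambert law).
# epsilon(NADPH, 340 nm) = 6220 M^-1 cm^-1 is a textbook constant.

NADPH_EPSILON_340 = 6220.0  # M^-1 cm^-1

def reductase_activity_umol_per_min(delta_a340_per_min: float,
                                    assay_volume_ml: float,
                                    path_length_cm: float = 1.0) -> float:
    """NADPH consumed per minute (umol/min) in the assay cuvette."""
    molar_per_min = delta_a340_per_min / (NADPH_EPSILON_340 * path_length_cm)
    return molar_per_min * (assay_volume_ml / 1000.0) * 1e6

# An absorbance decrease of 0.1 per minute in a 1 mL cuvette:
print(round(reductase_activity_umol_per_min(0.1, 1.0), 4))
```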

  6. Vibro-acoustic analysis procedures for the evaluation of the sound insulation characteristics of agricultural machinery cabins

    NASA Astrophysics Data System (ADS)

    Desmet, W.; Pluymers, B.; Sas, P.

    2003-09-01

Over the last few years, customer demands regarding acoustic performance, along with the tightening of legal regulations on noise emission levels and human exposure to noise, have made noise and vibration properties important design criteria for agricultural machinery cabins. In this framework, both experimental analysis procedures for prototype testing and reliable numerical prediction tools for early design assessment are indispensable for an efficient optimization of cabin noise and vibration comfort. This paper discusses several numerical approaches, based on the finite element and boundary element methods, in terms of their practical use for airborne sound insulation predictions. To illustrate the efficiency and reliability of the various vibro-acoustic analysis procedures, the numerical procedures are applied to the case of a harvester driver's cabin and validated against experimental results.

  7. Finite Element Analysis of the Stability (Buckling and Post-Buckling) of Composite Laminated Structures: Well Established Procedures and Challenges

    NASA Astrophysics Data System (ADS)

    Pietropaoli, Elisa; Riccio, Aniello

    2012-02-01

Finite element procedures for the analysis of composite structures under compressive loads (buckling and post-buckling) are generally not covered in books because they are still considered an object of research, whereas researchers deem them well established and focus their papers on topics for a restricted audience. Regarding these procedures, a gap therefore exists in the literature between what researchers consider well established and what has already been published. The aim of this paper is to help close this gap by providing insight into linear and non-linear buckling analyses and into their use with composite structures. Both modelling and analysis considerations are presented and discussed, focusing on what can be considered best practice when dealing with this kind of problem. Applications (to a stiffened panel and to a wing box) are provided to demonstrate the effectiveness of the procedures presented.

  8. What is the right blood hematocrit preparation procedure for standards and quality control samples for dried blood spot analysis?

    PubMed

    Koster, Remco A; Alffenaar, Jan-Willem C; Botma, Rixt; Greijdanus, Ben; Touw, Daan J; Uges, Donald R A; Kosterink, Jos G W

    2015-01-01

The development of high-speed extraction and analysis methods for drugs and drugs of abuse in human matrices like blood, plasma, hair, saliva and dried blood spots often leads to improved procedures for preparation of standards and quality control samples, sample handling and validation. Two hematocrit preparation procedures for standards and quality control samples were evaluated in order to improve the quality of procedures for dried blood spot validation and analysis.

  9. [Ocra Method: development of a new procedure for analysis of multiple tasks subject to infrequent rotation].

    PubMed

    Occhipinti, E; Colombini, Daniela; Occhipinti, M

    2008-01-01

In the Ocra methods (Ocra index and Ocra Checklist), when computing the final indices (Ocra index or checklist score) in the case of more than one repetitive task, a "traditional" procedure was already proposed, the results of which could be defined as a "time-weighted average". This approach appears appropriate when considering rotations among tasks that are performed very frequently, for instance almost once every hour (or for shorter periods). However, when rotation among repetitive tasks is less frequent (i.e. once every 1.5 hours or more), the "time-weighted average" approach could result in an underestimation of the exposure level (as it practically flattens peaks of high exposure). For those scenarios an alternative approach based on the "most stressful task as minimum" might be more realistic. This latter approach has already been included in the NIOSH approach for multiple sequential lifting tasks and, given the recent availability in the Ocra method of more detailed duration multipliers (practically a different Du(M) for each step of one hour of duration of the repetitive task), it is now possible to define a particular procedure to compute the complex Ocra Multitask Index (cOCRA) and the complex Checklist Score (cCHESCO) for the analysis of two or more repetitive tasks when rotations are infrequent (every 1.5 hours or more). The result of this approach will be at least equal to the index of the most stressful task considered for its individual daily duration and at most equal to the index of the most stressful task when it is (only theoretically) considered as lasting for the overall daily duration of all examined repetitive tasks. The procedure is based on the following formula: Complex Ocra Multitask Index = ocra1(Dum1) + (Δocra1 × K), where 1, 2, 3, ..., N = repetitive tasks ordered by ocra index values (1 = highest; N = lowest), computed considering the respective real duration multipliers (Dum(i)); ocra1 = ocra index of
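The bounded behaviour this abstract describes (the multitask index lies between the most stressful task's index at its own duration and at the total daily duration) can be sketched as below. The decomposition of Δocra1 and the role of K are assumptions based only on the abstract's description; the function and variable names are illustrative, not from the paper.

```python
# Sketch of the complex OCRA multitask index as described in the abstract:
# cOCRA = ocra1(Dum1) + (delta_ocra1 x K), where delta_ocra1 is assumed to be
# the difference between the most stressful task's index evaluated at the
# total daily duration and at its own real duration, and K in [0, 1] weights
# the contribution of the remaining tasks.

def complex_ocra(ocra1_own_duration: float,
                 ocra1_total_duration: float,
                 k: float) -> float:
    """Index bounded by the two extremes the abstract describes."""
    if not 0.0 <= k <= 1.0:
        raise ValueError("K must lie in [0, 1]")
    delta = ocra1_total_duration - ocra1_own_duration
    return ocra1_own_duration + delta * k

# K = 0 reproduces the most stressful task at its real duration;
# K = 1 treats it as lasting the whole shift.
print(complex_ocra(3.5, 5.0, 0.0))  # 3.5
print(complex_ocra(3.5, 5.0, 1.0))  # 5.0
```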

  10. An integrated multi-scale risk analysis procedure for pluvial flooding

    NASA Astrophysics Data System (ADS)

    Tader, Andreas; Mergili, Martin; Jäger, Stefan; Glade, Thomas; Neuhold, Clemens; Stiefelmeyer, Heinz

    2016-04-01

Mitigation of or adaptation to the negative impacts of natural processes on society requires a better understanding of the spatio-temporal distribution not only of the processes themselves, but also of the elements at risk. Information on their values, exposures and vulnerabilities towards the expected impact magnitudes/intensities of the relevant processes is needed. GIS-supported methods are particularly useful for integrated spatio-temporal analyses of natural processes and their potential consequences. Pluvial floods are of particular concern for many parts of Austria. The overall aim of the present study is to calculate the hazards emanating from pluvial floods, to determine the exposure of given elements at risk, to determine their vulnerabilities towards given pluvial flood hazards and to analyze potential consequences in terms of monetary losses. The whole approach builds on data available on a national scale. We introduce an integrated, multi-scale risk analysis procedure with regard to pluvial flooding. Focusing on the risk to buildings, we firstly exemplify this procedure with a well-documented event in the city of Graz (Austria), in order to highlight the associated potentials and limitations. Secondly, we attempt to predict the possible consequences of pluvial flooding triggered by rainfall events with recurrence intervals of 30, 100 and 300 years. (i) We compute spatially distributed inundation depths using the software FloodArea. Infiltration capacity and surface roughness are estimated from the land cover units given by the official cadastre. Various assumptions are tested with regard to the inflow to the urban sewer system. (ii) Based on the inundation depths and the official building register, we employ a set of rules and functions to deduce the exposure, vulnerability and risk for each building. A risk indicator for each building, expressed as the expected damage associated with a given event, is derived by combining the building value and

  11. Comparison of four digestion procedures not requiring perchloric acid for the trace-element analysis of plant material

    SciTech Connect

    Knight, M. J.

    1980-05-01

Perchloric acid (HClO₄) is often used to destroy organic material contained in plant tissue during sample preparation for trace-element analysis. However, since perchloric acid is an extremely strong oxidizing agent that can cause fire and explosion when in contact with combustible materials, its use is best avoided when proper safety equipment and training are unavailable. A comparison was made of four digestion procedures that do not require perchloric acid: wet digestion with nitric and sulfuric acids; wet digestion with nitric acid alone; a repeated wet digestion with nitric acid; and direct dry ashing. Each procedure was used to digest National Bureau of Standards orchard leaves (SRM 1571). To investigate the effect of possible filter-paper adsorption on the determination of trace elements, digested samples were either filtered or not filtered before analysis. Atomic absorption spectrophotometry was employed to determine concentrations of As, Be, Cd, Cr, Cu, Fe, Mn, Mo, Ni, Pb, Sr, and Zn in each digested sample. Recoveries of each element and the relative error of each determination for each digestion procedure were then calculated. A statistical analysis of these data indicates that the direct dry ashing procedure is best suited for multi-element analysis. Dry ashing is appropriate to recover As, Be, Cr, Cu, Fe, Mn, Mo, Pb, and Zn. The nitric-sulfuric acids, nitric acid, and repeated nitric acid digestion procedures were deemed poor for multi-element analysis; however, each proved useful for the recovery of certain individual elements, including Cd, Pb, and Zn. Sample filtration significantly (p ≤ 0.05) lowered the recovery of Cr, Mn, Pb, and Zn from the digested samples. Conversely, the recovery of As, Mo, and Sr was significantly (p ≤ 0.05) higher in samples filtered before analysis when compared to the recovery of these elements in unfiltered samples.

  12. RECOMMENDED OPERATING PROCEDURE NO. 2.3: SAMPLING AND ANALYSIS OF TOTAL HYDROCARBONS FROM SOURCES BY CONTINUOUS EMISSION MONITOR

    EPA Science Inventory

The report is a recommended operating procedure (ROP) prepared for use in research activities conducted by EPA's Air and Energy Engineering Research Laboratory (AEERL). The described method is applicable to the continuous measurement of total hydrocarbons (THCs), also known as tot...

  13. CV 990 interface test and procedure analysis of the monkey restraint, support equipment, and telemetry electronics proposed for Spacelab

    NASA Technical Reports Server (NTRS)

    Newsom, B. D.

    1978-01-01

    A biological system proposed to restrain a monkey in the Spacelab was tested under operational conditions using typical metabolic and telemetered cardiovascular instrumentation. Instrumentation, interfaced with other electronics, and data gathering during a very active operational mission were analyzed for adequacy of procedure and success of data handling by the onboard computer.

  14. Quantitative Content Analysis Procedures to Analyse Students' Reflective Essays: A Methodological Review of Psychometric and Edumetric Aspects

    ERIC Educational Resources Information Center

    Poldner, E.; Simons, P. R. J.; Wijngaards, G.; van der Schaaf, M. F.

    2012-01-01

    Reflective essays are a common way to develop higher education students' reflection ability. Researchers frequently analyse reflective essays based on quantitative content analysis procedures (QCA). However, the quality criteria that should be met in QCA are not straightforward. This article aims to: (1) develop a framework of quality requirements…

  15. 78 FR 32558 - Expedited Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-31

    ... earlier expedited methods approval action (74 FR 38348, August 3, 2009) (USEPA 2009b). Both EPA Methods... Water Act; Analysis and Sampling Procedures. 74 FR 38348. August 3, 2009. USEPA. 2013. EPA Method 524.4...). 1997b. Standard Method 2320 B-97. Alkalinity. Titration Method. Approved by Standard Methods...

  16. 76 FR 37014 - Expedited Approval of Alternative Test Procedures for the Analysis of Contaminants Under the Safe...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-24

    ... expedited methods approval action for determining dalapon in drinking water (75 FR 32295, June 8, 2010... the Safe Drinking Water Act; Analysis and Sampling Procedures. 75 FR 32295. June 8, 2010. List of... Alkalinity of Water. Method B--Electrometric or Color-Change Titration. ASTM International, 100 Barr...

  17. Exploratory Bifactor Analysis of the WJ-III Cognitive in Adulthood via the Schmid-Leiman Procedure

    ERIC Educational Resources Information Center

    Dombrowski, Stefan C.

    2014-01-01

    The Woodcock-Johnson-III cognitive in the adult time period (age 20 to 90 plus) was analyzed using exploratory bifactor analysis via the Schmid-Leiman orthogonalization procedure. The results of this study suggested possible overfactoring, a different factor structure from that posited in the Technical Manual and a lack of invariance across both…

  18. ANALYSIS OF TRACE-LEVEL ORGANIC COMBUSTION PROCESS EMISSIONS USING NOVEL MULTIDIMENSIONAL GAS CHROMATOGRAPHY-MASS SPECTROMETRY PROCEDURES

    EPA Science Inventory

    The paper discusses the analysis of trace-level organic combustion process emissions using novel multidimensional gas chromatography-mass spectrometry (MDGC-MS) procedures. It outlines the application of the technique through the analyses of various incinerator effluent and produ...

  19. Investigating the Structure of the WJ-III Cognitive in Early School Age through Two Exploratory Bifactor Analysis Procedures

    ERIC Educational Resources Information Center

    Dombrowski, Stefan C.

    2014-01-01

    Two exploratory bifactor methods (e.g., Schmid-Leiman [SL] and exploratory bifactor analysis [EBFA]) were used to investigate the structure of the Woodcock-Johnson III (WJ-III) Cognitive in early school age (age 6-8). The SL procedure is recognized by factor analysts as a preferred method for EBFA. Jennrich and Bentler recently developed an…

  20. A semi-active milling procedure in view of preparing implantation beds in robot-assisted orthopaedic surgery.

    PubMed

    Van Ham, G; Denis, K; Vander Sloten, J; Van Audekercke, R; Van der Perre, G; De Schutter, J; Simon, J P; Fabry, G

    2005-05-01

    Bone cutting in total joint reconstructions requires a high accuracy to obtain a well-functioning and long-lasting prosthesis. Hence robot assistance can be useful to increase the precision of the surgical actions. A drawback of current robot systems is that they autonomously machine the bone, in that way ignoring the surgeon's experience and introducing a safety risk. This paper presents a semi-active milling procedure to overcome that drawback. In this procedure the surgeon controls robot motion by exerting forces on a force-controlled lever that is attached to the robot end effector. Meanwhile the robot constrains tool motion to the planned motion and generates a tool feed determined by the feed force that the surgeon executes. As a case study the presented milling procedure has been implemented on a laboratory set-up for robot-assisted preparation of the acetabulum in total hip arthroplasty. Two machining methods have been considered. In the first method the surgeon determines both milling trajectory and feed by the forces that he/she executes on the force-controlled lever. In the second method the cavity is machined contour by contour, and the surgeon only provides the feed. Machining experiments have shown that the first method results in large surface irregularities and is not useful. The second method, however, results in accurate cavity preparation and has therefore potential to be implemented in future robot systems.

  1. Procedure for Tooth Contact Analysis of a Face Gear Meshing With a Spur Gear Using Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Bibel, George; Lewicki, David G. (Technical Monitor)

    2002-01-01

A procedure was developed to perform tooth contact analysis between a face gear meshing with a spur pinion using finite element analysis. The face gear surface points from a previous analysis were used to create a connected tooth solid model without gaps or overlaps. The face gear surface points were used to create a five-tooth face gear Patran model (with rim) using Patran PCL commands. These commands were saved in a series of session files suitable for Patran input. A four-tooth spur gear that meshes with the face gear was designed and constructed with Patran PCL commands. These commands were also saved in session files suitable for Patran input. The orientation of the spur gear required for meshing with the face gear was determined. The required rotations and translations are described and built into the session file for the spur gear. The Abaqus commands for three-dimensional meshing were determined and verified for a simplified model containing one spur tooth and one face gear tooth. The boundary conditions, loads, and weak spring constraints were determined to make the simplified model work. The load steps and load increments to establish contact and obtain a realistic load were determined for the simplified two-tooth model. Contact patterns give some insight into the required mesh density. Building the two gears in two different local coordinate systems and rotating the local coordinate systems was verified as an easy way to roll the gearset through mesh. Due to limitations of swap space, disk space, and the time constraints of the summer period, the larger model was not completed.

  2. Activity Analysis and Cost Analysis in Medical Schools.

    ERIC Educational Resources Information Center

    Koehler, John E.; Slighton, Robert L.

    There is no unique answer to the question of what an ongoing program costs in medical schools. The estimates of program costs generated by classical methods of cost accounting are unsatisfactory because such accounting cannot deal with the joint production or joint cost problem. Activity analysis models aim at calculating the impact of alternative…

  3. Comprehensive procedural approach for transferring or comparative analysis of analogue IP building blocks towards different CMOS technologies

    NASA Astrophysics Data System (ADS)

    Gevaert, Dorine M.

    2009-05-01

The challenges for the next generation of integrated-circuit design of analogue and mixed-signal building blocks in standard CMOS technologies for signal conversion demand research progress in the emerging scientific fields of device physics and modelling, converter architectures, design automation, quality assurance and cost-factor analysis. Estimating mismatch for analogue building blocks at the conceptual level, and its impact on active area, is not a straightforward calculation. The proposed design concepts reduce the over-sizing of transistors by 15 to 20% compared with existing methods, for the same quality specification. Besides the reduction in silicon cost, the design-time cost for new topologies is also reduced considerably. Comparisons have been made for current-mode converters (ADC and DAC), focussing on downscaling technologies. The developed method offers an integrated approach to the estimation of architecture performance, yield and IP reuse. Matching energy remains constant over process generations and will be the limiting factor for current signal processing. With a comprehensive understanding of all sources of mismatch, the use of physically based mismatch modelling in the prediction of mismatch errors, and more adequate and realistic sizing of all transistors, an overall area reduction of analogue IP blocks will result. For each technology, the following design curves are automatically developed: noise curves for a specified signal bandwidth, choice of overdrive voltage versus lambda and output resistance, and physical mismatch-error modelling at target current levels. The procedural approach shares the knowledge in these design curves and speeds up the design time considerably.

  4. From Laser Scanning to Finite Element Analysis of Complex Buildings by Using a Semi-Automatic Procedure.

    PubMed

    Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro

    2015-01-01

    In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
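The core step this abstract describes, turning a point cloud into a solid made of voxel elements, can be sketched with a simple occupancy grid. This is a minimal illustration under stated assumptions (uniform grid, invented point data), not the authors' implementation.

```python
# Minimal sketch of the voxelisation idea: a point cloud, treated as stacked
# sections, is discretised onto a regular grid and each occupied cell becomes
# a voxel (a candidate hexahedral finite element). Grid size and point data
# are illustrative assumptions.

from math import floor

def voxelize(points, voxel_size):
    """Map each (x, y, z) point to an integer grid index; the set of
    occupied cells is the voxel model."""
    return {(floor(x / voxel_size),
             floor(y / voxel_size),
             floor(z / voxel_size)) for x, y, z in points}

cloud = [(0.12, 0.0, 2.4), (0.18, 0.05, 2.45), (1.3, 0.9, 0.1)]
voxels = voxelize(cloud, 0.5)
print(len(voxels))  # the first two points share a cell -> 2 voxels
```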

  7. Safety and Performance Analysis of the Non-Radar Oceanic/Remote Airspace In-Trail Procedure

    NASA Technical Reports Server (NTRS)

    Carreno, Victor A.; Munoz, Cesar A.

    2007-01-01

    This document presents a safety and performance analysis of the nominal case for the In-Trail Procedure (ITP) in a non-radar oceanic/remote airspace. The analysis estimates the risk of collision between the aircraft performing the ITP and a reference aircraft. The risk of collision is only estimated for the ITP maneuver and it is based on nominal operating conditions. The analysis does not consider human error, communication error conditions, or the normal risk of flight present in current operations. The hazards associated with human error and communication errors are evaluated in an Operational Hazards Analysis presented elsewhere.

  8. Production of 191Pt radiotracer with high specific activity for the development of preconcentration procedures

    NASA Astrophysics Data System (ADS)

    Parent, M.; Strijckmans, K.; Cornelis, R.; Dewaele, J.; Dams, R.

    1994-04-01

    A radiotracer of Pt with suitable nuclear characteristics and high specific activity (i.e. activity to mass ratio) is a powerful tool when developing preconcentration methods for the determination of base-line levels of Pt in e.g. environmental and biological samples. Two methods were developed for the production of 191Pt with high specific activity and radionuclidic purity: (1) via the 190Pt(n, γ) 191Pt reaction by neutron irradiation of enriched Pt in a nuclear reactor at high neutron fluence rate and (2) via the 191Ir(p, n) 191Pt reaction by proton irradiation of natural Ir with a cyclotron, at an experimentally optimized proton energy. For the latter method it was necessary to separate Pt from the Ir matrix. For that reason either liquid-liquid extraction with dithizone or adsorption chromatography were used. The yields, the specific activities and the radionuclidic purities were experimentally determined as a function of the proton energy and compared to the former method. The half-life of 191Pt was accurately determined to be 2.802 ± 0.025 d.
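The decay arithmetic behind a half-life determination like the one reported here follows the standard exponential law. The 2.802 d half-life is the value from the abstract; the other numbers below are invented examples.

```python
# A(t) = A0 * exp(-lambda * t), with lambda = ln 2 / T_half.

from math import exp, log

T_HALF_PT191_D = 2.802  # days, as reported in the abstract

def decay_constant(t_half: float) -> float:
    return log(2) / t_half

def activity_after(a0: float, t_days: float,
                   t_half: float = T_HALF_PT191_D) -> float:
    """Remaining activity after t_days of decay."""
    return a0 * exp(-decay_constant(t_half) * t_days)

# After one half-life, half the activity remains:
print(round(activity_after(100.0, T_HALF_PT191_D), 1))  # 50.0
```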

  9. Processes, Procedures, and Methods to Control Pollution Resulting from Silvicultural Activities.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Washington, DC. Office of Water Programs.

    This report presents brief documentation of silvicultural practices, both those now in use and those in stages of research and development. A majority of the text is concerned with the specific aspects of silvicultural activities which relate to nonpoint source pollution control methods. Analyzed are existing and near future pollution control…

  10. Easy Green: A Handbook of Earth-Smart Activities and Operating Procedures for Youth Programs.

    ERIC Educational Resources Information Center

    Westerman, Marty

    This book aims to help camp directors and programmers evaluate the environmental impact of camp practices, make informed environmental choices, and make environmental awareness a habit in all operations and activities. Section 1 discusses developing a personal environmental philosophy, and considering possibilities for camp environmental action in…

  11. CTEPP STANDARD OPERATING PROCEDURE FOR VIDEOTAPING CHILD ACTIVITIES (SOP-2.23)

    EPA Science Inventory

    This SOP describes the method for videotaping a preschool child at a home. The CTEPP main study will collect multimedia samples and questionnaire data at the homes of participants (adults and children) during 48-hr sampling periods. Videotaping the activities of 10% of these chi...

  12. Spline-based procedures for dose-finding studies with active control

    PubMed Central

    Helms, Hans-Joachim; Benda, Norbert; Zinserling, Jörg; Kneib, Thomas; Friede, Tim

    2015-01-01

In a dose-finding study with an active control, several doses of a new drug are compared with an established drug (the so-called active control). One goal of such studies is to characterize the dose–response relationship and to find the smallest target dose concentration d*, which leads to the same efficacy as the active control. For this purpose, the intersection point of the mean dose–response function with the expected efficacy of the active control has to be estimated. The focus of this paper is a cubic spline-based method for deriving an estimator of the target dose without assuming a specific dose–response function. Furthermore, the construction of a spline-based bootstrap CI is described. Estimator and CI are compared with other flexible and parametric methods such as linear spline interpolation as well as maximum likelihood regression in simulation studies motivated by a real clinical trial. Also, design considerations for the cubic spline approach with focus on bias minimization are presented. Although the spline-based point estimator can be biased, designs can be chosen to minimize and reasonably limit the maximum absolute bias. Furthermore, the coverage probability of the cubic spline approach is satisfactory, especially for bias minimal designs.
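The target-dose idea in this abstract, finding the smallest dose d* at which the dose-response curve reaches the active control's efficacy, can be sketched as below. For brevity the sketch uses the linear spline interpolation that the paper compares against, not the cubic-spline estimator itself, and the data are invented.

```python
# Sketch: smallest dose where a piecewise-linear dose-response interpolant
# first reaches the active control's efficacy. Data are invented examples.

def target_dose(doses, means, control_efficacy):
    """Smallest dose at which the piecewise-linear response first crosses
    control_efficacy; None if it never does (doses assumed sorted)."""
    for (d0, y0), (d1, y1) in zip(zip(doses, means),
                                  zip(doses[1:], means[1:])):
        if (y0 - control_efficacy) * (y1 - control_efficacy) <= 0 and y0 != y1:
            # linear interpolation for the crossing point
            return d0 + (control_efficacy - y0) * (d1 - d0) / (y1 - y0)
    return None

doses = [0, 10, 20, 40]
means = [0.0, 0.8, 1.5, 2.0]   # increasing mean response
print(target_dose(doses, means, 1.0))  # crossing between 10 and 20
```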

  13. Auditing quality control procedures in a chemical pathology laboratory--a multiple regression analysis.

    PubMed

    Tillyer, C R; Gobin, P T; Ray, A K; Rimanova, H

    1992-07-01

    We undertook a retrospective analysis of the monthly test rejection rates and the monthly external quality assessment scheme performance indices for our laboratory's two automated analysers, and examined the association of these variables with measures of laboratory workload, manpower, staff training, instrument servicing, seasonal and temporal factors and changes of calibration, method and assigned internal quality control values. Using multiple linear regression and stepwise multiple linear regression, we found that test rejection rates differed significantly between instruments, and were highest on the instrument performing the widest variety and lowest volume of tests. On that instrument, rejection rates were significantly associated with the introduction of new staff and laboratory manpower levels, and also showed a highly significant trend upwards over the study period, independent of the effects of the other variables examined. External quality assessment scheme performance indices showed small trends over the study period. They were not related to the test rejection rates on either analyser but also showed a significant association with the introduction of new staff and a small but significant association with laboratory workload. We conclude that the training and introduction of new staff and decreased laboratory manpower levels may significantly increase the level of test rejection, and adherence to appropriate quality control protocols effectively maintains the quality of the laboratory's results, but may not be completely successful in filtering out the effects of some assignable causes of variation in test results. It is suggested that clinical laboratories use the statistical approach adopted here to identify factors which may be adversely affecting quality performance and running costs and to provide evidence that quality control procedures are both cost- and quality-effective.
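The audit approach this abstract describes, regressing monthly rejection rates on workload and temporal factors, reduces to ordinary least squares. The sketch below shows a single-predictor trend fit for brevity (the study used multiple and stepwise multiple linear regression); all data are invented.

```python
# Sketch: OLS trend of a monthly test-rejection rate over time. A rising
# slope would flag the kind of upward trend reported in the abstract.
# Single predictor only, for brevity; data are invented examples.

def ols_slope_intercept(x, y):
    """Closed-form simple linear regression (least squares)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

months = [1, 2, 3, 4, 5, 6]
reject_rate = [1.0, 1.1, 1.3, 1.2, 1.5, 1.6]  # % of tests rejected
slope, intercept = ols_slope_intercept(months, reject_rate)
print(round(slope, 3))  # positive slope -> rejection rate rising over time
```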

  14. Utilisation of Blood Components in Cardiac Surgery: A Single-Centre Retrospective Analysis with Regard to Diagnosis-Related Procedures

    PubMed Central

    Geissler, Raoul Georg; Rotering, Heinrich; Buddendick, Hubert; Franz, Dominik; Bunzemeier, Holger; Roeder, Norbert; Kwiecien, Robert; Sibrowski, Walter; Scheld, Hans H.; Martens, Sven; Schlenke, Peter

    2015-01-01

Background More blood components are required in cardiac surgery than in most other medical disciplines. The overall blood demand may increase as a function of the total number of cardiothoracic and vascular surgical interventions and their level of complexity, and also in view of demographic ageing. Awareness has grown with respect to adverse events, such as transfusion-related immunomodulation by allogeneic blood supply, which can contribute to morbidity and mortality. Therefore, programmes of patient blood management (PBM) have been implemented to avoid unnecessary blood transfusions and to standardise the indications for blood transfusion more strictly, with the aim of improving patients' overall outcomes. Methods A comprehensive retrospective analysis of the utilisation of blood components in the Department of Cardiac Surgery at the University Hospital of Münster (UKM) was performed over a 4-year period. Based on a medical reporting system covering all medical disciplines, which was established as part of a PBM initiative, all transfused patients in cardiac surgery and their blood components were identified in a diagnosis- and medical procedure-related system, which allows the precise allocation of blood consumption to interventional procedures in cardiac surgery, such as coronary or valve surgery. Results This retrospective single-centre study included all in-patients in cardiac surgery at the UKM from 2009 to 2012, corresponding to a total of 1,405-1,644 cases per year. A blood supply was provided for 55.6-61.9% of the cardiac surgery patients, whereas approximately 9% of all in-patients at the UKM required blood transfusions. Most of the blood units were applied during cardiac valve surgery and coronary surgery. Further surgical activities with considerable use of blood components included thoracic surgery, aortic surgery, heart transplantations and the use of artificial hearts. Under the measures of PBM in 2012 a noticeable decrease in the number of

  15. The effect of standard heat and filtration processing procedures on antimicrobial activity and hydrogen peroxide levels in honey

    PubMed Central

    Chen, Cuilan; Campbell, Leona T.; Blair, Shona E.; Carter, Dee A.

    2012-01-01

    There is increasing interest in the antimicrobial properties of honey. In most honey types, antimicrobial activity is due to the generation of hydrogen peroxide (H2O2), but this can vary greatly among samples. Honey is a complex product and other components may modulate activity, which can be further affected by commercial processing procedures. In this study we examined honey derived from three native Australian floral sources that had previously been associated with H2O2-dependent activity. Antibacterial activity was seen in four red stringybark samples only, and ranged from 12 to 21.1% phenol equivalence against Staphylococcus aureus. Antifungal activity ranged from MIC values of 19–38.3% (w/v) against Candida albicans, and all samples were significantly more active than an osmotically equivalent sugar solution. All honey samples were provided unprocessed and following commercial processing. Processing was usually detrimental to antimicrobial activity, but occasionally the reverse was seen and activity increased. H2O2 levels varied from 0 to 1017 μM, and although samples with no H2O2 had little or no antimicrobial activity, some samples had relatively high H2O2 levels yet no antimicrobial activity. In samples where H2O2 was detected, the correlation with antibacterial activity was greater in the processed than in the unprocessed samples, suggesting other factors present in the honey influence this activity and are sensitive to heat treatment. Antifungal activity did not correlate with the level of H2O2 in honey samples, and overall it appeared that H2O2 alone was not sufficient to inhibit C. albicans. We conclude that floral source and H2O2 levels are not reliable predictors of the antimicrobial activity of honey, which currently can only be assessed by standardized antimicrobial testing. Heat processing should be reduced where possible, and honey destined for medicinal use should be retested post-processing to ensure that activity levels have not changed

  16. Effect of different detoxification procedures on the residual pertussis toxin activities in vaccines.

    PubMed

    Yuen, Chun-Ting; Asokanathan, Catpagavalli; Cook, Sarah; Lin, Naomi; Xing, Dorothy

    2016-04-19

Pertussis toxin (PTx) is a major virulence factor produced by Bordetella pertussis, and its detoxified form is one of the major protective antigens in vaccines against whooping cough. Ideally, PTx in the vaccine should be completely detoxified while still preserving immunogenicity; however, this may not always be the case. Because the multilevel reaction mechanisms of chemical detoxification act on different molecular sites, and because production processes differ, it is difficult to define a molecular characteristic of a pertussis toxoid. PTx has two functionally distinct domains: the ADP-ribosyltransferase enzymatic subunit S1 (A-protomer) and the carbohydrate-binding subunits S2-5 (B-oligomer), which mediate host cell binding. In this study, we investigated the effect of different detoxification processes on these two functional activities of the residual PTx in toxoids and vaccines currently marketed worldwide, using a recently developed in vitro biochemical assay system. The patho-physiological activities in these samples were also estimated using the official in vivo histamine sensitisation tests. Different types of vaccines, detoxified by formaldehyde, glutaraldehyde or both, have different residual functional and individual baseline activities. Of the vaccines tested, PT toxoid detoxified by formaldehyde had the lowest residual PTx ADP-ribosyltransferase activity. The carbohydrate-binding results, detected with anti-PTx polyclonal (pAb) and anti-PTx subunit monoclonal antibodies (mAb), showed specific binding profiles for toxoids and vaccines produced by the different detoxification methods. In addition, we demonstrated that using pAb or mAb S2/3 as the detection antibody differentiates these vaccine lots better than using mAbs S1 or S4. In summary, we showed for the first time that by measuring the activities of the two functional domains of PTx, we could characterise pertussis toxoids prepared from different chemical detoxification

  17. New electrochemical procedure for obtaining surface enhanced Raman scattering active polythiophene films on platinum

    NASA Astrophysics Data System (ADS)

    Bazzaoui, E. A.; Aeiyach, S.; Aubard, J.; Felidj, N.; Lévi, G.; Sakmeche, N.; Lacaze, P. C.

    1998-06-01

A new electrochemical procedure for obtaining Surface Enhanced Raman Scattering (SERS) spectra of silver-island/polybithiophene composite films is described. During the electropolymerization process, which uses a micellar aqueous solution of silver dodecylsulfate mixed with bithiophene and LiClO4, silver cations are reduced, giving metallic silver particles embedded within the polybithiophene (PbT) film. Both doped and undoped PbT species display SERS spectra with enhancement factors varying between 40 and 200 with respect to the film prepared in sodium dodecylsulfate. Vibrational characterization of both doped and undoped species shows that polymer structural defects are more abundant in the oxidized species than in the reduced ones. This general method makes it possible to synthesize various polymeric films displaying the SERS effect and appears very promising for the structural study of these materials.

  18. Activation of the vastus medialis oblique and vastus lateralis muscles in asymptomatic subjects during the sit-to-stand procedure

    PubMed Central

    Choi, Boram

    2015-01-01

[Purpose] The purpose of this study was to examine the vastus medialis oblique to vastus lateralis ratio in two pelvic tilt positions while performing the sit-to-stand task. [Subjects and Methods] Activation of the vastus medialis oblique and vastus lateralis muscles of 46 healthy subjects (25 males, 21 females) was recorded by surface electromyography during the sit-to-stand task in the anterior pelvic tilt position (sitting with the thoracolumbar spine extended and the pelvis in an anterior tilt) and the neutral pelvic tilt position (sitting with the thoracolumbar spine relaxed and the pelvis in the neutral tilt position). Changes in vastus medialis oblique and vastus lateralis activation and in the vastus medialis oblique/vastus lateralis ratio were analyzed. [Results] Vastus medialis oblique and vastus lateralis muscle activation significantly increased in the neutral pelvic tilt position, but the vastus medialis oblique/vastus lateralis ratio was not statistically different. [Conclusion] The sit-to-stand procedure in the neutral pelvic tilt position increased activation of the vastus medialis oblique and vastus lateralis, usefully strengthening the quadriceps, but did not selectively activate the vastus medialis oblique muscle. PMID:25931753

  19. Neutron activation analysis; A sensitive test for trace elements

    SciTech Connect

Hossain, T.Z. (Ward Lab.)

    1992-01-01

This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10¹² neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10⁻⁸) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.
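The quoted "tiny fraction of nuclei" figure follows directly from the activation law A(t) = N·φ·σ·(1 − e^(−λt)). A back-of-envelope sketch using the flux from the record; the sample mass, cross section and product half-life are assumed values for illustration only:

```python
import math

# Rough induced-activity estimate for NAA: A = N * phi * sigma * (1 - exp(-lambda*t)).
# Flux is the typical value quoted in the record; sample mass, capture cross
# section and product half-life are assumed for illustration.
N_A = 6.022e23
flux = 1e12                        # neutrons / cm^2 / s
sigma = 1.0e-24                    # capture cross section: 1 barn (assumed)
mass_g, molar_mass = 0.010, 60.0   # 10 mg sample (assumed composition)
half_life_s = 3600.0               # product half-life: 1 h (assumed)
t_irr = 2 * 3600.0                 # 2 h irradiation

N = mass_g / molar_mass * N_A                  # number of target nuclei
lam = math.log(2) / half_life_s                # decay constant of the product
activity_bq = N * flux * sigma * (1 - math.exp(-lam * t_irr))
fraction_transmuted = flux * sigma * t_irr     # consistent with the ~1e-8 in the text
print(f"A = {activity_bq:.2e} Bq; fraction transmuted = {fraction_transmuted:.1e}")
```

With these assumed values the transmuted fraction comes out near 10⁻⁸, matching the order of magnitude quoted in the abstract.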

  20. Analysis of regression methods for solar activity forecasting

    NASA Technical Reports Server (NTRS)

    Lundquist, C. A.; Vaughan, W. W.

    1979-01-01

The paper deals with the potential use of the most recent solar data to project trends in the next few years. Assuming that a mode of solar influence on weather can be identified, advantageous use of that knowledge presumably depends on estimating future solar activity. A frequently used technique for solar cycle predictions is a linear regression procedure along the lines formulated by McNish and Lincoln (1949). The paper presents a sensitivity analysis of the behavior of such regression methods with respect to the following aspects: cycle minimum, time into cycle, composition of the historical data base, and unnormalized vs. normalized solar cycle data. Comparative solar cycle forecasts for several past cycles are presented with respect to these aspects of the input data. Implications for the current cycle, No. 21, are also given.
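A McNish-Lincoln-style prediction regresses the next epoch's departure from the mean cycle on the current departure. A toy sketch of that step — all "cycles" here are fabricated for illustration, not solar data:

```python
import numpy as np

# Minimal McNish-Lincoln-style regression sketch on synthetic "cycles":
# predict the next epoch as the mean cycle plus a regression on the
# current departure from the mean.  All numbers are fabricated.
rng = np.random.default_rng(1)
n_cycles, n_epochs = 12, 20
mean_shape = 100 * np.sin(np.linspace(0, np.pi, n_epochs))   # idealized cycle
cycles = mean_shape + rng.normal(0, 10, (n_cycles, n_epochs))

mean_cycle = cycles.mean(axis=0)
t = 10                                           # current epoch within the cycle
dep_now = cycles[:, t] - mean_cycle[t]           # departures at epoch t
dep_next = cycles[:, t + 1] - mean_cycle[t + 1]  # departures at epoch t+1
k = (dep_now @ dep_next) / (dep_now @ dep_now)   # least-squares slope

observed = 95.0                                  # hypothetical current reading
forecast = mean_cycle[t + 1] + k * (observed - mean_cycle[t])
print(f"slope k = {k:.2f}, forecast for epoch {t + 1}: {forecast:.1f}")
```

The sensitivity questions the paper examines (cycle minimum, time into cycle, historical composition, normalization) amount to varying the inputs to exactly this kind of fit.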

  1. Genomic features of uncultured methylotrophs in activated-sludge microbiomes grown under different enrichment procedures.

    PubMed

    Fujinawa, Kazuki; Asai, Yusuke; Miyahara, Morio; Kouzuma, Atsushi; Abe, Takashi; Watanabe, Kazuya

    2016-01-01

    Methylotrophs are organisms that are able to grow on C1 compounds as carbon and energy sources. They play important roles in the global carbon cycle and contribute largely to industrial wastewater treatment. To identify and characterize methylotrophs that are involved in methanol degradation in wastewater-treatment plants, methanol-fed activated-sludge (MAS) microbiomes were subjected to phylogenetic and metagenomic analyses, and genomic features of dominant methylotrophs in MAS were compared with those preferentially grown in laboratory enrichment cultures (LECs). These analyses consistently indicate that Hyphomicrobium plays important roles in MAS, while Methylophilus occurred predominantly in LECs. Comparative analyses of bin genomes reconstructed for the Hyphomicrobium and Methylophilus methylotrophs suggest that they have different C1-assimilation pathways. In addition, function-module analyses suggest that their cell-surface structures are different. Comparison of the MAS bin genome with genomes of closely related Hyphomicrobium isolates suggests that genes unnecessary in MAS (for instance, genes for anaerobic respiration) have been lost from the genome of the dominant methylotroph. We suggest that genomic features and coded functions in the MAS bin genome provide us with insights into how this methylotroph adapts to activated-sludge ecosystems. PMID:27221669

  2. Tobacco Control: Visualisation of Research Activity Using Density-Equalizing Mapping and Scientometric Benchmarking Procedures

    PubMed Central

    Kusma, Bianca; Scutaru, Cristian; Quarcoo, David; Welte, Tobias; Fischer, Tanja C.; Groneberg-Kloft, Beatrix

    2009-01-01

    Background: Tobacco smoking continues to be a major preventable cause of death and disease and therefore tobacco control research is extremely important. However, research in this area is often hampered by a lack in funding and there is a need for scientometric techniques to display research efforts. Methods: The present study combines classical bibliometric tools with novel scientometric and visualizing techniques in order to analyse and categorise research in the field of tobacco control. Results: All studies related to tobacco control and listed in the ISI database since 1900 were identified by the use of defined search terms. Using bibliometric approaches, a continuous increase in qualitative markers such as collaboration numbers or citations were found for tobacco control research. The combination with density equalizing mapping revealed a distinct global pattern of research productivity and citation activity. Radar chart techniques were used to visualize bi- and multilateral research cooperation and institutional cooperation. Conclusions: The present study supplies a first scientometric approach that visualises research activity in the field of tobacco control. It provides data that can be used for funding policy and the identification of research clusters. PMID:19578464

  4. Neutron activation analysis of major, minor, and trace elements in marine sediments

    SciTech Connect

    Stone, S.F.; Zeisler, R.; Koster, B.J.

    1988-01-01

Neutron activation analysis (NAA) techniques are well established in the multielement assay of geological materials. Similarly, applications of NAA to the analysis of marine sediments have been described. The different emphasis on elemental composition in studying and monitoring the health of the environment, however, presents a new challenge to the analyst. To investigate as many elements as possible, previous multielement procedures need to be reevaluated and modified. In this work, the authors have utilized the NAA steps of a recently developed sequential analysis procedure that obtained concentrations for 45 biological and pollutant elements in marine bivalves. This procedure, with modification, was applied to samples of marine sediments collected for the National Oceanic and Atmospheric Administration (NOAA) National Status and Trends (NS&T) specimen banking program.

  5. Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.

    PubMed

    Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh

    2014-07-01

This study develops a procedure related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty associated with model structures of varying degrees of complexity. The procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). It first performs two-stage Monte-Carlo simulations to obtain behavioral parameter sets that ensure predictive accuracy, and then estimates the CV-values of the model outcomes, which represent the predictive uncertainties for the model structure of interest with its associated behavioral parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model, WWQM) were compared based on data collected from a free-water-surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This shows that predictive uncertainty does not necessarily increase with the complexity of the model structure: in this case, the simpler representation of reality (the first-order K-C model) results in higher uncertainty in the model's predictions. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management.
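The GLUE step underlying the CV-GLUE idea can be sketched as follows, with an assumed toy decay model standing in for the wetland models; the acceptance threshold and data are illustrative, not the study's:

```python
import numpy as np

# GLUE-style sketch behind CV-GLUE: sample parameters by Monte Carlo, keep
# "behavioral" sets whose fit error passes a threshold, then summarize the
# predictive spread as a coefficient of variation (CV).  The model, data,
# and threshold below are all assumed toy values.
rng = np.random.default_rng(2)

def model(k, x):
    return np.exp(-k * x)          # toy first-order decay stand-in

x_obs = np.linspace(0, 5, 25)
y_obs = model(0.7, x_obs) + rng.normal(0, 0.02, x_obs.size)   # synthetic data

k_samples = rng.uniform(0.1, 2.0, 5000)          # Monte-Carlo parameter draws
rmse = np.array([np.sqrt(np.mean((model(k, x_obs) - y_obs) ** 2))
                 for k in k_samples])
behavioral = k_samples[rmse < 0.05]              # acceptance threshold (assumed)

x_new = 2.0                                      # prediction point
preds = model(behavioral, x_new)
cv = preds.std() / preds.mean()                  # characteristic CV of predictions
print(f"{behavioral.size} behavioral sets, CV at x=2: {cv:.3f}")
```

Comparing this CV across competing model structures is the essence of the procedure the entry describes.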

  7. Influence of the derivatization procedure on the results of the gaschromatographic fatty acid analysis of human milk and infant formulae.

    PubMed

    Kohn, G; van der Ploeg, P; Möbius, M; Sawatzki, G

    1996-09-01

Many different analytical procedures for the fatty acid analysis of infant formulae and human milk have been described. The objective was to study possible pitfalls of different acid-catalyzed procedures compared to a base-catalyzed procedure based on sodium methoxide in methanol. The influence of the different methods on the relative fatty acid composition (wt% of total fatty acids) and the total fatty acid recovery rate (expressed as % of total lipids) was studied in two experimental LCP-containing formulae and a human milk sample. MeOH/HCl procedures were found to result in incomplete transesterification of triglycerides if an additional nonpolar solvent such as toluene or hexane is not added and a water-free preparation is not guaranteed. In infant formulae, the low transesterification of triglycerides (at most 37%) could result in a 100% overestimation of the relative amount of LCP if these fatty acids derive primarily from phospholipids, as is the case in infant formulae containing egg lipids as raw materials. In formulae containing fish oils, and in human milk, incomplete esterification yields incorrect absolute amounts of fatty acids but has no notable effect on the relative fatty acid distribution, because in these samples LCP are bound primarily to triglycerides. Furthermore, in formulae based on butterfat, the derivatization procedure should be designed so that losses of short-chain fatty acids during evaporation steps are avoided. The procedure based on sodium methoxide was found to give satisfactory (about 90%) conversion of formula lipids and reliable contents of all individual fatty acids. Because human milk may contain a high proportion of free fatty acids, which are not methylated by sodium methoxide, caution is expressed about the use of this reagent for fatty acid analysis of mother's milk. It is concluded that accurate fatty acid analysis of infant formulae and human milk requires a careful

  8. Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.

    2002-01-01

    A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.

  9. Learning, forgetting, and hospital quality: an empirical analysis of cardiac procedures in Maryland and Arizona.

    PubMed

    Sfekas, Andrew

    2009-06-01

    This paper sets out an empirical model of learning with forgetting and uses it to estimate how much hospital quality improves with experience. The size of the learning effect and the depreciation rate are estimated for two cardiac procedures in Maryland and Arizona. Models are estimated using patient survival as the outcome of interest. The results show that learning does not appear to be a factor in hospital quality for either procedure or for surgery generally. From a policy standpoint, based on these results, regulations in Maryland that seek to concentrate these two procedures among a small number of providers could not be justified on the grounds that higher volume would increase the quality of care.

  10. Mobile Energy Laboratory Procedures

    SciTech Connect

    Armstrong, P.R.; Batishko, C.R.; Dittmer, A.L.; Hadley, D.L.; Stoops, J.L.

    1993-09-01

Pacific Northwest Laboratory (PNL) has been tasked to plan and implement a framework for measuring and analyzing the efficiency of on-site energy conversion, distribution, and end-use application at federal facilities as part of its overall technical support to the US Department of Energy (DOE) Federal Energy Management Program (FEMP). The Mobile Energy Laboratory (MEL) Procedures establish guidelines for specific activities performed by PNL staff. PNL provided sophisticated energy monitoring, auditing, and analysis equipment for on-site evaluation of energy use efficiency. Specially trained engineers and technicians were provided to conduct tests in a safe and efficient manner with the assistance of host facility staff and contractors. Reports were produced to describe test procedures, results, and suggested courses of action. These reports may be used to justify changes in operating procedures, maintenance efforts, system designs, or energy-using equipment. The MEL capabilities can subsequently be used to assess the results of energy conservation projects. These procedures recognize the need for centralized MEL administration, test procedure development, operator training, and technical oversight. This need is evidenced by increasing requests for MEL use and the economies available from having trained, full-time MEL operators and near-continuous MEL operation. DOE will assign new equipment and upgrade existing equipment as new capabilities are developed. The equipment and trained technicians will be made available to federal agencies that provide funding for the direct costs associated with MEL use.

  11. Comparative Sensitivity Analysis of Muscle Activation Dynamics.

    PubMed

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
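Zajac's activation dynamics is essentially a first-order linear ODE, da/dt = (u − a)/τ. A central-difference estimate of the solution's sensitivity to τ illustrates, in miniature, the kind of quantity the paper's analysis compares; the time constant, input and initial condition below are assumed values, not the paper's:

```python
import math

# Central-difference parameter sensitivity for Zajac-style linear activation
# dynamics, da/dt = (u - a)/tau.  A simplified stand-in for the paper's
# sensitivity analysis; tau, u, a0 and T are assumed illustrative values.
def simulate(tau, u=1.0, a0=0.05, T=0.2, dt=1e-4):
    a = a0
    for _ in range(round(T / dt)):   # explicit Euler integration
        a += dt * (u - a) / tau
    return a

tau = 0.04                 # activation time constant, s (assumed)
h = 1e-6                   # finite-difference step in tau
sens = (simulate(tau + h) - simulate(tau - h)) / (2 * h)   # d a(T) / d tau
print(f"a(T) = {simulate(tau):.4f}, sensitivity da(T)/dtau = {sens:.2f}")
```

The negative sensitivity says that a slower time constant lowers the activation reached by time T, as expected; the analytic value here is (a0 − u)·e^(−T/τ)·T/τ² ≈ −0.80.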

  12. Neutron activation analysis in archaeological chemistry

    SciTech Connect

    Harbottle, G.

    1987-01-01

Neutron activation analysis has proven to be a convenient way of performing the chemical analysis of archaeologically excavated artifacts and materials. It is fast and does not require tedious laboratory operations. It is multielement, sensitive, and can be made nondestructive. Neutron activation analysis in its instrumental form, i.e., involving no chemical separation, is ideally suited to automation and conveniently takes the first step in data flow patterns that are appropriate for many taxonomic and statistical operations. The future will doubtless see improvements in the practice of NAA in general, but in connection with archaeological science the greatest change will be the filling, interchange and widespread use of data banks based on compilations of analytical data. Since provenience-oriented data banks deal with materials (obsidian, ceramics, metals, semiprecious stones, building materials and sculptural media) that participated in trade networks, the analytical data are certain to be of interest to a rather broad group of archaeologists. It is to the needs of the whole archaeological community that archaeological chemistry must now turn.

  13. Investigation of Deterioration Behavior of Hysteretic Loops in Nonlinear Static Procedure Analysis of Concrete Structures with Shear Walls

    SciTech Connect

    Ghodrati Amiri, G.; Amidi, S.; Khorasani, M.

    2008-07-08

In recent years, researchers have advanced the seismic rehabilitation of structures, shifting their viewpoint from sufficient strength to structural performance (performance-based design) in order to achieve safe designs. Nonlinear static procedure (NSP) analysis, or pushover analysis, is a method chosen for its speed and simplicity of calculation; the 'Seismic Rehabilitation Code for Existing Buildings' and FEMA 356 cover this method. The result of this analysis is a target displacement, which is the basis of the performance and rehabilitation procedure for the structure, so exact determination of that displacement would improve the usefulness of pushover analysis. At present, the nonlinear dynamic procedure (NDP) is the only method that can apply seismic ground motions exactly, but because it is time-consuming, costly, and more difficult than other methods, it is not used as widely as NSP. A coefficient used in NSP to determine the target displacement is C2 (the stiffness and strength degradation coefficient), which corrects the errors caused by neglecting stiffness and strength degradation in hysteretic loops. In this study, three concrete frames with shear walls were analyzed using several accelerograms scaled according to FEMA 273 and FEMA 356. The structures were designed to the Iranian 2800 standard (ver. 3). Finally, after pushover analysis and comparison of the results with dynamic analysis, the calculated C2 was compared with the values in the rehabilitation codes.
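The target displacement that NSP produces is computed in FEMA 356 by the coefficient method, δt = C0·C1·C2·C3·Sa·(Te/2π)²·g, with C2 the degradation coefficient discussed above. A sketch with illustrative coefficient values (assumed, not the study's):

```python
import math

# FEMA 356 coefficient-method target displacement:
#   delta_t = C0 * C1 * C2 * C3 * Sa * (Te / 2*pi)^2 * g
# C2 is the stiffness/strength-degradation coefficient discussed in the entry.
# All numeric inputs below are illustrative assumptions, not the study's values.
def target_displacement(C0, C1, C2, C3, Sa, Te, g=9.81):
    return C0 * C1 * C2 * C3 * Sa * (Te / (2 * math.pi)) ** 2 * g

delta_t = target_displacement(C0=1.3,   # roof/first-mode displacement factor
                              C1=1.0,   # inelastic/elastic displacement ratio
                              C2=1.1,   # stiffness & strength degradation
                              C3=1.0,   # P-delta amplification
                              Sa=0.6,   # spectral acceleration, in units of g
                              Te=0.8)   # effective fundamental period, s
print(f"target displacement = {delta_t:.3f} m")
```

Because δt scales linearly with C2, any error in modelling hysteretic degradation propagates directly into the displacement demand, which is why the study benchmarks C2 against nonlinear dynamic results.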

  14. Simple procedure for the synthesis of high specific activity tritiated (6S)-5-formyltetrahydrofolate

    SciTech Connect

    Moran, R.G.; Colman, P.D.

    1982-05-01

    The 5-position of tetrahydrofolate was found to be unusually reactive with low concentrations of formic acid in the presence of a water-soluble carbodiimide. The product of this reaction has neutral and acid ultraviolet spectra and chromatographic behavior consistent with its identity as 5-formyltetrahydrofolate (leucovorin). When enzymatically synthesized (6S)-tetrahydrofolate was used as starting material, the product supported the growth of folate-depleted L1210 cells at one-half the concentration required for authentic (6R,S)-leucovorin. This reaction has been used to produce high specific activity (44 Ci/mmol) (³H)(6S)-5-formyltetrahydrofolate in high yield. Experiments with (¹⁴C)formic acid indicate that 1 mol of formate reacted per mol of tetrahydrofolate but that no reaction occurred with a variety of other folate compounds. (6S)-5-Formyltetrahydrofolate, labeled in the formyl group with ¹⁴C, has also been synthesized using this reaction. These easily produced, labeled folates should allow close examination of the transport and utilization of leucovorin and of the mechanism of reversal of methotrexate toxicity by reduced folate cofactors.

  15. An Analysis of the Relationship Between Readability of Air Force Procedural Manuals and Discrepancies Involving Non-Compliance with the Procedures.

    ERIC Educational Resources Information Center

    Johnson, Keith H.; And Others

    Readability of Air Force logistics procedural manuals is generally too high for their readers. The readers, from different Air Force Specialties (AFS), are faced with a readability/reading ability gap when using the procedural manuals. This 'gap' was found to correlate directly with the frequency of discrepancies actually found over a two-year…

  16. To what extent are surgery and invasive procedures effective beyond a placebo response? A systematic review with meta-analysis of randomised, sham controlled trials

    PubMed Central

    Jonas, Wayne B; Crawford, Cindy; Colloca, Luana; Kaptchuk, Ted J; Moseley, Bruce; Miller, Franklin G; Kriston, Levente; Linde, Klaus; Meissner, Karin

    2015-01-01

    Objectives To assess the quantity and quality of randomised, sham-controlled studies of surgery and invasive procedures and estimate the treatment-specific and non-specific effects of those procedures. Design Systematic review and meta-analysis. Data sources We searched PubMed, EMBASE, CINAHL, CENTRAL (Cochrane Library), PILOTS, PsycInfo, DoD Biomedical Research, clinicaltrials.gov, NLM catalog and NIH Grantee Publications Database from their inception through January 2015. Study selection We included randomised controlled trials of surgery and invasive procedures that penetrated the skin or an orifice and had a parallel sham procedure for comparison. Data extraction and analysis Three authors independently extracted data and assessed risk of bias. Studies reporting continuous outcomes were pooled and the standardised mean difference (SMD) with 95% CIs was calculated using a random effects model for difference between true and sham groups. Results 55 studies (3574 patients) were identified meeting inclusion criteria; 39 provided sufficient data for inclusion in the main analysis (2902 patients). The overall SMD of the continuous primary outcome between treatment/sham-control groups was 0.34 (95% CI 0.20 to 0.49; p<0.00001; I2=67%). The SMD for surgery versus sham surgery was non-significant for pain-related conditions (n=15, SMD=0.13, p=0.08), marginally significant for studies on weight loss (n=10, SMD=0.52, p=0.05) and significant for gastroesophageal reflux disorder (GERD) studies (n=5, SMD=0.65, p<0.001) and for other conditions (n=8, SMD=0.44, p=0.004). Mean improvement in sham groups relative to active treatment was larger in pain-related conditions (78%) and obesity (71%) than in GERD (57%) and other conditions (57%), and was smaller in classical-surgery trials (21%) than in endoscopic trials (73%) and those using percutaneous procedures (64%). Conclusions The non-specific effects of surgery and other invasive procedures are generally large. Particularly in
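
The pooling behind figures such as the overall SMD of 0.34 (with its 95% CI and I²) can be sketched with a DerSimonian-Laird random-effects estimator. The three studies below are hypothetical, not data from the review:

```python
import math

def dersimonian_laird(smds, variances):
    """Pool standardised mean differences with a DerSimonian-Laird
    random-effects model. Returns (pooled SMD, 95% CI half-width,
    I^2 in percent)."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    k = len(smds)
    mean_fe = sum(wi * d for wi, d in zip(w, smds)) / sum(w)
    q = sum(wi * (d - mean_fe) ** 2 for wi, d in zip(w, smds))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * d for wi, d in zip(w_re, smds)) / sum(w_re)
    half_ci = 1.96 * math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, half_ci, i2

# Hypothetical sham-controlled trials (SMDs and sampling variances)
pooled, hw, i2 = dersimonian_laird([0.1, 0.4, 0.7], [0.02, 0.03, 0.02])
```

The I² statistic reported by the review (67%) is computed the same way: the share of Cochran's Q in excess of its degrees of freedom, i.e. the proportion of variability attributable to between-study heterogeneity rather than chance.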

  17. Analysis of low levels of rare earths by radiochemical neutron activation analysis

    USGS Publications Warehouse

    Wandless, G.A.; Morgan, J.W.

    1985-01-01

    A procedure for the radiochemical neutron-activation analysis of the rare earth elements (REE) involves the separation of the REE as a group by rapid ion-exchange methods and determination of yields by reactivation or by energy dispersive X-ray fluorescence (EDXRF) spectrometry. The U. S. Geological Survey (USGS) standard rocks, BCR-1 and AGV-1, were analyzed to determine the precision and accuracy of the method. We found that the precision was ±5-10% on the basis of replicate analyses and that, in general, the accuracy was within ±5% of accepted values for most REE. Data for USGS standard rocks BIR-1 (Icelandic basalt) and DNC-1 (North Carolina diabase) are also presented. © 1985 Akadémiai Kiadó.

  18. Analysis of DOE international environmental management activities

    SciTech Connect

    Ragaini, R.C.

    1995-09-01

    The Department of Energy's (DOE) Strategic Plan (April 1994) states that DOE's long-term vision includes world leadership in environmental restoration and waste management activities. The activities of the DOE Office of Environmental Management (EM) can play a key role in DOE's goals of maintaining U.S. global competitiveness and ensuring the continuation of a world-class science and technology community. DOE's interest in attaining these goals stems partly from its participation in organizations like the Trade Policy Coordinating Committee (TPCC), with its National Environmental Export Promotion Strategy, which seeks to strengthen U.S. competitiveness and the building of public-private partnerships as part of U.S. industrial policy. The International Interactions Field Office task will build a communication network to facilitate efficient and effective communication between DOE Headquarters, Field Offices, and contractors. Under this network, Headquarters will provide the Field Offices with information on the Administration's policies and activities (such as the DOE Strategic Plan) and interagency activities, as well as relevant information from other field offices. Lawrence Livermore National Laboratory (LLNL) will, in turn, provide Headquarters with information on various international activities which, when appropriate, will be included in reports to groups like the TPCC and the EM Focus Areas. This task provides for the collection, review, and analysis of information on the more significant international environmental restoration and waste management initiatives and activities which have been used or are being considered at LLNL. Information gathering will focus on efforts and accomplishments in meeting the challenges of providing timely and cost-effective cleanup of environmentally damaged sites and facilities, especially through international technical exchanges and/or the implementation of foreign-developed technologies.

  19. 7 CFR 201.51a - Special procedures for purity analysis.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... kinds. After completing the blowing procedure, remove all weed and other crop seeds from the light portion and add these to the weed or other crop separation, as appropriate. The remainder of the light portion shall be considered inert matter. Remove all weed and other crop seeds and other inert...

  20. 7 CFR 201.51a - Special procedures for purity analysis.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... kinds. After completing the blowing procedure, remove all weed and other crop seeds from the light portion and add these to the weed or other crop separation, as appropriate. The remainder of the light portion shall be considered inert matter. Remove all weed and other crop seeds and other inert...

  1. 7 CFR 201.51a - Special procedures for purity analysis.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... kinds. After completing the blowing procedure, remove all weed and other crop seeds from the light portion and add these to the weed or other crop separation, as appropriate. The remainder of the light portion shall be considered inert matter. Remove all weed and other crop seeds and other inert...

  2. 7 CFR 201.51a - Special procedures for purity analysis.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... kinds. After completing the blowing procedure, remove all weed and other crop seeds from the light portion and add these to the weed or other crop separation, as appropriate. The remainder of the light portion shall be considered inert matter. Remove all weed and other crop seeds and other inert...

  3. 7 CFR 201.51a - Special procedures for purity analysis.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... kinds. After completing the blowing procedure, remove all weed and other crop seeds from the light portion and add these to the weed or other crop separation, as appropriate. The remainder of the light portion shall be considered inert matter. Remove all weed and other crop seeds and other inert...

  4. Facilitating Generalized Requesting Behavior in Broca's Aphasia: An Experimental Analysis of a Generalization Training Procedure.

    ERIC Educational Resources Information Center

    Doyle, Patrick J.; And Others

    1989-01-01

    The effects of a generalization training procedure on requesting by four adult subjects with Broca's aphasia were examined. Results revealed that generalization effects were greatest when trainers, as opposed to unfamiliar volunteers, served as conversational participants. Subjects' requests increased to a level comparable to a normal comparison…

  5. Finite element procedures for coupled linear analysis of heat transfer, fluid and solid mechanics

    NASA Technical Reports Server (NTRS)

    Sutjahjo, Edhi; Chamis, Christos C.

    1993-01-01

    Coupled finite element formulations for fluid mechanics, heat transfer, and solid mechanics are derived from the conservation laws for energy, mass, and momentum. To model the physics of interactions among the participating disciplines, the linearized equations are coupled by combining domain and boundary coupling procedures. An iterative numerical solution strategy is presented for solving the equations, with a partitioned temporal discretization.

  6. Recession Vs Myotomy–Comparative Analysis of Two Surgical Procedures of Weakening Inferior Oblique Muscle Overaction

    PubMed Central

    Alajbegovic-Halimic, Jasmina; Zvizdic, Denisa; Sahbegovic-Holcner, Amra; Kulanic-Kuduzovic, Amira

    2015-01-01

    Introduction: Inferior oblique overaction (IOOA) can be primary or secondary, isolated or combined with other types of horizontal deviation, mostly esotropias. Surgical weakening of IOOA encompasses several techniques: recession, myotomy, myectomy, anteroposition, etc. Goals: We analyzed the effect of surgical weakening of the inferior oblique muscle by comparing two groups of patients with primary hypertropia. Material and methods: In a 5-year retrospective study, we observed 33 patients in whom inferior oblique overaction was surgically weakened by one of two methods, recession or myotomy. Results: Of the 33 patients, 57.6% were male and 42.4% female, with an average age of 10.6±7.5 years (range 4-36). Isolated primary hypertropia was present in 33.3% of patients, and 66.7% had hypertropia combined with esotropia. Recession was performed in 23 patients (69.9%) and myotomy in 10 (30.1%). The effect and binocularity were better in the recession group (65.2% of patients), a difference that was statistically significant at the p<0.05 level (χ²=5.705; p=0.021). Conclusion: Comparing the two surgical procedures for weakening inferior oblique muscle overaction, recession is the better procedure. PMID:26261384

  7. Systems and Procedures of Certification of Qualifications in the European Community. Comparative Analysis. CEDEFOP Panorama.

    ERIC Educational Resources Information Center

    Gordon, Jean

    The European Centre for the Development of Vocational Training (CEDEFOP) commissioned 12 experts in the vocational training (VT) systems of the individual member states of the European Community (EC) to develop monographs describing the EC members' VT systems and procedures/systems for certifying vocational qualifications. The 12 national studies…

  8. A Qualitative Analysis of the Determinants in the Choice of a French Journal Reviewing Procedures

    ERIC Educational Resources Information Center

    Morge, Ludovic

    2015-01-01

    Between 1993 and 2010, two French journals (Aster and Didaskalia) coming from different backgrounds but belonging to the same institution used to publish papers on research in science and technology education. The merging of these journals made it necessary for them to compare the different reviewing procedures used by each. This merging occurred…

  9. A qualitative analysis of the determinants in the choice of a French journal reviewing procedures

    NASA Astrophysics Data System (ADS)

    Morge, Ludovic

    2015-12-01

    Between 1993 and 2010, two French journals (Aster and Didaskalia), coming from different backgrounds but belonging to the same institution, published papers on research in science and technology education. The merging of these journals made it necessary to compare the different reviewing procedures each had used. The merger occurred at a time when research is becoming increasingly international, which partly determined some of the choices of reviewing procedure: for a francophone international journal to survive, it needs to take this internationalization into account in a reasoned manner. The author of this article, chief editor of RDST (Recherches en Didactique des Sciences et des Technologies), the journal resulting from the merger, analyses the social, cultural and pragmatic determinants that shaped the choices made in the reviewing procedures. This paper describes how this diversity of factors led to dropping the idea of a standard reviewing procedure that would be valid for all journals.

  10. Evaluation of solution procedures for material and/or geometrically nonlinear structural analysis by the direct stiffness method.

    NASA Technical Reports Server (NTRS)

    Stricklin, J. A.; Haisler, W. E.; Von Riesemann, W. A.

    1972-01-01

    This paper presents an assessment of the solution procedures available for the analysis of inelastic and/or large-deflection structural behavior. A literature survey summarizes the contributions of other researchers to the analysis of structural problems exhibiting material nonlinearities and combined geometric-material nonlinearities. Attention is focused on evaluating the available computation and solution techniques. Each solution technique is developed from a common equation of equilibrium in terms of pseudo forces. The solution procedures are applied to circular plates and shells of revolution in order to compare and evaluate each with respect to computational accuracy, economy, and efficiency. Based on the numerical studies, observations and comments are made regarding the accuracy and economy of each solution technique.
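
The common equilibrium equation in terms of pseudo forces can be illustrated on a single degree of freedom: the linear stiffness stays on the left-hand side and the nonlinear part is moved to the right as a pseudo force that is updated each iteration. This is a sketch of the general scheme only, not the authors' multi-DOF plate/shell implementation, and the spring constants are hypothetical:

```python
def pseudo_force_solve(k0, f_ext, q, u0=0.0, tol=1e-10, max_iter=200):
    """Successive-substitution pseudo-force iteration: each iterate
    solves the linear system k0*u_new = f_ext + q(u_prev), where q(u)
    collects all nonlinear effects as a pseudo force."""
    u = u0
    for _ in range(max_iter):
        u_new = (f_ext + q(u)) / k0
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("pseudo-force iteration did not converge")

# Softening spring with internal force f_int(u) = k0*u - a*u^3,
# so the pseudo force is q(u) = a*u^3 (hypothetical numbers).
k0, a, f_ext = 100.0, 5.0, 50.0
u = pseudo_force_solve(k0, f_ext, lambda x: a * x ** 3)
```

The attraction of the formulation the paper evaluates is visible even here: the left-hand-side stiffness never changes, so in the matrix case it can be factored once and reused every iteration.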

  11. Scalable histopathological image analysis via active learning.

    PubMed

    Zhu, Yan; Zhang, Shaoting; Liu, Wei; Metaxas, Dimitris N

    2014-01-01

    Training an effective and scalable system for medical image analysis usually requires a large amount of labeled data, which imposes a tremendous annotation burden on pathologists. Recent progress in active learning can alleviate this issue, leading to a great reduction in labeling cost without sacrificing prediction accuracy too much. However, most existing active learning methods disregard the "structured information" that may exist in medical images (e.g., data from individual patients), and make the simplifying assumption that unlabeled data are independently and identically distributed. Neither may be suitable for real-world medical images. In this paper, we propose a novel batch-mode active learning method which explores and leverages such structured information in annotations of medical images to enforce diversity among the selected data, thereby maximizing the information gain. We formulate the active learning problem as an adaptive submodular function maximization problem subject to a partition matroid constraint, and further present an efficient greedy algorithm that achieves a good solution with a theoretically proven bound. We demonstrate the efficacy of our algorithm on thousands of histopathological images of breast microscopic tissues. PMID:25320821
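
The selection strategy described, greedy maximization of a submodular gain under a partition matroid that caps how many images may come from any one patient, can be sketched as follows. The grouping, item values, and coverage objective are hypothetical stand-ins for the paper's actual objective function:

```python
def greedy_partition_matroid(items, groups, caps, gain, budget):
    """Greedy submodular maximisation subject to a partition matroid:
    at most caps[g] items may be chosen from group g (e.g. per patient),
    which enforces diversity among the selected images."""
    chosen, used = [], {g: 0 for g in caps}
    while len(chosen) < budget:
        best, best_gain = None, 0.0
        for it in items:
            g = groups[it]
            if it in chosen or used[g] >= caps[g]:
                continue  # already picked, or group quota exhausted
            d = gain(chosen + [it]) - gain(chosen)  # marginal gain
            if d > best_gain:
                best, best_gain = it, d
        if best is None:
            break  # no feasible item adds positive gain
        chosen.append(best)
        used[groups[best]] += 1
    return chosen

# Hypothetical: 6 images from 2 patients; "coverage" sums the top-4 values
groups = {0: 'p1', 1: 'p1', 2: 'p1', 3: 'p2', 4: 'p2', 5: 'p2'}
value = {0: 5.0, 1: 4.0, 2: 3.0, 3: 4.5, 4: 2.0, 5: 1.0}
cover = lambda s: sum(sorted((value[i] for i in s), reverse=True)[:4])
picked = greedy_partition_matroid(list(range(6)), groups,
                                  {'p1': 2, 'p2': 2}, cover, 4)
```

Without the per-patient caps the greedy pass would take the three highest-value images of patient p1; the matroid constraint forces it to spread the batch across both patients.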

  12. Guidelines as rationing tools: a qualitative analysis of psychosocial patient selection criteria for cardiac procedures

    PubMed Central

    Giacomini, Mita K.; Cook, Deborah J.; Streiner, David L.; Anand, Sonia S.

    2001-01-01

    Background Cardiac procedure guidelines often include psychosocial criteria for selecting patients that potentially introduce social value judgements into clinical decisions and decisions about the rationing of care. The aim of this study was to investigate the terms and justifications for and the meanings of psychosocial patient characteristics used in cardiac procedure guidelines. Methods We selected English-language guidelines published since 1990 and chapters in textbooks published since 1989. These guidelines amalgamated multiple sources of evidence and expertise and made recommendations regarding patient selection for specific procedures. A multidisciplinary team of physicians and social scientists extracted passages regarding psychosocial criteria and developed categories and conceptual relationships to describe and interpret their content. Results Sixty-five papers met the criteria for inclusion in the study. Forty-five (69%) mentioned psychosocial criteria as procedure indications or contraindications. The latter fell into several categories, including behavioural and psychological issues, relationships with significant others, financial resources, social roles and environmental circumstances. Interpretation Psychosocial characteristics are portrayed as having 2 roles in patient selection: as risk factors intrinsic to the candidate or as indicators of need for special intervention. Guidelines typically simply list psychosocial contraindications without clarifying their specific nature or providing any justification for their use. Psychosocial considerations can help in the evaluation of patients for cardiac procedures, but they become ethically controversial when used to restrict access. The use of psychosocial indications and contraindications could be improved by more precise descriptions of the psychosocial problem at issue, explanations regarding why the criterion matters and justification of the characteristic using a biological rationale or research

  13. Comparison of abdominal and perineal procedures for complete rectal prolapse: an analysis of 104 patients

    PubMed Central

    Lee, Jong Lyul; Yang, Sung Soo; Park, In Ja; Yu, Chang Sik

    2014-01-01

    Purpose Selecting the best surgical approach for treating complete rectal prolapse involves comparing the operative and functional outcomes of the procedures. The aims of this study were to evaluate and compare the operative and functional outcomes of abdominal and perineal surgical procedures for patients with complete rectal prolapse. Methods A retrospective study of patients with complete rectal prolapse who had operations at a tertiary referral hospital and a university hospital between March 1990 and May 2011 was conducted. Patients were classified according to the type of operation: abdominal procedure (AP) (n = 64) or perineal procedure (PP) (n = 40). The operative outcomes and functional results were assessed. Results The patients in the AP group were younger and more often men than those in the PP group. The AP group had longer operation times than the PP group (165 minutes vs. 70 minutes; P = 0.001) and longer hospital stays (10 days vs. 7 days; P = 0.001), but a lower overall recurrence rate (6.3% vs. 15.0%; P = 0.14). The overall rate of major complications was similar in both groups (10.9% vs. 6.8%; P = 0.47). Patients in the AP group complained more frequently of constipation than of incontinence; conversely, those in the PP group complained more frequently of incontinence than of constipation. Conclusion The two approaches for treating complete rectal prolapse did not differ with regard to postoperative morbidity, but overall recurrence tended to occur more frequently among patients in the PP group. The functional results of each surgical approach need to be considered when selecting the procedure. PMID:24851226

  14. Calibration of a dynamic analysis procedure using measurements from a North Sea jack-up in a severe storm

    SciTech Connect

    Weaver, T.O.; Brinkmann, C.R.

    1995-12-31

    Field measurements taken from the Maersk Guardian jack-up operating in 75 m of water in the North Sea are used to calibrate dynamic analysis procedures. These procedures include wave load modeling, damping, and, most importantly, foundation fixity. The measurements demonstrate the existence of foundation fixity during a large storm event; however, they also indicate some nonlinearity occurring during the storm. This paper describes the measurement program, along with a summary of the data recovered. It continues with a comparison of these measurements to time-domain analytical results. These time-domain analyses are based primarily on procedures specified in the Recommended Practice for Site Specific Assessment of Mobile Jackup Units developed as a joint-industry project. Time-domain analysis was used to study various nonlinear foundation models; the load-shedding model was found to directionally produce the best fit to the measurements. It is concluded that neglecting foundation fixity may be conservative in a dynamic analysis of a jack-up. 12 refs., 13 figs., 1 tab.

  15. Analysis of plant gums and saccharide materials in paint samples: comparison of GC-MS analytical procedures and databases

    PubMed Central

    2012-01-01

    Background Saccharide materials have been used for centuries as binding media to paint, write and illuminate manuscripts and to apply metallic leaf decorations. Although the technical literature often reports on the use of plant gums as binders, several other saccharide materials can actually be encountered in paint samples, not only as major binders but also as additives. The literature describes a variety of analytical procedures that use GC-MS to characterize saccharide materials in paint samples; however, the chromatographic profiles are often so different that it is impossible to compare them and reliably identify the paint binder. Results This paper presents a comparison between two different GC-MS-based analytical procedures for the analysis of saccharide materials in works of art. The research presented here evaluates the influence of the analytical procedure used and how it impacts the sugar profiles obtained from the analysis of paint samples that contain saccharide materials. The procedures have been developed, optimised and systematically used to characterise plant gums at the Getty Conservation Institute in Los Angeles, USA (GCI) and at the Department of Chemistry and Industrial Chemistry of the University of Pisa, Italy (DCCI). The main steps of the analytical procedures and their optimisation are discussed. Conclusions The results presented highlight that the two methods give comparable sugar profiles, whether the samples analysed are simple raw materials, pigmented and unpigmented paint replicas, or paint samples collected from centuries-old polychrome art objects. A common database of sugar profiles of reference materials commonly found in paint samples was thus compiled. The database also presents data from materials that contain only a minor saccharide fraction. This database highlights how many sources of saccharides can be found in a paint sample, representing an important step forward in the problem of

  16. HPLC-F analysis of melatonin and resveratrol isomers in wine using an SPE procedure.

    PubMed

    Mercolini, Laura; Addolorata Saracino, Maria; Bugamelli, Francesca; Ferranti, Anna; Malaguti, Marco; Hrelia, Silvana; Raggi, Maria Augusta

    2008-04-01

    An original analytical method has been developed for the determination of the antioxidants trans-resveratrol (t-RSV) and cis-resveratrol (c-RSV) and of melatonin (MLT) in red and white wine. The method is based on HPLC coupled to fluorescence detection. Separation was obtained using a RP column (C8, 150 mm x 4.6 mm i.d., 5 µm) and a mobile phase composed of 79% aqueous phosphate buffer at pH 3.0 and 21% ACN. Fluorescence intensity was monitored at λ = 386 nm while exciting at λ = 298 nm; mirtazapine was used as the internal standard. A careful pretreatment of wine samples was developed, using SPE with C18 cartridges (100 mg, 1 mL). The calibration curves were linear over the following concentration ranges: 0.03-5.00 ng/mL for MLT, 3-500 ng/mL for t-RSV and 1-150 ng/mL for c-RSV. The LOD values were 0.01 ng/mL for MLT, 1 ng/mL for t-RSV and 0.3 ng/mL for c-RSV. Precision data, as well as extraction yield and sample purification results, were satisfactory. Thus, the method seems to be suitable for the analysis of MLT and resveratrol isomers in wine samples. Moreover, wine total polyphenol content and antioxidant activity were evaluated.
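
Quantification against linear calibration curves like those above reduces to fitting a least-squares line to the standards and inverting it for each sample. The standard concentrations and peak responses below are hypothetical illustrations, not the paper's data:

```python
def fit_calibration(conc, signal):
    """Ordinary least-squares fit of signal = a*conc + b,
    the form behind a linear calibration curve."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    a = sxy / sxx            # slope (response per ng/mL)
    b = my - a * mx          # intercept
    return a, b

def quantify(signal, a, b):
    """Invert the calibration line to read a concentration
    off a sample's fluorescence response."""
    return (signal - b) / a

# Hypothetical t-RSV standards (ng/mL) vs fluorescence peak response
a, b = fit_calibration([3, 50, 150, 300, 500],
                       [6.1, 100.2, 299.8, 600.5, 1000.1])
c = quantify(400.0, a, b)    # sample response -> concentration (ng/mL)
```

A quantitation is only trusted when the inverted value falls inside the validated linear range (here 3-500 ng/mL for t-RSV) and above the LOD.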

  17. Electrodermal activity analysis during affective haptic elicitation.

    PubMed

    Greco, Alberto; Valenza, Gaetano; Nardelli, Mimma; Bianchi, Matteo; Lanata, Antonio; Scilingo, Enzo Pasquale

    2015-08-01

    This paper investigates how the autonomic nervous system dynamics, quantified through the analysis of the electrodermal activity (EDA), is modulated according to affective haptic stimuli. Specifically, a haptic display able to convey caress-like stimuli is presented to 32 healthy subjects (16 female). Each stimulus is changed according to six combinations of three velocities and two forces levels of two motors stretching a strip of fabric. Subjects were also asked to score each stimulus in terms of arousal (high/low activation) and valence (pleasant/unpleasant), in agreement with the circumplex model of affect. EDA was processed using a deconvolutive method, separating tonic and phasic components. A statistical analysis was performed in order to identify significant differences in EDA features among force and velocity levels, as well as in their valence and arousal scores. Results show that the simulated caress induced by the haptic display significantly affects the EDA. In detail, the phasic component seems to be inversely related to the valence score. This finding is new and promising, since it can be used, e.g., as an additional cue for haptics design. PMID:26737605
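
The tonic/phasic decomposition of EDA can be illustrated with a deliberately crude stand-in: a moving-average baseline as the slowly varying tonic level and the residual as the phasic component. The paper uses a proper deconvolutive method; this sketch, on a synthetic trace, only shows the idea of the split:

```python
def split_eda(signal, win):
    """Crude tonic/phasic split: a centred moving average (half-width
    `win` samples) approximates the tonic level; the residual stands in
    for the phasic component. Not the deconvolutive method of the paper."""
    n = len(signal)
    tonic = []
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        tonic.append(sum(signal[lo:hi]) / (hi - lo))
    phasic = [s - t for s, t in zip(signal, tonic)]
    return tonic, phasic

# Synthetic skin-conductance trace: slow drift plus one fast "response"
sig = [2.0 + 0.01 * i for i in range(100)]
for i in range(40, 46):
    sig[i] += 0.5   # transient phasic-like bump
tonic, phasic = split_eda(sig, 10)
```

On this trace the slow drift ends up almost entirely in the tonic trace, while the transient bump dominates the phasic residual, which is the separation the study's features are built on.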

  18. Final Report for Dynamic Models for Causal Analysis of Panel Data. Alternative Estimation Procedures for Event-History Analysis: A Monte Carlo Study. Part III, Chapter 5.

    ERIC Educational Resources Information Center

    Carroll, Glenn R.; And Others

    This document is part of a series of chapters described in SO 011 759. The chapter examines the merits of four estimators in the causal analysis of event-histories (data giving the number, timing, and sequence of changes in a categorical dependent variable). The four procedures are ordinary least squares, Kaplan-Meier least squares, maximum…

  19. Robustness of two-step acid hydrolysis procedure for composition analysis of poplar.

    PubMed

    Bhagia, Samarthya; Nunez, Angelica; Wyman, Charles E; Kumar, Rajeev

    2016-09-01

    The NREL standard procedure for lignocellulosic biomass composition has two steps: primary hydrolysis in 72 wt% sulfuric acid at 30°C for 1 h, followed by secondary hydrolysis of the slurry in 4 wt% acid at 121°C for 1 h. Although pointed out in the NREL procedure, the impact of particle size on composition has never been shown. In addition, the effects of primary hydrolysis time and of separating the solids prior to secondary hydrolysis on composition have never been shown. Using poplar, it was found that particle sizes less than 0.250 mm significantly lowered the glucan content and increased the Klason lignin, but did not affect the xylan, acetate, or acid-soluble lignin contents. Composition was unaffected for primary hydrolysis times between 30 and 90 min. Moreover, separating the solids prior to secondary hydrolysis had a negligible effect on composition, suggesting that lignin and polysaccharides are completely separated in the primary hydrolysis stage. PMID:27282557

  20. A component analysis of toilet-training procedures recommended for young children.

    PubMed

    Greer, Brian D; Neidert, Pamela L; Dozier, Claudia L

    2016-03-01

    We evaluated the combined and sequential effects of 3 toilet-training procedures recommended for use with young children: (a) underwear, (b) a dense sit schedule, and (c) differential reinforcement. A total of 20 children participated. Classroom teachers implemented a toilet-training package consisting of all 3 procedures with 6 children. Of the 6 children, 2 showed clear and immediate improvements in toileting performance, and 3 showed delayed improvements. Teachers implemented components of the training package sequentially with 12 children. At least 2 of the 4 children who experienced the underwear component after baseline improved. Toileting performance did not improve for any of the 8 children who were initially exposed to either the dense sit schedule or differential reinforcement. When initial training components were ineffective, teachers implemented additional components sequentially until toileting performance improved or all components were implemented. Toileting performance often improved when underwear or differential reinforcement was later added.

  1. An analysis of the procedural components of supported employment programs associated with employment outcomes.

    PubMed Central

    McDonnell, J; Nofs, D; Hardman, M; Chambless, C

    1989-01-01

    This study examined the relation between the procedural components of supported employment programs and employment outcomes for 120 individuals with disabilities. These individuals were involved in supported employment programs established through the Utah Supported Employment Project. The results suggest that successful implementation of supported employment services led to ongoing employment of study participants in community work sites, increased wages, and ongoing opportunities for workers to interact with nondisabled peers. In addition, several procedural components were found to be strongly associated with successful employment outcomes for workers. Results of the study are discussed in terms of the training needs of supported employment program staff and future research for the dissemination of a cohesive technology of supported employment. PMID:2613600

  2. A MATERIAL COST-MINIMIZATION ANALYSIS FOR HERNIA REPAIRS AND MINOR PROCEDURES DURING A SURGICAL MISSION IN THE DOMINICAN REPUBLIC

    PubMed Central

    Cavallo, Jaime A.; Ousley, Jenny; Barrett, Christopher D.; Baalman, Sara; Ward, Kyle; Borchardt, Malgorzata; Thomas, J. Ross; Perotti, Gary; Frisella, Margaret M.; Matthews, Brent D.

    2013-01-01

    INTRODUCTION Expenditures on material supplies and medications constitute the greatest per capita costs for surgical missions. We hypothesized that supply acquisition at nonprofit organization (NPO) costs would lead to significant cost-savings compared to supply acquisition at US academic institution costs from the provider perspective for hernia repairs and minor procedures during a surgical mission in the Dominican Republic (DR). METHODS Items acquired for a surgical mission were uniquely QR-coded for accurate consumption accounting. Both NPO and US academic institution unit costs were associated with each item in an electronic inventory system. Medication doses were recorded and QR-codes for consumed items were scanned into a record for each sampled procedure. Mean material costs and cost savings ± SDs were calculated in US dollars for each procedure type. Cost-minimization analyses between the NPO and the US academic institution platforms were performed for each procedure type using a two-tailed Wilcoxon matched-pairs test with α=0.05. Item utilization analyses generated lists of the most frequently used materials by procedure type. RESULTS The mean cost savings of supply acquisition at NPO costs for each procedure type were as follows: $482.86 ± $683.79 for unilateral inguinal hernia repair (IHR, n=13); $332.46 ± $184.09 for bilateral inguinal hernia repair (BIHR, n=3); $127.26 ± $13.18 for hydrocelectomy (HC, n=9); $232.92 ± $56.49 for femoral hernia repair (FHR, n=3); $120.90 ± $30.51 for umbilical hernia repair (UHR, n=8); $36.59 ± $17.76 for minor procedures (MP, n=26); and $120.66 ± $14.61 for pediatric inguinal hernia repair (PIHR, n=7). CONCLUSION Supply acquisition at NPO costs leads to significant cost-savings compared to supply acquisition at US academic institution costs from the provider perspective for IHR, HC, UHR, MP, and PIHR during a surgical mission to the DR. Item utilization analysis can generate minimum-necessary material lists for each procedure

  3. Development of a unified numerical procedure for free vibration analysis of structures

    NASA Technical Reports Server (NTRS)

    Gupta, K. K.

    1981-01-01

    This paper presents the details of a unified numerical algorithm and the associated computer program developed for the efficient determination of natural frequencies and modes of free vibration of structures. Both spinning and nonspinning structures with or without viscous and/or structural damping may be solved by the current procedure; in addition, the program is capable of solving static problems with multiple load cases as well as the quadratic matrix eigenproblem associated with a finite dynamic element structural discretization.
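
    The quadratic matrix eigenproblem mentioned above can be illustrated by linearizing (λ²M + λC + K)x = 0 into a standard 2n×2n eigenproblem. This is a generic sketch of that standard reduction, not the paper's algorithm; the 2-DOF system is invented for illustration.

```python
import numpy as np

def quadratic_eig(M, C, K):
    """Solve the quadratic eigenproblem (lam^2 M + lam C + K) x = 0
    by linearizing to the companion form A z = lam z, z = [x; lam x]."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    lam, V = np.linalg.eig(A)
    return lam, V[:n]   # eigenvalues and the displacement part of the modes

# Example: 2-DOF oscillator with light proportional (viscous) damping
M = np.eye(2)
K = np.array([[2.0, -1.0], [-1.0, 2.0]])
C = 0.05 * K
lam, modes = quadratic_eig(M, C, K)
```

For light proportional damping the eigenvalues come in complex-conjugate pairs with magnitude equal to the undamped natural frequencies (here 1 and √3 rad/s) and negative real parts.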

  4. An integrated portfolio optimisation procedure based on data envelopment analysis, artificial bee colony algorithm and genetic programming

    NASA Astrophysics Data System (ADS)

    Hsu, Chih-Ming

    2014-12-01

    Portfolio optimisation is an important issue in the field of investment/financial decision-making and has received considerable attention from both researchers and practitioners. However, besides portfolio optimisation, a complete investment procedure should also include the selection of profitable investment targets and the determination of the optimal timing for buying/selling the investment targets. In this study, an integrated procedure using data envelopment analysis (DEA), artificial bee colony (ABC) and genetic programming (GP) is proposed to resolve a portfolio optimisation problem. The proposed procedure is evaluated through a case study on investing in stocks in the semiconductor sub-section of the Taiwan stock market for 4 years. The potential average 6-month return on investment of 9.31% from 1 November 2007 to 31 October 2011 indicates that the proposed procedure can be considered a feasible and effective tool for making outstanding investment plans, and thus making profits in the Taiwan stock market. Moreover, it is a strategy that can help investors to make profits even when the overall stock market suffers a loss.

  5. Overview of validation procedures for building energy-analysis simulation codes. [SUNCAT 2.4, DEROB 4, DOE 2.1, BLAST]

    SciTech Connect

    Wortman, D.; O'Doherty, B.; Judkoff, R.

    1981-03-01

    SERI is developing a procedure for the validation of Building Energy Analysis Simulation Codes (BEAS). These codes are being used increasingly in the building design process, both directly and as the basis for simplified design tools and guidelines. The importance of the validity of the BEAS in predicting building energy performance is obvious when one considers the money and energy which could be wasted by energy-inefficient designs. However, to date, little or no systematic effort has been made to ensure the validity of the various BEAS. The validation work at SERI consists of three distinct parts: Comparative Study, Analytical Verification, and Empirical Validation. The procedures have been developed for the first two parts, and these procedures have been implemented on a sampling of the major BEAS. Results from this work have shown major problems in two of the BEAS tested. Furthermore, when one building design was run on several of the BEAS, there were large differences in the predicted annual heating loads. The empirical validation procedure will be developed when high quality empirical data become available.

  6. Laboratory guidelines and procedures for coal analysis: Volume 1, Assessing the cleanability of fine coal

    SciTech Connect

    Bosold, R.C.; Glessner, D.M.

    1988-05-01

    The conventional laboratory static bath float/sink method of measuring the theoretical limits of coal cleaning is unreliable for ultra-fine (minus 100M topsize) coal particles because of their long and erratic settling rates. Developing a reliable method to assess the theoretical cleanability of ultra-fine coal has been given impetus by the increased emphasis on reducing sulfur dioxide emissions from power plants, greater quantities of fines created by mechanized mining methods, and the development of advanced physical coal cleaning processes that grind coal to ultra-fine sizes in an effort to achieve a high degree of coal impurity liberation. EPRI, therefore, commissioned researchers at the Homer City Coal Laboratory in western Pennsylvania to develop and demonstrate a float/sink procedure for ultra-fine sizes. Based on test work performed on two ultra-fine size fractions (100M x 200M and 200M x 0), a detailed laboratory procedure using a centrifugal device was established. Results obtained using the guideline presented in this report are as accurate as those obtained using the static bath float/sink method, and for 200M x 0 material, more accurate. In addition, the centrifugal procedure is faster and less costly than the conventional static bath float/sink method. 12 refs., 32 figs., 1 tab.

  7. Procedural analysis of a new method for determining the Gibbs energy and experimental data on thermodynamic properties of liquid-metal coolants based on alkali metal alloys

    SciTech Connect

    Kagan, D. N. Krechetova, G. A.; Shpil'rain, E. E.

    2010-12-15

    A detailed procedural analysis is given and results of implementation of the new version of the effusion method for determining the Gibbs energy (thermodynamic activity) of binary and ternary systems of alkali metals Cs-Na, K-Na, Cs-K, and Cs-K-Na are presented. The activity is determined using partial pressures of the components measured according to the effusion method by the intensity of their atomic beams. The pressure range used in the experiment is intermediate between the Knudsen and hydrodynamic effusion modes. A generalized version of the effusion method involves the pressure range beyond the limits of applicability of the Hertz-Knudsen equation. Employment of this method provides the differential equation of chemical thermodynamics; solution of this equation makes it possible to construct the Gibbs energy in the range of temperatures 400 ≤ T ≤ 1200 K and concentrations 0 ≤ x_i ≤ 1.
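
    The link between measured partial pressures and the Gibbs energy can be sketched with the standard relations a_i = p_i/p_i° (Raoult basis) and ΔG_i = RT ln a_i. The pressures below are invented placeholders, not measured values from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def activity(p_i, p_pure):
    """Thermodynamic activity from the partial pressure p_i of a component
    over the alloy and p_pure over the pure metal (Raoult reference state)."""
    return p_i / p_pure

def partial_gibbs(a_i, T):
    """Partial molar Gibbs energy of mixing, Delta G_i = R T ln a_i."""
    return R * T * math.log(a_i)

# Illustrative (not measured) numbers for a K-Na melt at 700 K:
T = 700.0
a_K = activity(p_i=1.2e3, p_pure=2.0e3)   # hypothetical pressures, Pa
dG_K = partial_gibbs(a_K, T)              # negative for a stable solution
```

An activity below one gives a negative partial Gibbs energy of mixing, consistent with a thermodynamically stable liquid alloy.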

  8. Economics definitions, methods, models, and analysis procedures for Homeland Security applications.

    SciTech Connect

    Ehlen, Mark Andrew; Loose, Verne William; Vargas, Vanessa N.; Smith, Braeton J.; Warren, Drake E.; Downes, Paula Sue; Eidson, Eric D.; Mackey, Greg Edward

    2010-01-01

    This report gives an overview of the types of economic methodologies and models used by Sandia economists in their consequence analysis work for the National Infrastructure Simulation & Analysis Center and other DHS programs. It describes the three primary resolutions at which analysis is conducted (microeconomic, mesoeconomic, and macroeconomic), the tools used at these three levels (from data analysis to internally developed and publicly available tools), and how they are used individually and in concert with each other and other infrastructure tools.

  9. Care staff training in residential homes for managing behavioural and psychological symptoms of dementia based on differential reinforcement procedures of applied behaviour analysis: a process research.

    PubMed

    Noguchi, Dai; Kawano, Yoshiyuki; Yamanaka, Katsuo

    2013-06-01

    Previous studies of care staff training programmes for managing behavioural and psychological symptoms of dementia (BPSD) based on the antecedent-behaviour-consequence analysis of applied behaviour analysis have not included definite intervention strategies. This case study examined the effects of such a programme when combined with differential reinforcement procedures. We examined two female care home residents with dementia of Alzheimer's type. One resident (C) exhibited difficulty in sitting in her seat and made frequent visits to the restroom. The other resident (D) avoided contact with others and insisted on staying in her room. These residents were cared for by 10 care staff trainees. Using an original workbook, we trained the staff regarding the antecedent-behaviour-consequence analysis with differential reinforcement procedures. On the basis of their training, the staff implemented individual care plans for these residents. This study comprised a baseline phase and an intervention phase (IN) to assess the effectiveness of this approach as a process research. One month after IN ended, data for the follow-up phase were collected. In both residents, the overall frequency of the target behaviour of BPSD decreased, whereas the overall rate of engaging in leisure activities as an alternative behaviour increased more during IN than during the baseline phase. In addition, the overall rate of staff actions to support residents' activities increased more during IN than during the baseline phase. However, the frequency of the target behaviour of BPSD gradually increased during IN and the follow-up phase in both residents. Simultaneously, the rate of engaging in leisure activities and the staff's treatment integrity gradually decreased for C. The training programme was effective in decreasing BPSD and increasing prosocial behaviours in these two cases. However, continuous support for the staff is essential for maintaining effects.

  10. Type I error and statistical power of the Mantel-Haenszel procedure for detecting DIF: a meta-analysis.

    PubMed

    Guilera, Georgina; Gómez-Benito, Juana; Hidalgo, Maria Dolores; Sánchez-Meca, Julio

    2013-12-01

    This article presents a meta-analysis of studies investigating the effectiveness of the Mantel-Haenszel (MH) procedure when used to detect differential item functioning (DIF). Studies were located electronically in the main databases, representing the codification of 3,774 different simulation conditions, 1,865 related to Type I error and 1,909 to statistical power. The homogeneity of effect-size distributions was assessed by the Q statistic. The extremely high heterogeneity in both error rates (I² = 94.70) and power (I² = 99.29), due to the fact that numerous studies test the procedure in extreme conditions, means that the main interest of the results lies in explaining the variability in detection rates. One-way analysis of variance was used to determine the effects of each variable on detection rates, showing that the MH test was more effective when purification procedures were used, when the data fitted the Rasch model, when test contamination was below 20%, and with sample sizes above 500. The results imply a series of recommendations for practitioners who wish to study DIF with the MH test. A limitation, one inherent to all meta-analyses, is that not all the possible moderator variables, or the levels of variables, have been explored. This serves to remind us of certain gaps in the scientific literature (i.e., regarding the direction of DIF or variances in ability distribution) and is an aspect that methodologists should consider in future simulation studies. PMID:24127986
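
    The Mantel-Haenszel DIF statistic evaluated in the meta-analysis can be sketched directly from its textbook form: 2×2 tables (reference vs. focal group, correct vs. incorrect) are built per score stratum, then combined into a continuity-corrected chi-square and a common odds ratio. The toy data below are invented to show the no-DIF case.

```python
def mantel_haenszel_dif(strata):
    """MH DIF statistic over score strata.
    Each stratum: (A, B, C, D) = (ref correct, ref wrong,
                                  focal correct, focal wrong)."""
    a_sum = expected = var = r = f = 0.0
    for A, B, C, D in strata:
        N = A + B + C + D
        nR, nF = A + B, C + D          # group sizes
        m1, m0 = A + C, B + D          # item correct / incorrect margins
        a_sum += A
        expected += nR * m1 / N        # E(A) under the no-DIF hypothesis
        var += nR * nF * m1 * m0 / (N * N * (N - 1))
        r += A * D / N
        f += B * C / N
    chi2 = (abs(a_sum - expected) - 0.5) ** 2 / var   # continuity-corrected
    alpha = r / f                                     # MH common odds ratio
    return chi2, alpha

# Balanced toy strata with identical group performance: no DIF expected
strata = [(30, 10, 30, 10), (20, 20, 20, 20), (10, 30, 10, 30)]
chi2, alpha = mantel_haenszel_dif(strata)
```

With identical reference and focal performance in every stratum, the common odds ratio is exactly 1 and the chi-square statistic is near zero, so the item shows no DIF.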

  11. The procedures used to review safety analysis reports for packagings submitted to the US Department of Energy for certification

    SciTech Connect

    Popper, G.F.; Raske, D.T.; Turula, P.

    1988-01-01

    This paper presents an overview of the procedures used at the Argonne National Laboratory (ANL) to review Safety Analysis Reports for Packagings (SARPs) submitted to the US Department of Energy (DOE) for issuance of a Certificate of Compliance. Prior to certification and shipment of a packaging for the transport of radioactive materials, a SARP must be prepared describing the design, contents, analyses, testing, and safety features of the packaging. The SARP must be reviewed to ensure that the specific packaging meets all DOE orders and federal regulations for safe transport. The ANL SARP review group provides an independent review and evaluation function for the DOE to ensure that the packaging meets all the prescribed requirements. This review involves many disciplines and includes evaluating the general information, drawings, construction details, operating procedures, maintenance and test programs, and the quality assurance plan for compliance with requirements. 14 refs., 6 figs.

  12. A semi-automated image analysis procedure for in situ plankton imaging systems.

    PubMed

    Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M

    2015-01-01

    Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle to their deployment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images acquired under laboratory-controlled conditions or in clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and the nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton, and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike in images acquired under laboratory-controlled conditions or in clear waters, the target objects are often the minority class, and the classification can be treated as a multi-class classification problem. We customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%) and remove the non-target objects (> 95%). First, histograms of oriented gradients feature descriptors were constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects. After the object was classified, an expert or non-expert then manually removed the
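
    The adaptive-threshold segmentation step for small organisms can be sketched as follows: a pixel is foreground when it is brighter than its local mean by more than an offset, and connected components then become candidate objects. This is a stand-in illustration of the idea, not the ZOOVIS code; the block size, offset, and synthetic frame are all invented.

```python
import numpy as np
from scipy.ndimage import uniform_filter, label

def segment(img, block=15, offset=15.0):
    """Adaptive-threshold segmentation for uneven illumination:
    threshold each pixel against its local (block x block) mean,
    then label connected foreground components."""
    local_mean = uniform_filter(img.astype(float), size=block)
    mask = img > local_mean + offset
    labels, n = label(mask)
    return labels, n

# Synthetic frame: flat background plus two bright blobs
img = np.full((64, 64), 100.0)
img[10:18, 10:18] += 80.0    # small copepod-like object
img[40:52, 30:44] += 80.0    # larger gelatinous-like object
labels, n = segment(img)
```

Because the threshold is relative to the local mean rather than a global value, the same rule works whether the background is bright or dark, which is the point of using it on turbid-water imagery.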

  13. Analysis of lightning field changes during active Florida thunderstorms

    SciTech Connect

    Koshak, W.J.; Krider, E.P.

    1989-01-20

    A computer algorithm has been developed to derive accurate values of lightning-caused changes in cloud electric fields under active storm conditions. This algorithm has been applied to data obtained from a network of ground-based electric field mills at the NASA Kennedy Space Center and the U.S. Air Force Cape Canaveral Air Force Station during portions of two storms. The resulting field changes have been analyzed using a least squares optimization procedure and point-charge (Q) and point-dipole (P) models. The results indicate that the values and time variations of the Q-model parameters under active storm conditions are similar to those reported previously for small storms when the computations are done with the same analysis criteria and comparable biases. The parameters of P solutions seem to vary with time within the storm interval and from storm to storm. The P vectors at low altitudes all tend to point upward, and those at high altitudes almost always point downward. When a P solution is located in the altitude range corresponding to Q altitudes, the direction of P tends to be horizontal. Since Q solutions typically describe cloud-to-ground lightning and P solutions describe cloud discharges (Maier and Krider, 1986), the altitude dependence of the P vectors is consistent with the classic thunder-cloud charge model that has an excess negative charge at altitudes corresponding to the Q altitudes.
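
    The point-charge (Q) model fit described above can be sketched as a least-squares search: the vertical field change at a ground-level mill from a charge Q at height H over a conducting ground plane is ΔE = 2kQH/(D² + H²)^(3/2), which is linear in Q, so Q follows in closed form once a trial position is chosen. The mill network, grid ranges, and "measured" values below are all synthetic, not KSC data.

```python
import numpy as np

K = 1.0 / (4 * np.pi * 8.854e-12)  # Coulomb constant, N*m^2/C^2

def dE_model(mills, x0, y0, H, Q):
    """Field change at ground-level mills from a point charge Q at
    height H, including the image charge in the conducting ground."""
    d2 = (mills[:, 0] - x0) ** 2 + (mills[:, 1] - y0) ** 2 + H ** 2
    return 2 * K * Q * H / d2 ** 1.5

# Synthetic mill network and "measured" field changes from a known solution
mills = np.array([[0.0, 0.0], [5e3, 0.0], [0.0, 5e3], [-4e3, 3e3], [2e3, -6e3]])
true = (1e3, -2e3, 7e3, -20.0)   # x0, y0 (m), H (m), Q (C)
meas = dE_model(mills, *true)

# Coarse grid over position/height; Q then follows linearly
best = None
for x0 in np.linspace(-3e3, 3e3, 25):
    for y0 in np.linspace(-4e3, 2e3, 25):
        for H in np.linspace(4e3, 10e3, 25):
            g = dE_model(mills, x0, y0, H, 1.0)  # unit-charge geometry factor
            Q = g @ meas / (g @ g)               # closed-form linear fit for Q
            r = meas - Q * g
            err = r @ r
            if best is None or err < best[0]:
                best = (err, x0, y0, H, Q)
```

In practice a gradient-based optimizer would replace the grid, but the structure is the same: a nonlinear search over geometry with the charge magnitude fitted linearly at each step.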

  14. Mercury mass measurement in fluorescent lamps via neutron activation analysis

    NASA Astrophysics Data System (ADS)

    Viererbl, L.; Vinš, M.; Lahodová, Z.; Fuksa, A.; Kučera, J.; Koleška, M.; Voljanskij, A.

    2015-11-01

    Mercury is an essential component of fluorescent lamps. Not all fluorescent lamps are recycled, resulting in contamination of the environment with toxic mercury and making measurement of the mercury mass used in fluorescent lamps important. Mercury mass measurement of lamps via instrumental neutron activation analysis (NAA) was tested under various conditions in the LVR-15 research reactor. Fluorescent lamps were irradiated in different positions in vertical irradiation channels and a horizontal channel in neutron fields with total fluence rates from 3×10⁸ cm⁻² s⁻¹ to 10¹⁴ cm⁻² s⁻¹. The ²⁰²Hg(n,γ)²⁰³Hg nuclear reaction was used for mercury mass evaluation. Activities of ²⁰³Hg and other induced radionuclides were measured via gamma spectrometry with an HPGe detector at various times after irradiation. Standards containing an Hg₂Cl₂ compound were used to determine mercury mass. Problems arise from the presence of elements with a large effective cross section in the luminescent material (europium, antimony and gadolinium) and in the glass (boron). The paper describes optimization of the NAA procedure in the LVR-15 research reactor, with particular attention to the influence of neutron self-absorption in fluorescent lamps.
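
    The underlying activation calculation can be sketched with the standard (n,γ) activation equation, A = Nσφ(1 − e^(−λt_irr))e^(−λt_decay). The nuclear data below (²⁰²Hg abundance, cross section, ²⁰³Hg half-life) are approximate textbook values, and the sample mass and timing are invented for the sketch; self-absorption corrections, the paper's main concern, are ignored here.

```python
import math

N_A = 6.022e23  # Avogadro's number

def induced_activity(m_g, M, theta, sigma_b, phi, t_irr, t_decay, half_life):
    """Induced activity (Bq) from the (n,gamma) activation equation:
    A = N * sigma * phi * (1 - exp(-lam*t_irr)) * exp(-lam*t_decay)."""
    lam = math.log(2) / half_life
    N = m_g / M * N_A * theta        # target atoms of the parent isotope
    sigma = sigma_b * 1e-24          # barns -> cm^2
    return N * sigma * phi * (1 - math.exp(-lam * t_irr)) * math.exp(-lam * t_decay)

# Illustrative numbers for 202Hg(n,g)203Hg (approximate nuclear data):
A = induced_activity(
    m_g=3e-3,                # 3 mg of Hg in a lamp (assumed)
    M=200.6,                 # molar mass of natural mercury, g/mol
    theta=0.297,             # natural abundance of 202Hg
    sigma_b=4.9,             # thermal capture cross section, barns
    phi=1e13,                # neutron fluence rate, n/cm^2/s
    t_irr=3600.0,            # 1 h irradiation
    t_decay=86400.0,         # 1 d cooling before counting
    half_life=46.6 * 86400,  # 203Hg half-life, ~46.6 d
)
```

For these assumed conditions the induced ²⁰³Hg activity comes out in the tens of kBq, comfortably measurable by HPGe gamma spectrometry.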

  15. Analysis of physical activities in Taekwondo Pumsae.

    PubMed

    Lee, Sang-Bock; Cha, Eun-Jong; Lee, Tae-Soo

    2008-01-01

    Exercise is a very important element of successful aging. Among many sports, Korea is the home country of Taekwondo. After learning the Poomse basic movements and committing them to the body, trainees compete (Taekwondo free fighting) using movements chosen according to the situation. Among the Poomses of Taekwondo, the Taegeuk Poomse consists of the most basic movements, from 1 Jang to 8 Jang, which are practiced until they become second nature. In order to prescribe the Taegeuk Jang, the basic movement set of Taekwondo, as an exercise for successful aging, it is necessary to analyze the physical activity level of each Taegeuk Jang (from 1 Jang through 8 Jang). Therefore, in this study, we analyzed the physical activity level of each Jang of the Taegeuk Poomse by attaching an Armband made by BodyMedia to the forearms and lower legs of Taekwondo trainees. The analysis of the whole momentum from Taegeuk 1 Jang to 8 Jang gave the following results. First, the average absolute value of the acceleration variation of the vertical direction signal (L-MAD) was 5.15. Second, the average absolute value of the acceleration variation of the horizontal direction signal (T-MAD) was 3.44. Finally, the average calorie consumption per minute (AEE/Min) was 5.06 Cal. These results correspond to a proper exercise level for successful aging and can be utilized as data for exercise prescription for both the young and the old.
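
    The abstract does not give the exact formula behind its L-MAD and T-MAD indices; a common reading is the mean absolute difference between successive acceleration samples, sketched below on an invented toy trace.

```python
def mean_abs_delta(signal):
    """Mean absolute change between successive acceleration samples,
    a MAD-style activity index (assumed form, not the paper's exact one)."""
    return sum(abs(b - a) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

# Toy vertical-axis acceleration trace (arbitrary units)
trace = [0.0, 2.0, -1.0, 3.0, 3.0, 0.5]
l_mad = mean_abs_delta(trace)
```

A larger index means the signal changes more from sample to sample, i.e. more vigorous movement over the Poomse sequence.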

  16. Active polarimeter optical system laser hazard analysis.

    SciTech Connect

    Augustoni, Arnold L.

    2005-07-01

    A laser hazard analysis was performed for the SNL Active Polarimeter Optical System based on the ANSI Standard Z136.1-2000, American National Standard for Safe Use of Lasers, and the ANSI Standard Z136.6-2000, American National Standard for Safe Use of Lasers Outdoors. The Active Polarimeter Optical System (APOS) uses a pulsed, near-infrared, chromium-doped lithium strontium aluminum fluoride (Cr:LiSAF) crystal laser in conjunction with a holographic diffuser and lens to illuminate a scene of interest. The APOS is intended for outdoor operations. The system is mounted on a height-adjustable platform (6 feet to 40 feet) and sits atop a tripod that points the beam downward. The beam can be pointed from nadir to as much as 60 degrees off of nadir, producing an illuminating spot geometry that can vary from circular (at nadir) to elliptical in shape (off of nadir). The JP Innovations crystal Cr:LiSAF laser parameters are presented in section II. The illuminating laser spot size is variable and can be adjusted by changing the separation distance between the lens and the holographic diffuser. The system is adjusted while the platform is at the lowest level. The laser spot is adjusted for a particular spot size at a particular distance (elevation) from the laser by setting the separation distance (d_diffuser) to predetermined values. The downward pointing angle is also adjusted before the platform is raised to the selected operating elevation.

  17. Analysis of retrotransposon activity in plants.

    PubMed

    Defraia, Christopher; Slotkin, R Keith

    2014-01-01

    Retrotransposons are transposable elements that duplicate themselves by converting their transcribed RNA genome into cDNA, which is then integrated back into the genome. Retrotransposons can be divided into two major classes based on their mechanism of transposition and the presence or absence of long terminal repeats (LTRs). In contrast to mammalian genomes, in which non-LTR retrotransposons have proliferated, plant genomes show evolutionary evidence of an explosion in LTR retrotransposon copy number. These retrotransposons can comprise a large fraction of the genome (75 % in maize). Although often viewed as molecular parasites, retrotransposons have been shown to influence neighboring gene expression and play a structural and potential regulatory role in the centromere. To prevent retrotransposon activity, eukaryotic cells have evolved overlapping mechanisms to repress transposition. Plants are an excellent system for studying the mechanisms of LTR retrotransposon inhibition such as DNA methylation and small RNA-mediated degradation of retrotransposon transcripts. However, analysis of these multi-copy, mobile elements is considerably more difficult than analysis of single-copy genes located in stable regions of the genome. In this chapter we outline methods for analyzing the progress of LTR retrotransposons through their replication cycle in plants. We describe a mixture of traditional molecular biology experiments, such as Southern, Northern, and Western blotting, in addition to nontraditional techniques designed to take advantage of the specific mechanism of LTR retrotransposition.

  18. Statistical procedures for the design and analysis of in vitro mutagenesis assays

    SciTech Connect

    Kaldor, J.

    1983-03-01

    In previous statistical treatments of a certain class of mutagenesis assays, stochastic models of mutation and cell growth have not been utilized. In this paper, we review the assumptions under which these models are derived, introduce some further assumptions, and propose ways to estimate and test hypotheses regarding the parameters of the models from assay data. It is shown via simulation and exact calculation that if the models are valid, the proposed statistical procedures provide very accurate Type I error rates for hypothesis tests, and coverage probabilities for confidence intervals. The cases of a linear dose response relationship for mutagenesis, and a comparison of a set of treated cell cultures with a set of control cultures are treated in detail. Approximate power functions for hypothesis tests of interest are then derived, and these are also shown to be satisfactorily close to the true power functions. The approximations are used to develop guidelines for planning aspects of a mutagenesis assay, including the number, spacing and range of dose levels employed. Examples of applications of the procedures are provided, and the paper concludes with a discussion of future statistical work which may be carried out in the area of mutagenesis assays. 38 references, 8 figures, 7 tables.

  19. A sensitive flow-based procedure for spectrophotometric speciation analysis of inorganic bromine in waters.

    PubMed

    Rocha, Diogo L; Machado, Marcos C; Melchert, Wanessa R

    2014-11-01

    A flow-based system with solenoid micro-pumps and long path-length spectrophotometry for bromate and bromide determination in drinking water is proposed. The method is based on the formation of an unstable dye from the reaction between bromate, 2-(5-bromo-2-pyridylazo)-5-(diethylamino)phenol (5-Br-PADAP) and thiocyanate ions. A multivariate optimization was carried out. A linear response was observed between 5.0 and 100 µg L⁻¹ BrO₃⁻ and the detection limit was estimated as 2.0 µg L⁻¹ (99.7% confidence level). The coefficient of variation (n=20) and sampling rate were estimated as 1.0% and 40 determinations per hour, respectively. Reagent consumption was estimated as 0.17 µg of 5-Br-PADAP and 230 µg of NaSCN per measurement, generating 6.0 mL of waste. Bromide determination was carried out after UV-assisted conversion with K₂S₂O₈ using 300 µL of sample within the range 20-400 µg L⁻¹ Br⁻. The generated bromate was then determined by the proposed flow system. The results for tap and commercial mineral water samples agreed with those obtained with the reference procedure at the 95% confidence level. The proposed procedure is therefore a sensitive, environmentally friendly and reliable alternative for inorganic bromine speciation.
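
    The calibration arithmetic behind a linear range and a 99.7%-confidence (3σ) detection limit can be sketched as follows. The standard concentrations, absorbances, and blank readings are invented for illustration, not the paper's data.

```python
import statistics

def calibration(conc, absorbance):
    """Ordinary least-squares slope and intercept for a linear calibration."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(absorbance) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    slope = sum((x - mx) * (y - my) for x, y in zip(conc, absorbance)) / sxx
    return slope, my - slope * mx

# Hypothetical bromate standards spanning a 5-100 ug/L working range
conc   = [5.0, 25.0, 50.0, 75.0, 100.0]
absorb = [0.010, 0.050, 0.100, 0.150, 0.200]   # perfectly linear, for the sketch
slope, intercept = calibration(conc, absorb)

blanks = [0.0005, 0.0008, 0.0002, 0.0007, 0.0003]  # replicate blank readings
lod = 3 * statistics.stdev(blanks) / slope         # 3-sigma detection limit, ug/L
```

The 3σ convention corresponds to the 99.7% confidence level quoted in the abstract: a signal three blank standard deviations above the blank is taken as the smallest reliably detectable concentration.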

  20. Analysis of rehabilitation procedure following arthroplasty of the knee with the use of complete endoprosthesis

    PubMed Central

    Wilk-Frańczuk, Magdalena; Tomaszewski, Wiesław; Zemła, Jerzy; Noga, Henryk; Czamara, Andrzej

    2011-01-01

    Summary Background The use of endoprosthesis in arthroplasty requires adaptation of rehabilitation procedures in order to reinstate the correct model of gait, which enables the patient to recover independence and full functionality in everyday life, which in turn results in an improvement in the quality of life. Material/Methods We studied 33 patients following an initial total arthroplasty of the knee involving endoprosthesis. The patients were divided into two groups according to age. The range of movement within the knee joints was measured for all patients, along with muscle strength and the subjective sensation of pain on a VAS, and the time required to complete the ‘up and go’ test was measured. The gait model and movement ability were evaluated. The testing was conducted at baseline and after completion of the rehabilitation exercise cycle. Results No significant differences were noted between the groups in the tests of the range of movement in the operated joint or muscle strength acting on the knee joint. Muscle strength was similar in both groups. In the ‘up and go’ task the time needed to complete the test was 2.9 seconds shorter after rehabilitation in Group 1 (average age 60.4), and 4.5 seconds shorter in Group 2 (average age 73.1). Conclusions The physiotherapy procedures we applied, following arthroplasty of the knee with cemented endoprosthesis, brought about good results in both research groups of older patients. PMID:21358604

  1. Finite element analysis of donning procedure of a prosthetic transfemoral socket.

    PubMed

    Lacroix, Damien; Patiño, Juan Fernando Ramírez

    2011-12-01

    Lower limb amputation is a severe psychological and physical event in a patient. A prosthetic solution can be provided but should respond to a patient-specific need to accommodate for the geometrical and biomechanical specificities. A new approach to calculate the stress-strain state at the interaction between the socket and the stump of five transfemoral amputees is presented. In this study the socket donning procedure is modeled using an explicit finite element method based on the patient-specific geometry obtained from CT and laser scan data. Over stumps the mean maximum pressure is 4 kPa (SD 1.7) and the mean maximum shear stresses are 1.4 kPa (SD 0.6) and 0.6 kPa (SD 0.3) in longitudinal and circumferential directions, respectively. Locations of the maximum values are according to pressure zones at the sockets. The stress-strain states obtained in this study can be considered more reliable than others, since there are normal and tangential stresses associated to the socket donning procedure.

  2. A Cut-Based Procedure For Document-Layout Modelling And Automatic Document Analysis

    NASA Astrophysics Data System (ADS)

    Dengel, Andreas R.

    1989-03-01

    With the growing degree of office automation and the decreasing costs of storage devices, it becomes more and more attractive to store optically scanned documents like letters or reports in an electronic form. Therefore the need for a good paper-computer interface becomes increasingly important. This interface must convert paper documents into an electronic representation that not only captures their contents, but also their layout and logical structure. We propose a procedure to describe the layout of a document page by dividing it recursively into nested rectangular areas. A semantic meaning is assigned to each area by means of logical labels. The procedure is used as a basis for mapping a hierarchical document layout onto the semantic meaning of the parts of the document. We analyse the layout of a document using a best-first search in this tessellation structure. The search is directed by a measure of similarity between the layout pattern in the model and the layout of the actual document. The validity of a hypothesis for the semantic labelling of a layout block can then be verified; the result either supports the hypothesis or initiates the generation of a new one. The method has been implemented in Common Lisp on a SUN 3/60 workstation and has been run on a large population of office documents. The results obtained have been very encouraging and have convincingly confirmed the soundness of the approach.
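
    The recursive division into nested rectangular areas is the classic X-Y cut idea, which can be sketched as follows: split the page's blocks at the widest horizontal or vertical whitespace gap and recurse on each side. This is a generic illustration (in Python, with an invented letter-like page), not the paper's Common Lisp implementation.

```python
def xy_cut(blocks, min_gap=10):
    """Recursive X-Y cut over layout blocks (x0, y0, x1, y1):
    split at the widest whitespace gap, yielding nested rectangular areas."""
    def widest_gap(intervals):
        intervals = sorted(intervals)
        best, end = None, intervals[0][1]
        for a, b in intervals[1:]:
            if a - end >= min_gap and (best is None or a - end > best[1] - best[0]):
                best = (end, a)
            end = max(end, b)
        return best

    for axis in (1, 0):  # try a horizontal cut (y) first, then a vertical one (x)
        gap = widest_gap([(b[axis], b[axis + 2]) for b in blocks])
        if gap:
            mid = (gap[0] + gap[1]) / 2
            before = [b for b in blocks if b[axis + 2] <= mid]
            after = [b for b in blocks if b[axis] >= mid]
            return [xy_cut(before, min_gap), xy_cut(after, min_gap)]
    return blocks  # no cut possible: a leaf region

# A letter-like page: header, body paragraph, signature block
page = [(50, 20, 550, 60),     # header
        (50, 120, 550, 300),   # body
        (400, 360, 550, 400)]  # signature
tree = xy_cut(page)
```

The resulting nested lists mirror the hierarchical layout the abstract describes; a labelling step would then attach a logical role (header, body, signature) to each leaf region.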

  3. Physics faculty beliefs and values about the teaching and learning of problem solving. II. Procedures for measurement and analysis

    NASA Astrophysics Data System (ADS)

    Henderson, Charles; Yerushalmi, Edit; Kuo, Vince H.; Heller, Kenneth; Heller, Patricia

    2007-12-01

    To identify and describe the basis upon which instructors make curricular and pedagogical decisions, we have developed an artifact-based interview and an analysis technique based on multilayered concept maps. The policy-capturing technique used in the interview asks instructors to make judgments about concrete instructional artifacts similar to those they are likely to encounter in their teaching environment. The analysis procedure alternately employs an a priori systems-view analysis and an emergent categorization to construct a multilayered concept map, which is a hierarchically arranged set of concept maps in which child maps include more details than parent maps. Although our goal was to develop a model of physics faculty beliefs about the teaching and learning of problem solving in the context of an introductory calculus-based physics course, the techniques described here are applicable to a variety of situations in which instructors make decisions that influence teaching and learning.

  4. An iterative sensory procedure to select odor-active associations in complex consortia of microorganisms: application to the construction of a cheese model.

    PubMed

    Bonaïti, C; Irlinger, F; Spinnler, H E; Engel, E

    2005-05-01

    The aim of this study was to develop and validate an iterative procedure based on odor assessment to select odor-active associations of microorganisms from a starting association of 82 strains (G1), which were chosen to be representative of Livarot cheese biodiversity. A 3-step dichotomous procedure was applied to reduce the starting association G1. At each step, 3 methods were used to evaluate the odor proximity between mother (n strains) and daughter (n/2 strains) associations: a direct assessment of odor dissimilarity using an original bidimensional scale system, and 2 indirect methods based on comparisons of odor profiles or hedonic scores. Odor dissimilarity ratings and odor profiles gave reliable and sometimes complementary criteria to select G3 and G4 at the first iteration, G31 and G42 at the second iteration, and G312 and G421 at the final iteration. Principal component analysis of the odor profile data permitted the interpretation, at least in part, of the 2D multidimensional scaling representation of the similarity data. The second part of the study was dedicated to 1) validating the choice of the dichotomous procedure made at each iteration, and 2) evaluating the overall magnitude of the odor differences that may exist between G1 and its subsequent simplified associations. The strategy consisted of assessing odor similarity between the 13 cheese models by comparing the contents of their odor-active compounds. Using a purge-and-trap gas chromatography-olfactometry/mass spectrometry device, 50 potent odorants were identified in models G312 and G421 and in a typical Protected Denomination of Origin Livarot cheese. Their contributions to the odor profiles of both selected model cheeses are discussed. These compounds were quantified by purge-and-trap gas chromatography-mass spectrometry in the 13 products, and the normalized data matrix was transformed to a between-product distance matrix. This instrumental assessment of odor similarities allowed validation of the choice
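
    The final step, turning a normalized odorant-quantification matrix into a between-product distance matrix, can be sketched as follows. Euclidean distance is assumed here for illustration (the abstract does not state which metric was used), and the concentration values are invented:

```python
import math
from typing import List

def distance_matrix(products: List[List[float]]) -> List[List[float]]:
    """Pairwise Euclidean distances between the rows of a
    (products x normalized odorant concentrations) matrix."""
    n = len(products)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.sqrt(sum((a - b) ** 2
                                 for a, b in zip(products[i], products[j])))
            d[i][j] = d[j][i] = dist          # distance matrices are symmetric
    return d

# Three hypothetical products, each described by two normalized odorant levels.
concentrations = [[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]]
d = distance_matrix(concentrations)
print(d[0][1])  # 5.0
```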

  5. Systems Analysis in Designing Toilet Training Procedures for Developmentally Disabled Persons.

    ERIC Educational Resources Information Center

    Brooking, Emerson D.; Anderson, Dana M.

    The use of systems analysis may help child developmental specialists improve the success rates of toilet training programs with developmentally disabled children. Such a systems analysis includes the sociocultural, family, and/or individual ecosystems of the individual. Two detailed case studies of mentally retarded elementary school age children…

  6. Using Functional Analysis Procedures To Monitor Medication Effects in an Outpatient and School Setting.

    ERIC Educational Resources Information Center

    Anderson, Mark T.; Vu, Chau; Derby, K. Mark; Goris, Mary; McLaughlin, T. F.

    2002-01-01

    Functional analysis methods were used to monitor medication used to reduce the vocal and physical tics of a child with Tourette Syndrome. Post-medication results demonstrated a reduced level of tics by the participant. Although preliminary, the findings suggest that functional analysis methods can be used to monitor the effects of medication in…

  7. Procedures for Conducting a Job Analysis: A Manual for the COMTASK Database.

    ERIC Educational Resources Information Center

    Alvic, Fadia M.; Newkirk-Moore, Susan

    This manual provides the information needed to conduct a job analysis and to enter or update job analysis information in the Computerized Task Inventory (COMTASK) database. Chapter I presents the purpose and organization of the manual. The second chapter provides a brief background on the purpose and design of COMTASK, a definition of terms used…

  8. Assessing physical activity intensity by video analysis.

    PubMed

    Silva, P; Santiago, C; Reis, L P; Sousa, A; Mota, J; Welk, G

    2015-05-01

    Assessing physical activity (PA) is a challenging task and many different approaches have been proposed. Direct observation (DO) techniques can objectively code both the behavior and the context in which it occurred; however, they have significant limitations, such as the cost and burden associated with collecting and processing data. Therefore, this study evaluated the utility of an automated video analysis system (CAM) designed to record and discriminate the intensity of PA using a subject-tracking methodology. The relative utility of the CAM system and DO were compared with criterion data from an objective accelerometry-based device (ActiGraph GT3X+). Eight 10-year-old children (three girls and five boys) wore the GT3X+ during a standard basketball session. PA was analyzed by two observers using the SOPLAY instrument and by the CAM system. The GT3X+ and the CAM were both set up to collect data at 30 Hz, while the DO was performed every two minutes, with 10 s of observation for each gender. The GT3X+ data were processed using the Evenson cut points, and the outcome measure was the percentage of time spent in different intensities of PA. The CAM data were processed similarly, using the same speed thresholds as were used in establishing the Evenson cut points (light: <2 mph; walking: 2-4 mph; very active: >4 mph). Similar outcomes were computed from the SOPLAY default analyses. A chi-square test was used to test differences in the percentage of time in the three intensity zones (light, walking and very active). Yates' correction was used to prevent overestimation of statistical significance for small samples. When compared with the GT3X+, the CAM had better results than the SOPLAY. The chi-square test yielded the following pairwise comparisons: CAM versus GT3X+, χ²(5) = 24.18, p < .001; SOPLAY2 versus GT3X+, χ²(5) = 144.44, p < .001; SOPLAY1 versus GT3X+, χ²(5) = 119.55, p < .001. The differences were smaller between CAM and GT3X+
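
    A chi-square statistic with Yates' continuity correction, of the kind used above to compare time-in-intensity distributions, takes only a few lines to compute. The observed/expected percentages below are purely illustrative, not the study's data:

```python
def yates_chi2(observed, expected):
    """Chi-square statistic with Yates' continuity correction:
    sum over cells of (max(|O - E| - 0.5, 0))^2 / E."""
    return sum(max(abs(o - e) - 0.5, 0.0) ** 2 / e
               for o, e in zip(observed, expected))

# Illustrative percentages of time in light / walking / very-active zones.
obs = [30.0, 50.0, 20.0]   # e.g. video-system estimates
exp = [25.0, 55.0, 20.0]   # e.g. accelerometer criterion
print(round(yates_chi2(obs, exp), 3))  # 1.178
```

    The max(..., 0) clamp matters: without it, cells where the observed and expected values agree would contribute a spurious 0.25/E term.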

  9. Integrated Data Collection Analysis (IDCA) Program - Mixing Procedures and Materials Compatibility

    SciTech Connect

    Olinger, Becky D.; Sandstrom, Mary M.; Warner, Kirstin F.; Sorensen, Daniel N.; Remmers, Daniel L.; Moran, Jesse S.; Shelley, Timothy J.; Whinnery, LeRoy L.; Hsu, Peter C.; Whipple, Richard E.; Kashgarian, Michaele; Reynolds, John G.

    2011-01-14

    Three mixing procedures have been standardized for the IDCA proficiency test: solid-solid, solid-liquid, and liquid-liquid. Due to the variety of precursors used in formulating the materials for the test, these three mixing methods have been designed to address all combinations of materials. Hand mixing is recommended for quantities under 10 grams and jar-mill mixing for quantities over 10 grams. Consideration must also be given to the type of container used for the mixing, due to the wide range of chemical reactivity of the precursors and mixtures. Eight website sources from container and chemical manufacturers have been consulted, and compatible materials have been compiled as a resource for selecting containers made of materials stable to the mixtures. In addition, container materials used in practice by the participating laboratories are discussed. Consulting chemical compatibility tables is highly recommended for each operation by each individual engaged in testing the materials in this proficiency test.

  10. Development and Analysis of Psychomotor Skills Metrics for Procedural Skills Decay.

    PubMed

    Parthiban, Chembian; Ray, Rebecca; Rutherford, Drew; Zinn, Mike; Pugh, Carla

    2016-01-01

    In this paper we develop and analyze the metrics associated with a force production task involving a stationary target, with the help of an advanced VR system and a Force Dimension Omega 6 haptic device. We study the effects of force magnitude and direction on several metrics, namely path length, movement smoothness, velocity and acceleration patterns, reaction time, and overall error in achieving the target. Data were collected from 47 participants, all residents. Results show a positive correlation between the maximum force applied and both the deflection error and the velocity, while forces of higher magnitude reduced path length and increased smoothness, demonstrating their stabilizing characteristics. This approach paves the way to assessing and modeling procedural skills decay. PMID:27046593
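
    Two of the metrics named above, path length and a simple smoothness proxy, can be computed directly from sampled tool positions. The straightness ratio used here is a simplified stand-in under our own assumptions, not necessarily the smoothness measure the authors used, and the trajectory is invented:

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def path_length(points: List[Point]) -> float:
    """Total length of the polyline through the sampled positions."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def straightness(points: List[Point]) -> float:
    """Straight-line distance divided by path length (1.0 = perfectly
    straight); a crude smoothness/efficiency proxy."""
    return math.dist(points[0], points[-1]) / path_length(points)

trajectory = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
print(path_length(trajectory))             # 2.0
print(round(straightness(trajectory), 4))  # 0.7071
```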

  11. [Analysis of reconstruction procedures for defects of the mouth floor. Report of 96 cases].

    PubMed

    Michelet, V; Duroux, S; Majoufre, C; Caix, P; Siberchicot, F

    1996-12-01

    The aim of this study was to assess the reconstruction of floor-of-the-mouth defects after cancer surgery. The medical records of 140 patients treated between January 1st, 1987 and December 31st, 1995 were reviewed. Ninety-six patients had primary reconstruction: there were 82 cutaneous or osteomyocutaneous flaps and 14 microsurgical transfers. Among these patients, 15 had titanium mandibular reconstruction plates. The reconstruction procedures and postoperative follow-up were evaluated. Healing by first intention is appropriate for superficial soft tissue defects. The nasolabial flap is used only for small mucosal defects. A forearm flap should be the first-choice treatment for large soft tissue defects owing to its plasticity and reliable vessels. Segmental mandibular resections often require mandibular reconstruction. Titanium plates may be used alone or with a cutaneous flap. Tolerance of the plates after radiotherapy is very good, and they are an effective method of reconstruction for fragile patients. PMID:9768172

  13. 31 CFR 1020.520 - Special information sharing procedures to deter money laundering and terrorist activity for banks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULE FOR BANKS Special Information Sharing Procedures To Deter...

  14. 31 CFR 1020.520 - Special information sharing procedures to deter money laundering and terrorist activity for banks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULE FOR BANKS Special Information Sharing Procedures To Deter...

  15. 31 CFR 1020.520 - Special information sharing procedures to deter money laundering and terrorist activity for banks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULE FOR BANKS Special Information Sharing Procedures To Deter...

  16. 31 CFR 1020.520 - Special information sharing procedures to deter money laundering and terrorist activity for banks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Finance: Treasury Regulations Relating to Money and Finance (Continued) FINANCIAL CRIMES ENFORCEMENT NETWORK, DEPARTMENT OF THE TREASURY RULE FOR BANKS Special Information Sharing Procedures To Deter...

  17. Effective dose analysis of three-dimensional rotational angiography during catheter ablation procedures

    NASA Astrophysics Data System (ADS)

    Wielandts, J.-Y.; Smans, K.; Ector, J.; De Buck, S.; Heidbüchel, H.; Bosmans, H.

    2010-02-01

    Three-dimensional rotational angiography (3DRA) is increasingly used during cardiac ablation procedures. Compared with 2D angiography, a large series of images is acquired, creating the potential for high radiation doses. The aim of the present study was to quantify patient-specific effective doses. We developed a computer model to accurately calculate the organ doses and the effective dose incurred during 3DRA image acquisition. The computer model simulates the exposure geometry and uses the actual exposure parameters, including the variation in tube voltage and current that is realized through the automatic exposure control (AEC). We performed 3DRA dose calculations in 42 patients referred for ablation on the Siemens Axiom Artis DynaCT system (Erlangen, Germany). Organ doses and effective dose were calculated separately for all projections in the course of the C-arm rotation. The influence of patient body mass index (BMI), dose-area product (DAP), collimation and dose-per-frame (DPF) rate setting on the calculated doses was also analysed. The effective dose was found to be 5.5 ± 1.4 mSv according to ICRP 60 and 6.6 ± 1.8 mSv according to ICRP 103. Effective dose was inversely proportional to BMI, while DAP was nearly BMI independent; no simple conversion coefficient between DAP and effective dose could be derived. DPF reduction did not result in a proportional decrease in effective dose. These paradoxical findings were explained by the settings of the AEC and the limitations of the x-ray tube. Collimation reduced the effective dose by more than 20%. Three-dimensional rotational angiography is thus associated with a definite but acceptable radiation dose that can be calculated individually for each patient, with BMI as a predictor of the effective dose. The dose reduction achieved with collimation suggests that its use is imperative during 3DRA procedures.
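
    The reported inverse relationship between effective dose and BMI can be illustrated by a least-squares fit of E = a/BMI + b, i.e. a linear regression on the transformed variable x = 1/BMI. The patient data below are synthetic, generated from a made-up coefficient, not the study's 42 patients:

```python
def fit_inverse_bmi(bmi, dose):
    """Least-squares fit of dose = a / BMI + b, via ordinary linear
    regression on x = 1 / BMI. Returns (a, b)."""
    xs = [1.0 / v for v in bmi]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(dose) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, dose))
    a = sxy / sxx           # slope in the 1/BMI variable
    b = my - a * mx         # intercept
    return a, b

# Synthetic patients generated from dose = 120/BMI + 1 mSv (hypothetical).
bmi = [20.0, 25.0, 30.0, 35.0]
dose = [120.0 / v + 1.0 for v in bmi]
a, b = fit_inverse_bmi(bmi, dose)
print(round(a), round(b))  # 120 1
```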

  18. Beef, chicken and lamb fatty acid analysis--a simplified direct bimethylation procedure using freeze-dried material.

    PubMed

    Lee, M R F; Tweed, J K S; Kim, E J; Scollan, N D

    2012-12-01

    When fractionation of meat lipids is not required, procedures such as saponification can be used to extract total fatty acids, reducing reliance on toxic organic solvents. However, saponification of muscle fatty acids is laborious, requiring extended heating times and a second methylation step to convert the extracted fatty acids to fatty acid methyl esters prior to gas chromatography. The development of a more rapid direct methylation procedure would therefore be of merit. The use of freeze-dried material for analysis is common and allows for greater homogenisation of the sample. The present study investigated the potential of using freeze-dried muscle samples and a direct bimethylation to analyse the total fatty acids of meat (beef, chicken and lamb), in comparison with a saponification procedure followed by bimethylation. The two methods compared favourably for all major fatty acids measured. There was a minor difference in relation to the C18:1 trans-10 isomer, with a greater (P<0.05) recovery with saponification; however, the difference was numerically small and likely a result of approaching the limits of isomer identification by single-column gas chromatography. Differences (P<0.001) between species were found for all fatty acids measured, with no interaction effects. The described technique offers a simplified, quick and reliable alternative to saponification for analysing the total fatty acids of muscle samples.

  19. Analysis of declarative and procedural knowledge in volleyball according to the level of practice and players' age.

    PubMed

    Gil, Alexander; Moreno, M Perla; García-González, Luís; Moreno, Alberto; del Villar, Fernando

    2012-10-01

    The main objective of the research was to analyse the cognitive expertise of volleyball players according to their level of practice and age, as well as to verify the differences in knowledge between individuals of the same age with different levels of practice. The study sample comprised 535 individuals aged 12 to 16 years. The independent variables were the level of practice, i.e., playing category in training and in competition (Under-14 and Under-16), and age. The dependent variables were declarative knowledge and procedural knowledge. An analysis of variance was performed to examine the influence of the level of practice on the declarative and procedural knowledge of volleyball players in the training stages. There were significant differences in both declarative and procedural knowledge according to the level of practice. Significant differences were also observed between consecutive ages at different levels of practice. These results show that the level of practice in training and competition is a more relevant factor than age in the development of sport-specific knowledge. PMID:23265024
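
    The analysis of variance used here reduces to comparing between-group and within-group mean squares. A minimal one-way ANOVA F statistic, applied to made-up knowledge scores for three practice levels, can be written as:

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA: the ratio of the between-group
    mean square to the within-group mean square."""
    k = len(groups)                       # number of groups
    n = sum(len(g) for g in groups)       # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

# Hypothetical procedural-knowledge scores for three practice levels.
scores = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [5.0, 6.0, 7.0]]
print(one_way_anova_f(scores))  # 13.0
```

    The resulting F is then compared against an F(k-1, n-k) distribution to decide significance.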

  20. Development of mixed time partition procedures for thermal analysis of structures

    NASA Technical Reports Server (NTRS)

    Liu, W. K.

    1982-01-01

    The computational methods used to predict and optimize the thermal-structural behavior of aerospace vehicle structures are reviewed. In general, two classes of algorithms, implicit and explicit, are used in transient thermal analysis of structures. Each of these two methods has its own merits. Due to the different time scales of the mechanical and thermal responses, the selection of a time integration method can be a difficult yet critical factor in the efficient solution of such problems. Therefore mixed time integration methods for transient thermal analysis of structures are being developed. This proposed methodology would be readily adaptable to existing computer programs for structural thermal analysis.
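
    The implicit/explicit trade-off can be seen on a single thermal relaxation equation dT/dt = -k(T - T_env): the explicit update is cheap per step but diverges when k·dt > 2, while the implicit update is unconditionally stable. This is a minimal scalar sketch, not the paper's mixed-time algorithm for full structural models:

```python
def explicit_step(T, k, T_env, dt):
    """Forward (explicit) Euler: stable only for k*dt < 2."""
    return T + dt * (-k * (T - T_env))

def implicit_step(T, k, T_env, dt):
    """Backward (implicit) Euler: unconditionally stable."""
    return (T + dt * k * T_env) / (1.0 + dt * k)

k, T_env = 10.0, 0.0

# Large time step (k*dt = 5): explicit blows up, implicit decays to T_env.
Te = Ti = 1.0
for _ in range(10):
    Te = explicit_step(Te, k, T_env, 0.5)
    Ti = implicit_step(Ti, k, T_env, 0.5)
print(abs(Te) > 1e5, abs(Ti) < 1e-6)  # True True
```

    A mixed method partitions the mesh so that stiff (thermally fast) regions use the implicit update while the rest advances explicitly with a cheap step.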

  1. Tests of an alternate mobile transporter and extravehicular activity assembly procedure for the Space Station Freedom truss

    NASA Technical Reports Server (NTRS)

    Heard, Walter L., Jr.; Watson, Judith J.; Lake, Mark S.; Bush, Harold G.; Jensen, J. Kermit; Wallsom, Richard E.; Phelps, James E.

    1992-01-01

    Results are presented from a ground test program of an alternate mobile transporter (MT) concept and extravehicular activity (EVA) assembly procedure for the Space Station Freedom (SSF) truss keel. A three-bay orthogonal tetrahedral truss beam consisting of 44 2-in-diameter struts and 16 nodes was assembled repeatedly in neutral buoyancy by pairs of pressure-suited test subjects working from astronaut positioning devices (APD's) on the MT. The truss bays were cubic with edges 15 ft long. All the truss joint hardware was found to be EVA compatible. The average unit assembly time for a single pair of experienced test subjects was 27.6 sec/strut, which is about half the time derived from other SSF truss assembly tests. A concept for integration of utility trays during truss assembly is introduced and demonstrated in the assembly tests. The concept, which requires minimal EVA handling of the trays, is shown to have little impact on overall assembly time. The results of these tests indicate that by using an MT equipped with APD's, rapid EVA assembly of a space station-size truss structure can be expected.

  2. Mutagenicity of selected sulfonated azo dyes in the Salmonella/microsome assay: use of aerobic and anaerobic activation procedures.

    PubMed

    Brown, J P; Dietrich, P S

    1983-03-01

    A selection of 16 sulfonated azo dyes of both the monoazo type and diazo dyes based on benzidine, o-tolidine and o-dianisidine were assayed for mutagenicity in Salmonella typhimurium strains TA98 and TA100 employing both aerobic and anaerobic preincubation procedures. 3 food dyes, FD & C Red No. 40 and Yellows No. 5 and No. 6 were non-mutagenic in all tests. 5 dyes were mutagenic with aerobic treatment (trypan blue, Pontacyl Sky Blue 4BX, Congo Red, Eriochrome Blue Black B, dimethylaminoazobenzene) and 6 were mutagenic aerobically with riboflavin and cofactors (Deltapurpurin, trypan blue, Pontacyl Sky Blue 4BX, Congo Red, methyl orange, Ponceau 3R). Anaerobic preincubation involving enzymatic reduction of the dyes led to a different pattern of mutagenicity, with trypan blue giving much enhanced mutagenicity; Eriochrome Blue Black B, Pontacyl Sky Blue 4BX, Deltapurpurin and Congo Red exhibiting similar activity to aerobic preincubation; and methyl orange and Ponceau 3R yielding no mutagenicity. The results are interpreted with respect to an hypothesis involving partial reduction of the azo bond under differing degrees of aerobiosis via azo-anion radicals and hydrazo intermediates.

  3. Incidence of adverse events in paediatric procedural sedation in the emergency department: a systematic review and meta-analysis

    PubMed Central

    Bellolio, M Fernanda; Puls, Henrique A; Anderson, Jana L; Gilani, Waqas I; Murad, M Hassan; Barrionuevo, Patricia; Erwin, Patricia J; Wang, Zhen; Hess, Erik P

    2016-01-01

    Objective and design We conducted a systematic review and meta-analysis to evaluate the incidence of adverse events in the emergency department (ED) during procedural sedation in the paediatric population. Randomised controlled trials and observational studies from the past 10 years were included. We adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. Setting ED. Participants Children. Interventions Procedural sedation. Outcomes Adverse events such as vomiting, agitation, hypoxia and apnoea. Meta-analysis was performed with a random-effects model and results are reported as incidence rates with 95% CIs. Results A total of 1177 studies were retrieved for screening and 258 were selected for full-text review. 41 studies reporting on 13 883 procedural sedations in 13 876 children (≤18 years) were included. The most common adverse events (all reported per 1000 sedations) were: vomiting 55.5 (CI 45.2 to 65.8), agitation 17.9 (CI 12.2 to 23.7), hypoxia 14.8 (CI 10.2 to 19.3) and apnoea 7.1 (CI 3.2 to 11.0). The need to intervene with either a bag valve mask, an oral airway or positive pressure ventilation occurred in 5.0 per 1000 sedations (CI 2.3 to 7.6). The incidences of severe respiratory events were: 34 cases of laryngospasm among 8687 sedations (2.9 per 1000 sedations, CI 1.1 to 4.7; absolute rate 3.9 per 1000 sedations), 4 intubations among 9136 sedations and 0 cases of aspiration among 3326 sedations. 33 of the 34 cases of laryngospasm occurred in patients who received ketamine. Conclusions Serious adverse respiratory events are very rare in paediatric procedural sedation in the ED. Emesis and agitation are the most frequent adverse events. Hypoxia, a late indicator of respiratory depression, occurs in 1.5% of sedations. Laryngospasm, though rare, happens most frequently with ketamine. The results of this study provide quantitative risk estimates to facilitate shared decision-making, risk communication, informed consent and
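
    The per-1000 figures correspond to simple event proportions. A crude single-sample version, using a Wald normal-approximation CI rather than the random-effects pooling the authors performed, can be sketched with the reported 34 laryngospasms in 8687 sedations:

```python
import math

def incidence_per_1000(events, n, z=1.96):
    """Event rate per 1000 with a Wald (normal-approximation) 95% CI.
    A rough single-sample sketch, not a random-effects pooled estimate."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)
    return 1000 * p, 1000 * (p - z * se), 1000 * (p + z * se)

rate, lo, hi = incidence_per_1000(34, 8687)
print(round(rate, 1))  # 3.9  (matches the reported absolute rate)
```

    The pooled random-effects estimate (2.9 per 1000) differs from this absolute rate because it weights studies rather than pooling raw counts.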

  4. Nickel-catalyzed proton-deuterium exchange (HDX) procedures for glycosidic linkage analysis of complex carbohydrates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The structural analysis of complex carbohydrates typically requires the assignment of three parameters: monosaccharide composition, the position of glycosidic linkages between monosaccharides, and the position and nature of non-carbohydrate substituents. The glycosidic linkage positions are often de...

  5. Robotic right colectomy: A worthwhile procedure? Results of a meta-analysis of trials comparing robotic versus laparoscopic right colectomy

    PubMed Central

    Petrucciani, Niccolò; Sirimarco, Dario; Nigri, Giuseppe R.; Magistri, Paolo; La Torre, Marco; Aurello, Paolo; D’Angelo, Francesco; Ramacciato, Giovanni

    2015-01-01

    BACKGROUND: Robotic right colectomy (RRC) is a complex procedure, offered to selected patients at institutions highly experienced with the procedure. It is still not clear whether this approach is worthwhile in enhancing patient recovery and reducing post-operative complications compared with laparoscopic right colectomy (LRC). The literature is still fragmented and no meta-analyses have been conducted to compare the two procedures. This work aims at reducing this gap in the literature, in order to draw some preliminary conclusions on the differences and similarities between RRC and LRC, focusing on short-term outcomes. MATERIALS AND METHODS: A systematic literature review was conducted to identify studies comparing RRC and LRC, and a meta-analysis was performed using a random-effects model. Peri-operative outcomes (e.g., morbidity, mortality, anastomotic leakage rates, blood loss, operative time) constituted the study end points. RESULTS: Six studies, including 168 patients undergoing RRC and 348 patients undergoing LRC, were considered suitable. The patients in the two groups were similar with respect to sex, body mass index, presence of malignant disease and previous abdominal surgery, and different with respect to age and American Society of Anesthesiologists score. There were no statistically significant differences between RRC and LRC regarding estimated blood loss, rate of conversion to open surgery, number of retrieved lymph nodes, development of anastomotic leakage and other complications, overall morbidity, rates of reoperation, overall mortality, or hospital stay. RRC resulted in significantly longer operative time. CONCLUSIONS: The RRC procedure is feasible, safe, and effective in selected patients. However, operative times are longer compared with LRC, and no advantages in peri-operative and post-operative outcomes have been demonstrated with the use of the robotic surgical system. PMID:25598595

  6. A Digital Elevation Model for Seaside, Oregon: Procedures, Data Sources, and Analysis

    NASA Astrophysics Data System (ADS)

    Venturato, A. J.

    2004-12-01

    As part of a pilot study to modernize Flood Insurance Rate Maps for the Federal Emergency Management Agency (FEMA), a digital elevation model (DEM) was developed for the purpose of modeling tsunami inundation for Seaside, Oregon. The DEM consists of elevation values with a horizontal grid spacing of 1/3 arc second, or approximately 10 meters. The DEM was generated from several topographic and bathymetric data sources, which presented significant processing challenges. These challenges included conversion to a single specified projection, units, horizontal datum, and vertical datum; analysis and removal of errant data from hydrographic, topographic, and LIDAR surveys; and a point-by-point analysis of overlapping data sources. Data were collected from the National Oceanic and Atmospheric Administration National Ocean Service and National Geophysical Data Center, the U.S. Geological Survey, the Oregon Geospatial Data Center, the University of Oregon, and the Oregon Department of Geology and Mineral Industries. Data were converted into formats compatible with ESRI ArcGIS 3.3 software. ArcGIS was used for spatial analysis, error correction, and surface grid development using triangulated irregular networks. Post-processing involved a consistency analysis and comparison with the original data and control data sources. The final DEM was compared with a previous DEM developed for tsunami inundation modeling in 1997. Significant shoreline differences were found between the DEMs, prompting an analysis of the shoreline changes around the mouth of the Necanicum River. The shoreline analysis includes a spatial analysis of digital orthophotos from the recent past and a review of historical accretion and erosion rates along the Columbia River littoral cell.
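
    The stated grid spacing, 1/3 arc second ≈ 10 m, follows from the length of one degree of latitude (about 111.1 km under a spherical-Earth approximation); the east-west spacing additionally shrinks with the cosine of latitude. A rough check, with 46° N standing in for Seaside's latitude:

```python
import math

METERS_PER_DEG_LAT = 111_132.0  # mean meridian arc, spherical approximation

def grid_spacing_m(arcsec, lat_deg):
    """Approximate north-south and east-west ground spacing of a grid cell."""
    ns = METERS_PER_DEG_LAT * arcsec / 3600.0
    ew = ns * math.cos(math.radians(lat_deg))  # meridians converge poleward
    return ns, ew

ns, ew = grid_spacing_m(1.0 / 3.0, 46.0)  # Seaside, OR is near 46 deg N
print(round(ns, 1), round(ew, 1))  # 10.3 7.1
```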

  7. Complications of percutaneous vertebroplasty: An analysis of 1100 procedures performed in 616 patients.

    PubMed

    Saracen, Agnieszka; Kotwica, Zbigniew

    2016-06-01

    Percutaneous vertebroplasty (PVP) is a minimally invasive procedure widely used for the treatment of pain due to vertebral fractures of different origins: osteoporotic, traumatic, or neoplastic. Although PVP is minimally invasive, complications are not rare; in most cases, however, they are not clinically significant. The most frequent is cement leakage, which can occur into veins, paravertebral soft tissue, or the intervertebral disk, or into the spinal canal, affecting the foraminal area or epidural space. We analyzed the results of treatment and the complications of vertebroplasty performed with polymethylmethacrylate (PMMA) cement on 1100 vertebrae, with special regard to the severity of complications and their eventual clinical manifestation. One thousand one hundred PVP procedures performed in 616 patients were analyzed. There were 468 (76%) women and 148 (24%) men, aged 24 to 94 years (mean 68 years). Of the 1100 procedures, 794 treated osteoporotic fractures, 137 treated fractures due to malignant disease, and 69 treated traumatic fractures; 100 patients had painful vertebral hemangiomas. Seven hundred twenty-six (66%) lesions were in the thoracic and 374 (34%) in the lumbar area. Results of treatment were assessed using a 10 cm Visual Analogue Scale (VAS) 12 hours after surgery, at 7 days, at 30 days, and then every 6 months, up to 3 years. Before surgery all patients had significant pain of 7 to 10 on the VAS (mean 8.9 cm). Twelve hours after surgery, 602 (97.7%) reported significant relief of pain, with a mean VAS of 2.3 cm. Local complications occurred in 50% of osteoporotic fractures, 34% of neoplastic fractures, 16% of traumatic fractures, and 2% of vertebral hemangiomas. The most common were PMMA leakage into surrounding tissues (20%), paravertebral vein embolism (13%), intradiscal leakage (8%), and PMMA leakage into the spinal canal (0.8%). Results of treatment did not differ between patients with and without complications. From 104 patients who had chest X-ray or CT study performed after surgery

  8. Nickel-Catalyzed Proton-Deuterium Exchange (HDX) Procedures for Glycosidic Linkage Analysis of Complex Carbohydrates.

    PubMed

    Price, Neil P J; Hartman, Trina M; Vermillion, Karl E

    2015-07-21

    The structural analysis of complex carbohydrates typically requires the assignment of three parameters: monosaccharide composition, the position of glycosidic linkages between monosaccharides, and the position and nature of noncarbohydrate substituents. The glycosidic linkage positions are often determined by permethylation analysis, but this can be complicated by high viscosity or poor solubility, resulting in under-methylation. This is a drawback because an under-methylated position may be misinterpreted as the erroneous site of a linkage or substituent. Here, we describe an alternative approach to linkage analysis that makes use of a nonreversible deuterium exchange of C-H protons on the carbohydrate backbone. The exchange reaction is conducted in deuterated water catalyzed by Raney nickel, and results in the selective exchange of C-H protons adjacent to free hydroxyl groups. Hence, the position of the residual C-H protons is indicative of the position of glycosidic linkages or other substituents and can be readily assigned by heteronuclear single quantum coherence nuclear magnetic resonance (HSQC-NMR) or, following suitable derivatization, by gas chromatography-mass spectrometry (GC/MS) analysis. Moreover, because the only changes to the parent sugar are proton/deuterium exchanges, the composition and linkage analysis can be determined in a single step. PMID:26075577

  10. Acylation of Chiral Alcohols: A Simple Procedure for Chiral GC Analysis

    PubMed Central

    Oromí-Farrús, Mireia; Torres, Mercè; Canela, Ramon

    2012-01-01

    The use of iodine as a catalyst and either acetic or trifluoroacetic acid as a derivatizing reagent for determining the enantiomeric composition of acyclic and cyclic aliphatic chiral alcohols was investigated. Optimal conditions were selected according to the molar ratio of alcohol to acid, the reaction time, and the reaction temperature. Afterwards, the configurational stability of the chiral centers was studied. Although no isomerization was observed when acetic acid was used, partial isomerization was detected with trifluoroacetic acid. A series of chiral alcohols of widely varying structural types were then derivatized with acetic acid using the optimal conditions. The resolution of the enantiomeric esters and the free chiral alcohols was measured using a capillary gas chromatograph equipped with a CP Chirasil-DEX CB column. The best resolutions were obtained with 2-pentyl acetates (α = 3.00) and 2-hexyl acetates (α = 1.95). This method provides a very simple and efficient experimental workup procedure for analyzing chiral alcohols by chiral-phase GC. PMID:22649749
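
    As a reminder of how the reported separation factors relate to a chromatogram, here is a minimal sketch of the standard textbook definition (α as the ratio of retention factors); the retention times in the example are illustrative, not values from the paper.

```python
# Separation factor (alpha) between two enantiomer peaks in GC.
# Textbook definition: alpha = k2/k1 with k = (t_R - t_0)/t_0, where
# t_R is a peak's retention time and t_0 the column hold-up time.
# The times used below are illustrative, not data from the study.
def retention_factor(t_r, t_0):
    return (t_r - t_0) / t_0

def separation_factor(t_r1, t_r2, t_0):
    k1 = retention_factor(t_r1, t_0)
    k2 = retention_factor(t_r2, t_0)
    return max(k1, k2) / min(k1, k2)  # conventionally alpha >= 1

print(separation_factor(7.0, 4.0, 1.0))  # k = 6 and 3, so alpha = 2.0
```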

  11. Inverse scattering transform analysis of rogue waves using local periodization procedure

    PubMed Central

    Randoux, Stéphane; Suret, Pierre; El, Gennady

    2016-01-01

    The nonlinear Schrödinger equation (NLSE) stands out as the dispersive nonlinear partial differential equation that plays a prominent role in the modeling and understanding of wave phenomena relevant to many fields of nonlinear physics. The question of random input problems in the one-dimensional and integrable NLSE falls within the framework of integrable turbulence, and the specific question of the formation of rogue waves (RWs) has recently been extensively studied in this context. The determination of exact analytic solutions of the focusing 1D-NLSE prototyping RW events of statistical relevance is now considered a problem of central importance. Here we address this question from the perspective of the inverse scattering transform (IST) method, which relies on the integrable nature of the wave equation. We develop a conceptually new approach to RW classification in which appropriate, locally coherent structures are specifically isolated from a globally incoherent wave train and subsequently analyzed by implementing a numerical IST procedure relying on a spatial periodization of the object under consideration. Using this approach we extend the existing classifications of the prototypes of RWs from standard breathers and their collisions to more general nonlinear modes characterized by their nonlinear spectra. PMID:27385164
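
    For reference, the focusing 1D-NLSE in its standard dimensionless form, together with the Peregrine breather often taken as a rogue-wave prototype; these are standard textbook expressions, not equations quoted from the paper.

```latex
% Focusing 1D nonlinear Schrödinger equation (dimensionless form)
i\,\frac{\partial \psi}{\partial t}
  + \frac{1}{2}\frac{\partial^{2} \psi}{\partial x^{2}}
  + |\psi|^{2}\psi = 0
% Peregrine breather on a unit background, a standard RW prototype
\psi(x,t) = \left[\,1 - \frac{4\,(1 + 2it)}{1 + 4x^{2} + 4t^{2}}\,\right] e^{it}
```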

  13. Methods of analysis by the U.S. Geological Survey National Water Quality Laboratory; comparison of a nitric acid in-bottle digestion procedure to other whole-water digestion procedures

    USGS Publications Warehouse

    Garbarino, John R.; Hoffman, Gerald L.

    1999-01-01

    A hydrochloric acid in-bottle digestion procedure is used to partially digest whole-water samples prior to determining recoverable elements by various analytical methods. The use of hydrochloric acid is problematic for some methods of analysis because of spectral interference. The in-bottle digestion procedure has been modified to eliminate such interference by using nitric acid instead of hydrochloric acid in the digestion. Implications of this modification are evaluated by comparing results for a series of synthetic whole-water samples. Results are also compared with those obtained by using the U.S. Environmental Protection Agency (USEPA, 1994) Method 200.2 total-recoverable digestion procedure. Percentage yields obtained with the nitric acid in-bottle digestion procedure are within 10 percent of the hydrochloric acid in-bottle yields for 25 of the 26 elements determined in two of the three synthetic whole-water samples tested. Differences in percentage yields for the third synthetic whole-water sample were greater than 10 percent for 16 of the 26 elements determined. The USEPA method was the most rigorous for solubilizing elements from particulate matter in all three synthetic whole-water samples. Nevertheless, the variability in the percentage yield obtained by using the USEPA digestion procedure was generally greater than that of the in-bottle digestion procedure, presumably because of the difficulty in controlling the digestion conditions accurately.

  14. Fluorescence Intercalibration Experiment: a Multi-laboratory Comparison of Correction Procedures for Fluorescence Analysis of Dissolved Organic Matter

    NASA Astrophysics Data System (ADS)

    Boehme, J.; Stedmon, C.; Boyd, T.; Chen, R.; Coble, P.; Cooper, W.; Mopper, K.; Wells, M.; Zepp, R.

    2006-12-01

    Measurement of the fluorescence properties of dissolved organic matter (DOM) provides a window into the biological, chemical and physical processes that affect this significant portion of the global carbon pool. Parameters such as fluorescence intensity, quantum yields, peak bandwidth and peak position provide the basis for interpretation of DOM chemical and environmental variability. Generating reliable parameters from fluorescence data requires both correction for instrument bias and standardized experimental methods. The development, publication and use of correction procedures across different fluorometer platforms has proceeded; however, the level of variability among corrected fluorescence data in the general DOM community has not been assessed recently. To that end, an intercalibration study was undertaken to examine the current status of correction procedures with the excitation-emission matrix spectroscopy (EEMS) technique. Analyses of a quinine sulfate standard reference material, Suwannee River fulvic acid, and unconcentrated seawater from the Hudson Canyon were performed by 8 participating laboratories. Statistical analysis of fluorescence variability among laboratories will be discussed, along with implications for future fluorescence analysis of DOM.

  15. Analysis of mitogen-activated protein kinase activity in yeast.

    PubMed

    Elion, Elaine A; Sahoo, Rupam

    2010-01-01

    Mitogen-activated protein (MAP) kinases play central roles in transmitting extracellular and intracellular information in a wide variety of situations in eukaryotic cells. Their activities are perturbed in a large number of diseases, and their activating kinases are currently therapeutic targets in cancer. MAPKs are highly conserved among all eukaryotes and were first cloned from the yeast Saccharomyces cerevisiae. Yeast has five MAPKs and one MAPK-like kinase; the mating MAPK Fus3 is the best characterized yeast MAPK. Members of all subfamilies of human MAPKs can functionally substitute for S. cerevisiae MAPKs, providing systems in which genetic approaches can be used to study the functions of either yeast or human MAPKs and to identify functionally relevant amino acid residues that enhance or reduce the effects of therapeutically relevant inhibitors and regulatory proteins. Here, we describe an assay to measure Fus3 activity in immune complexes prepared from S. cerevisiae extracts. The assay conditions are applicable to other MAPKs as well. PMID:20811996

  16. A Comparison of Missing-Data Procedures for Arima Time-Series Analysis

    ERIC Educational Resources Information Center

    Velicer, Wayne F.; Colby, Suzanne M.

    2005-01-01

    Missing data are a common practical problem for longitudinal designs. Time-series analysis is a longitudinal method that involves a large number of observations on a single unit. Four different missing-data methods (deletion, mean substitution, mean of adjacent observations, and maximum likelihood estimation) were evaluated. Computer-generated…
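
    Two of the simpler methods compared in the study can be sketched in a few lines; the toy series below (with None marking a missing observation) is illustrative and is not the study's simulation code.

```python
# Two simple missing-data methods for a time series; None marks a
# missing observation. Toy illustration, not the study's simulations.
def mean_substitution(series):
    """Replace each missing value with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

def mean_of_adjacent(series):
    """Replace an interior missing value with the mean of its neighbours."""
    filled = list(series)
    for i, x in enumerate(series):
        if x is None and 0 < i < len(series) - 1:
            filled[i] = (series[i - 1] + series[i + 1]) / 2
    return filled

y = [3.0, None, 6.0, 4.0]
print(mean_substitution(y))  # missing slot gets the overall observed mean
print(mean_of_adjacent(y))   # missing slot gets (3.0 + 6.0) / 2 = 4.5
```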

  17. A Simple Gauss-Newton Procedure for Covariance Structure Analysis with High-Level Computer Languages.

    ERIC Educational Resources Information Center

    Cudeck, Robert; And Others

    1993-01-01

    An implementation of the Gauss-Newton algorithm for the analysis of covariance structure that is specifically adapted for high-level computer languages is reviewed. This simple method for estimating structural equation models is useful for a variety of standard models, as is illustrated. (SLD)
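
    The Gauss-Newton update itself is compact enough to sketch in a few lines; the example below fits a one-parameter exponential model in plain Python purely to illustrate the iteration, and is not the covariance-structure implementation the article reviews.

```python
import math

# Gauss-Newton for nonlinear least squares on the one-parameter model
# y = exp(b*x). Each step solves the normal equations (J'J) db = -J'r
# for residuals r_i = y_i - exp(b*x_i) and Jacobian J_i = dr_i/db.
def gauss_newton_exp(xs, ys, b=0.0, iters=25):
    for _ in range(iters):
        r = [y - math.exp(b * x) for x, y in zip(xs, ys)]
        J = [-x * math.exp(b * x) for x in xs]
        num = sum(j * ri for j, ri in zip(J, r))  # J'r
        den = sum(j * j for j in J)               # J'J (scalar here)
        b -= num / den                            # db = -(J'J)^-1 J'r
    return b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [math.exp(0.5 * x) for x in xs]  # data generated with b = 0.5
print(gauss_newton_exp(xs, ys, b=0.3))  # converges to ~0.5
```

With zero residuals at the solution, the iteration converges quadratically, which is why so few iterations suffice here.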

  18. 14 CFR Appendix I to Part 417 - Methodologies for Toxic Release Hazard Analysis and Operational Procedures

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... hazards and each hazard control involved in the process. An analysis that complies with 29 CFR 1910.119(e... physics on the transport and diffusion of each toxicant. (5) Meteorological conditions at the time of..., or for use in any real-time physics models used to ensure compliance with the toxic flight...

  19. Acoustic emission analysis as a non-destructive test procedure for fiber compound structures

    NASA Technical Reports Server (NTRS)

    Block, J.

    1983-01-01

    The concept of acoustic emission analysis is explained in scientific terms. The detection of acoustic events, their localization, damage discrimination, and event summation curves are discussed. A block diagram of the concept of non-destructive testing of fiber-reinforced synthetic materials is depicted. Prospects for application of the concept are assessed.

  20. Enhancing Classification by Combining Discriminant Analysis with a Modified AID Procedure.

    ERIC Educational Resources Information Center

    Houston, Sam; Schmidt, Stephen R.

    1987-01-01

    A modified automatic interaction detector (MAID) was combined with discriminant analysis (DA) to form MAIDDA and to determine whether it might improve classification results over classical DA methods. Application of MAIDDA to the 1983 and 1984 Air Force Academy's graduating classes (N = 3,012) showed that it is a promising clustering technique.…

  1. A New SAS Procedure for Latent Transition Analysis: Transitions in Dating and Sexual Risk Behavior

    ERIC Educational Resources Information Center

    Lanza, Stephanie T.; Collins, Linda M.

    2008-01-01

    The set of statistical methods available to developmentalists is continually being expanded, allowing for questions about change over time to be addressed in new, informative ways. Indeed, new developments in methods to model change over time create the possibility for new research questions to be posed. Latent transition analysis, a longitudinal…

  2. Development of calibration training and procedures using job-task analysis

    SciTech Connect

    Smith, R.A.

    1993-12-01

    Efforts to handle an increased workload with dwindling manpower in the Physical and Electrical Standards Laboratory (Standards Lab) at the Oak Ridge Y-12 Plant are described. Empowerment of workers via Total Quality Management (TQM) is the basis for these efforts. A survey followed by team work was the course of action. The job-task analysis received honors from their peers at the Y-12 Plant.

  3. [Analytical procedure of variable number of tandem repeats (VNTR) analysis and effective use of analysis results for tuberculosis control].

    PubMed

    Hachisu, Yushi; Hashimoto, Ruiko; Kishida, Kazunori; Yokoyama, Eiji

    2013-12-01

    Variable number of tandem repeats (VNTR) analysis is one of the methods for molecular epidemiological studies of Mycobacterium tuberculosis. VNTR analysis is a PCR-based method that provides rapid, highly reproducible results and higher strain discrimination power than the restriction fragment length polymorphism (RFLP) analysis widely used in molecular epidemiological studies of Mycobacterium tuberculosis. Genetic lineage compositions of Mycobacterium tuberculosis clinical isolates differ among the regions where they are isolated, and allelic diversity at each locus also differs among the genetic lineages of Mycobacterium tuberculosis. Therefore, no single combination of VNTR loci provides high discrimination capacity in every region. The Japan Anti-Tuberculosis Association (JATA) reported a standard combination of VNTR loci for analysis in Japan, JATA12(15), and a combination with hypervariable (HV) loci added to JATA12(15), which has very high discrimination capacity, was also reported. Building on these reports, data sharing between institutions and the construction of a nationwide database are expected to progress. With databases of VNTR profiles in place, VNTR analysis has become an effective tool to trace the route of tuberculosis infection, and it also helps in decision-making during the treatment course. However, in order to utilize the results of VNTR analysis effectively, it is important that each related organization cooperates closely, and analysis should be applied within a system in which accurate control and the protection of private information are ensured.
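
    As a toy illustration of how VNTR profiles lend themselves to strain comparison, the sketch below counts matching loci between two isolates; the locus names and repeat counts are hypothetical examples, not the JATA12(15) locus set.

```python
# Compare two VNTR profiles (repeat number per locus) by counting loci
# with identical repeat numbers. Locus names and counts are hypothetical.
def matching_loci(profile_a, profile_b):
    shared = profile_a.keys() & profile_b.keys()
    return sum(profile_a[locus] == profile_b[locus] for locus in shared)

isolate_1 = {"ETR-A": 3, "ETR-B": 5, "MIRU-26": 2}
isolate_2 = {"ETR-A": 3, "ETR-B": 4, "MIRU-26": 2}
print(matching_loci(isolate_1, isolate_2))  # 2 of 3 loci match
```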

  4. Hair analysis in order to evaluate drug abuse in driver's license regranting procedures.

    PubMed

    Tassoni, G; Mirtella, D; Zampi, M; Ferrante, L; Cippitelli, M; Cognigni, E; Froldi, R; Cingolani, M

    2014-11-01

    In Italy, driving under the influence of drugs determines the suspension of the offender's driver's license. To regain the license, the person must be drug free during an observation period. People whose license has been revoked or suspended can obtain, or re-obtain, their driver's license subject to the judgment of a medical commission. The exclusion of illicit drug use is determined by means of toxicological analysis, mainly on urine or hair matrices. We report the results of several years of experience of the forensic toxicology laboratory of the University of Macerata in the use of hair analysis for the assessment of past exposure to drugs in people suspected of driving under the influence of drugs. From 2004 to 2013, 8612 hair samples were analyzed for opiates, cocaine and delta-9-tetrahydrocannabinol (Δ(9)-THC) using a gas chromatography/mass spectrometry (GC/MS) method. Positivity was determined against cutoffs (SoHT or national guidelines), regardless of the absolute hair concentrations. In total, 1213 samples tested positive: 71.7% for cocaine and metabolites, 19.8% for morphine and metabolites, and 8.5% for Δ(9)-THC. We also studied the timeframe of the abuse, as well as the gender and age distribution of positive subjects. Moreover, we analyzed the possible deterrent effect of hair analysis on driving under the influence of psychoactive substances. PMID:25151106
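
    A minimal sketch of cutoff-based classification of hair results; the cutoff values shown follow commonly cited SoHT figures, but treat them as illustrative assumptions rather than the laboratory's exact criteria.

```python
# Classify hair concentrations (ng/mg) against per-analyte cutoffs.
# Cutoff values follow commonly cited SoHT figures but are included
# here only as illustrative assumptions.
CUTOFFS_NG_PER_MG = {"cocaine": 0.5, "morphine": 0.2, "thc": 0.05}

def positive_analytes(concentrations):
    """Return the analytes whose concentration meets or exceeds its cutoff."""
    return [analyte for analyte, value in concentrations.items()
            if value >= CUTOFFS_NG_PER_MG[analyte]]

sample = {"cocaine": 1.2, "morphine": 0.1, "thc": 0.06}
print(positive_analytes(sample))  # ['cocaine', 'thc']
```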

  6. A Guide for Developing Standard Operating Job Procedures for the Activated Sludge - Aeration & Sedimentation Process Wastewater Treatment Facility. SOJP No. 5.

    ERIC Educational Resources Information Center

    Mason, George J.

    This guide for developing standard operating job procedures for wastewater treatment facilities is devoted to the activated sludge aeration and sedimentation process. This process converts nonsettleable and nonfloatable materials in wastewater into settleable, flocculated biological groups and separates the settleable solids from the…

  7. Small Schools Mathematics Curriculum, Grades 7-8: Reading, Language Arts, Mathematics, Science, Social Studies. Scope, Objectives, Activities, Resources, Monitoring Procedures.

    ERIC Educational Resources Information Center

    Hartl, David, Ed.; And Others

    Developed during the 1976-77 school year to assist Washington grade 7-8 teachers in small school districts with the improvement of curriculum and instruction, this learning-objective-based curriculum suggests activities, monitoring procedures and resources for mathematics. Introductory materials describe the organization of Small School materials,…

  8. The Effectiveness of Embedded Teaching through the Most-to-Least Prompting Procedure in Concept Teaching to Children with Autism within Orff-Based Music Activities

    ERIC Educational Resources Information Center

    Eren, Bilgehan; Deniz, Jale; Duzkantar, Ayten

    2013-01-01

    The purpose of this study was to demonstrate the effectiveness of embedded teaching through the most-to-least prompting procedure in concept teaching to children with autism in Orff-based music activities. In this research, being one of the single subject research designs, multiple probe design was used. The generalization effect of the research…

  9. Rapid and Efficient Filtration-Based Procedure for Separation and Safe Analysis of CBRN Mixed Samples

    PubMed Central

    Bentahir, Mostafa; Laduron, Frederic; Irenge, Leonid; Ambroise, Jérôme; Gala, Jean-Luc

    2014-01-01

    Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed samples) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method, first optimized on separate C- and B-agents and then assessed on a model of a CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), and Bacillus atrophaeus and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as VX and soman (GD) nerve agent surrogates, respectively. Nanoseparation centrifugal devices with various pore-size cutoffs (30 kD up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1 and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOFMS) methods were used for quantifying EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kD cutoff retain more than 99.99% of the biological agents (MS2, AcNPV, and Bacillus atrophaeus and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kD filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% recovery of PEG-400, EMPA and PMPA in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile and extremely efficient. The separation method developed in this work constitutes therefore a useful model for further evaluating and comparing additional separation alternative procedures for a safe handling and
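
    Retention figures like ">99.99%" are often restated as log-reduction values; a small sketch of the conversion, using illustrative titers rather than the study's measurements:

```python
import math

# Convert an input/filtrate titer pair into percent retention and a
# log-reduction value (LRV). Titers below are illustrative.
def percent_retained(titer_in, titer_out):
    return 100.0 * (1.0 - titer_out / titer_in)

def log_reduction(titer_in, titer_out):
    return math.log10(titer_in / titer_out)

print(log_reduction(1e8, 1e4))               # 4.0, i.e. a "4-log" reduction
print(round(percent_retained(1e8, 1e4), 2))  # 99.99
```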

  10. Fluoroscopy-guided jejunal extension tube placement through existing gastrostomy tubes: analysis of 391 procedures

    PubMed Central

    Uflacker, Andre; Qiao, Yujie; Easley, Genevieve; Patrie, James; Lambert, Drew; de Lange, Eduard E.

    2015-01-01

    PURPOSE We aimed to evaluate the safety and efficacy of fluoroscopically placed jejunal extension tubes (J-arm) in patients with existing gastrostomy tubes. METHODS We conducted a retrospective review of 391 J-arm placements performed in 174 patients. Indications for jejunal nutrition were aspiration risk (35%), pancreatitis (17%), gastroparesis (13%), gastric outlet obstruction (12%), and other (23%). Technical success, complications, malfunctions, and patency were assessed. Percutaneous gastrostomy (PEG) tube location, J-arm course, and fluoroscopy time were correlated with success/failure. Failure was defined as inability to exit the stomach. Procedure-related complications were defined as adverse events related to tube placement occurring within seven days. Tube malfunctions and aspiration events were recorded and assessed. RESULTS Technical success was achieved in 91.9% (95% CI, 86.7%–95.2%) of new tubes versus 94.2% of replacements (P = 0.373). Periprocedural complications occurred in three cases (0.8%). Malfunctions occurred in 197 tubes (50%). Median tube patency was 103 days (95% CI, 71–134 days). No association was found between successful J-arm placement and gastric PEG tube position (P = 0.677), indication for jejunal nutrition (P = 0.349), J-arm trajectory in the stomach and incidence of malfunction (P = 0.365), risk of tube migration and PEG tube position (P = 0.173), or J-arm length (P = 0.987). A fluoroscopy time of 21.3 min was identified as a threshold for failure. Malfunctions occurred more often in tubes replaced after 90 days than in tubes replaced before 90 days (P < 0.001). A total of 42 aspiration events occurred (OR 6.4, P < 0.001, compared with nonmalfunctioning tubes). CONCLUSION Fluoroscopy-guided J-arm placement is safe for patients requiring jejunal nutrition. Tubes indwelling for longer than 90 days have higher rates of malfunction and aspiration. PMID:26380895
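
    The reported OR of 6.4 is an odds ratio from a 2x2 contingency table; a minimal sketch of the calculation, with hypothetical counts rather than the study's data:

```python
# Odds ratio from a 2x2 table:
#               event   no event
#   exposed       a        b
#   unexposed     c        d
# The counts in the example call are hypothetical.
def odds_ratio(a, b, c, d):
    return (a * d) / (b * c)

print(odds_ratio(30, 10, 15, 20))  # (30*20)/(10*15) = 4.0
```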

  11. The use of non-linear analysis for differentiating the biomagnetic activity in ovarian lesions.

    PubMed

    Anninos, P A; Anastasiadis, P; Kotini, A

    1999-05-01

    In this study we investigated the biomagnetic activity of benign and malignant ovarian lesions, measured with a superconducting quantum interference device (SQUID) and evaluated using non-linear analysis. A single-channel SQUID biomagnetometer was used to measure the magnetic field emitted from benign and malignant ovarian lesions, and non-linear analysis was used to differentiate the recorded activities. Applying non-linear analysis together with dimensional calculations to the ovarian lesions, we observed a clear saturation value for the dimension of malignant ovarian lesions and non-saturation for benign ovarian lesions. Biomagnetic measurement with the SQUID combined with non-linear analysis is a promising procedure for assessing and differentiating ovarian tumours. PMID:15512296

  12. An in-gel digestion procedure that facilitates the identification of highly hydrophobic proteins by electrospray ionization-mass spectrometry analysis.

    PubMed

    Castellanos-Serra, Lila; Ramos, Yassel; Huerta, Vivian

    2005-07-01

    A procedure is described for in-gel tryptic digestion of proteins that allows the direct analysis of eluted peptides in electrospray ionization (ESI) mass spectrometers without the need of a postdigestion desalting step. It is based on the following principles: (a) a thorough desalting of the protein in-gel before digestion that takes advantage of the excellent properties of acrylamide polymers for size exclusion separations, (b) exploiting the activity of trypsin in water, in the absence of inorganic buffers, and (c) a procedure for peptide extraction using solvents of proven efficacy with highly hydrophobic peptides. Quality of spectra and sequence coverage are equivalent to those obtained after digestion in ammonium bicarbonate for hydrophilic proteins detected with Coomassie blue, mass spectrometry-compatible silver or imidazole-zinc, but are significantly superior for highly hydrophobic proteins, such as membrane proteins with several transmembrane domains. ATPase subunit 9 (GRAVY 1.446) is a membrane channel, lipid-binding protein for which both the conventional in-gel digestion protocol and in-solution digestion failed. It was identified with very high sequence coverage. Sample handling after digestion is notably simplified, as peptides are directly loaded into the ESI source without postdigestion processing, increasing the chances for the identification of hydrophobic peptides. PMID:15952229

  13. Analysis, Verification, and Application of Equations and Procedures for Design of Exhaust-pipe Shrouds

    NASA Technical Reports Server (NTRS)

    Ellerbrock, Herman H.; Wcislo, Chester R.; Dexter, Howard E.

    1947-01-01

    Investigations were made to develop a simplified method for designing exhaust-pipe shrouds to provide desired or maximum cooling of exhaust installations. Analysis of heat exchange and pressure drop of an adequate exhaust-pipe shroud system requires equations for predicting design temperatures and pressure drop on cooling air side of system. Present experiments derive such equations for usual straight annular exhaust-pipe shroud systems for both parallel flow and counter flow. Equations and methods presented are believed to be applicable under certain conditions to the design of shrouds for tail pipes of jet engines.
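
    For orientation, counterflow heat-exchanger performance is commonly summarized today by the standard effectiveness-NTU relation below; this is a textbook expression, not one of the report's own derived equations.

```latex
% Effectiveness of a counterflow heat exchanger (textbook eps-NTU form),
% with C_r = C_min / C_max the heat-capacity-rate ratio:
\varepsilon =
  \frac{1 - \exp\!\left[-\mathrm{NTU}\,(1 - C_r)\right]}
       {1 - C_r \exp\!\left[-\mathrm{NTU}\,(1 - C_r)\right]}
```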

  14. Single Cell Analysis of Transcriptional Activation Dynamics

    PubMed Central

    Rafalska-Metcalf, Ilona U.; Powers, Sara Lawrence; Joo, Lucy M.; LeRoy, Gary; Janicki, Susan M.

    2010-01-01

    Background Gene activation is thought to occur through a series of temporally defined regulatory steps. However, this process has not been completely evaluated in single living mammalian cells. Methodology/Principal Findings To investigate the timing and coordination of gene activation events, we tracked the recruitment of GCN5 (histone acetyltransferase), RNA polymerase II, Brd2 and Brd4 (acetyl-lysine binding proteins), in relation to a VP16-transcriptional activator, to a transcription site that can be visualized in single living cells. All accumulated rapidly with the VP16 activator as did the transcribed RNA. RNA was also detected at significantly more transcription sites in cells expressing the VP16-activator compared to a p53-activator. After α-amanitin pre-treatment, the VP16-activator, GCN5, and Brd2 are still recruited to the transcription site but the chromatin does not decondense. Conclusions/Significance This study demonstrates that a strong activator can rapidly overcome the condensed chromatin structure of an inactive transcription site and supercede the expected requirement for regulatory events to proceed in a temporally defined order. Additionally, activator strength determines the number of cells in which transcription is induced as well as the extent of chromatin decondensation. As chromatin decondensation is significantly reduced after α-amanitin pre-treatment, despite the recruitment of transcriptional activation factors, this provides further evidence that transcription drives large-scale chromatin decondensation. PMID:20422051

  15. The analysis of human health risk with a detailed procedure operating in a GIS environment.

    PubMed

    Morra, P; Bagli, S; Spadoni, G

    2006-05-01

    An approach for quantifying the human health risk caused by industrial sources that, daily or accidentally, emit dangerous pollutants able to impact different environmental media is introduced. The approach is implemented by the HHRA-GIS tool, which employs an integrated, multimedia, multi-exposure-pathway and multi-receptor risk assessment model able to manage all the steps of the analysis in a georeferenced structure. Upper-bound excess lifetime cancer risk and noncarcinogenic hazards are the risk measures whose spatial distribution is calculated and mapped on the involved territory, once all the pathways and receptors of the study area are identified. A sensitivity analysis completes the calculations, allowing one to understand how risk estimates depend on variability in the factors contributing to risk. The last part of the paper uses a case study concerning a working industrial site to show how the designed tool can help local authorities and policy makers in managing risks and planning remedial and reduction actions. The considered geographical area is a hypothetical territory characterized by residential, agricultural and industrial zones. Two sources of contamination, a municipal waste incinerator (MWI) and a contaminated site, are evaluated in the tool application. Various typologies of receptors have been taken into account, each characterized by different anatomical and dietary properties. The achieved results are analyzed, compared with acceptable and background values, and alternatives of lower environmental impact are calculated. PMID:16356549
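
    The risk measures mentioned (excess lifetime cancer risk, noncarcinogenic hazard) follow standard US EPA-style screening formulas; a minimal sketch of how each cell of such a risk map could be computed, with illustrative parameter values that are not from the case study:

```python
# Standard screening-level risk formulas (US EPA style). All parameter
# values in the example call are illustrative, not from the case study.
def chronic_daily_intake(conc, intake_rate, exp_freq, exp_dur, body_wt, avg_time):
    """CDI in mg/(kg*day) for an ingestion-type exposure pathway."""
    return (conc * intake_rate * exp_freq * exp_dur) / (body_wt * avg_time)

def cancer_risk(cdi, slope_factor):
    """Upper-bound excess lifetime cancer risk (dimensionless)."""
    return cdi * slope_factor

def hazard_quotient(cdi, reference_dose):
    """Noncarcinogenic hazard; values above 1 flag potential concern."""
    return cdi / reference_dose

cdi = chronic_daily_intake(conc=0.002, intake_rate=2.0, exp_freq=350,
                           exp_dur=30, body_wt=70, avg_time=25550)
print(cancer_risk(cdi, slope_factor=1.5))
print(hazard_quotient(cdi, reference_dose=3e-4))
```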

  16. Evaluation of Different Normalization and Analysis Procedures for Illumina Gene Expression Microarray Data Involving Small Changes

    PubMed Central

    Johnstone, Daniel M.; Riveros, Carlos; Heidari, Moones; Graham, Ross M.; Trinder, Debbie; Berretta, Regina; Olynyk, John K.; Scott, Rodney J.; Moscato, Pablo; Milward, Elizabeth A.

    2013-01-01

    While Illumina microarrays can be used successfully for detecting small gene expression changes due to their high degree of technical replicability, there is little information on how different normalization and differential expression analysis strategies affect outcomes. To evaluate this, we assessed concordance across gene lists generated by applying different combinations of normalization strategy and analytical approach to two Illumina datasets with modest expression changes. In addition to using traditional statistical approaches, we also tested an approach based on combinatorial optimization. We found that the choice of both normalization strategy and analytical approach considerably affected outcomes, in some cases leading to substantial differences in gene lists and subsequent pathway analysis results. Our findings suggest that important biological phenomena may be overlooked when there is a routine practice of using only one approach to investigate all microarray datasets. Analytical artefacts of this kind are likely to be especially relevant for datasets involving small fold changes, where inherent technical variation—if not adequately minimized by effective normalization—may overshadow true biological variation. This report provides some basic guidelines for optimizing outcomes when working with Illumina datasets involving small expression changes.
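Concordance between gene lists produced by different normalization/analysis pipelines, as assessed in this study, is often summarized as set overlap. A minimal sketch using the Jaccard index (the pipeline names and gene lists below are hypothetical):

```python
def jaccard(list_a, list_b):
    """Concordance between two differentially-expressed gene lists:
    size of the intersection over size of the union of the gene sets."""
    a, b = set(list_a), set(list_b)
    return len(a & b) / len(a | b)

# Hypothetical gene lists from two different normalization/analysis pipelines:
quantile_pipeline = ["HFE", "TFRC", "SLC40A1", "HAMP"]
vst_pipeline      = ["HFE", "TFRC", "FTL", "HAMP", "FTH1"]
print(round(jaccard(quantile_pipeline, vst_pipeline), 2))  # → 0.5
```

Low Jaccard values between pipelines would signal exactly the kind of analytical artefact the abstract warns about for small-fold-change datasets.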

  17. Charge-coupled device imaging spectroscopy of Mars. I - Instrumentation and data reduction/analysis procedures

    NASA Technical Reports Server (NTRS)

    Bell, James F., III; Lucey, Paul G.; Mccord, Thomas B.

    1992-01-01

    This paper describes the collection, reduction, and analysis of 0.4-1.0-micron Mars imaging spectroscopy data obtained during the 1988 and 1990 oppositions from Mauna Kea Observatory and provides a general outline for the acquisition and analysis of similar imaging spectroscopy data sets. The U.H. 2.24-m Wide Field Grism CCD Spectrograph was used to collect 13 3D image cubes covering 90 percent of the planet south of 50 deg N in the 0.4-0.8 micron region and covering 55 percent of the planet south of 50 deg N in the 0.5-1.0 micron region. Spectra extracted from these image cubes reveal the detailed character of the Martian near-UV to visible spectrum. Images at red wavelengths reveal the 'classical' albedo markings at 100-500 km spatial resolution while images at blue wavelengths show little surface feature contrast and are dominated by condensate clouds/hazes and polar ice.

  18. A general procedure for estimating dynamic displacements using strain measurements and operational modal analysis

    NASA Astrophysics Data System (ADS)

    Skafte, Anders; Aenlle, Manuel L.; Brincker, Rune

    2016-02-01

    Measurement systems are being installed in more and more civil structures with the purpose of monitoring the general dynamic behavior of the structure. The instrumentation is typically done with accelerometers, where experimental frequencies and mode shapes can be identified using modal analysis and used in health monitoring algorithms. But the use of accelerometers is not suitable for all structures. Structures like wind turbine blades and wings on airplanes can be exposed to lightning, which can cause the measurement systems to fail. Structures like these are often equipped with fiber sensors measuring the in-plane deformation. This paper proposes a method in which the displacement mode shapes and responses can be predicted using only strain measurements. The method relies on the newly discovered principle of local correspondence, which states that each experimental mode can be expressed as a unique subset of finite element modes. In this paper the technique is further developed to predict the mode shapes in different states of the structure. Once an estimate of the modes is found, responses can be predicted using the superposition of the modal coordinates weighted by the mode shapes. The method is validated with experimental tests on a scaled model of a two-span bridge instrumented with strain gauges. Random load was applied to simulate a civil structure under operating conditions, and strain mode shapes were identified using operational modal analysis.
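The core idea, predicting displacement responses from strain measurements by modal superposition, can be sketched in a few lines. This is a simplified illustration, not the paper's full procedure (local correspondence and state-dependent mode estimation are omitted), and all matrices are synthetic:

```python
import numpy as np

# Hypothetical small system: 6 strain gauges, 4 displacement DOFs, 2 modes.
rng = np.random.default_rng(0)
Phi_strain = rng.normal(size=(6, 2))   # strain mode shapes (e.g. from an FE model)
Phi_disp   = rng.normal(size=(4, 2))   # displacement mode shapes of the same modes

q_true = np.array([1.5, -0.7])         # modal coordinates at one time instant
strain = Phi_strain @ q_true           # "measured" strains (noise-free here)

# Least-squares estimate of the modal coordinates from strain only...
q_hat, *_ = np.linalg.lstsq(Phi_strain, strain, rcond=None)

# ...then predict displacements by superposition of modes weighted by q.
disp = Phi_disp @ q_hat
```

With more gauges than modes the least-squares step also averages out measurement noise, which is why strain-based expansion methods of this kind are attractive for operational monitoring.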

  19. Sediment transport observation in the Draix field laboratory: monitoring procedures and uncertainties analysis

    NASA Astrophysics Data System (ADS)

    Klotz, Sébastien; Mathys, Nicolle

    2010-05-01

    The experimental basins of Draix in the southern French Alps have been monitored since 1984 in order to quantify the erosion in the badlands developed on Black Marl formations. Four small watersheds, from 1,000 m² to 1 km², are equipped to study and quantify runoff and erosion processes, according to the vegetation cover and basin size. The substratum of the basins is black marls, a highly erodible formation which yields a high level of solid transport during floods, especially during the high-intensity rainfalls in summer. For all the basins, coarse sediments are stopped by a filter dam and stocked in a sediment trap. After each flood, a topographical survey is conducted with a tacheometer or a ruler. The data are used to build a DEM. For the smallest basin (Roubine, 1330 m²), small deposit volumes are measured with a calibrated bucket. The difference between two successive DEMs gives the volume of the transported sediments. The volumes are converted into weight using the bulk density of the deposits. Suspended sediments which flow through the filter dam are sampled with an automatic ISCO sampler. The Suspended Sediment Concentration (SSC) is also measured continuously with optical backscattering sensors. Due to the sensor sensitivity to the grain size distribution, the sensor calibration must be verified for each flood and may be corrected with the concentration obtained from drying and weighing of the samples. The suspended sediment yield of a flood is determined by integrating the sediment concentration value over the whole hydrograph. Uncertainties are encountered at each step of the measurement procedure. They may be summarized in five classes regarding the origin of the errors: - Material. The errors are linked to the variability of the monitored parameter. For example the bulk densities of the Roubine sediment trap deposits vary from 1.3 to 1.6. - Measuring Device. The uncertainties are given by the device manufacturer or obtained from the sensor calibration
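The two quantities described above, coarse deposit mass from DEM differencing and suspended yield from SSC integration over the hydrograph, reduce to simple arithmetic. A minimal sketch under simplifying assumptions (DEMs flattened to 1-D lists of cell elevations, uniform cell area; function names and numbers are illustrative):

```python
def trap_deposit_mass(dem_before, dem_after, cell_area, bulk_density):
    """Coarse sediment mass from two successive sediment-trap DEMs:
    sum of elevation differences (m) x cell area (m2) x bulk density (t/m3)."""
    volume = sum((a - b) * cell_area for a, b in zip(dem_after, dem_before))
    return volume * bulk_density

def suspended_yield(times_s, ssc_g_per_l, discharge_l_per_s):
    """Suspended sediment yield (g) by trapezoidal integration of
    SSC x discharge over the flood hydrograph."""
    flux = [c * q for c, q in zip(ssc_g_per_l, discharge_l_per_s)]
    total = 0.0
    for i in range(1, len(times_s)):
        total += 0.5 * (flux[i] + flux[i - 1]) * (times_s[i] - times_s[i - 1])
    return total
```

The bulk-density range quoted in the abstract (1.3 to 1.6) propagates directly into the mass estimate, which is one of the "Material" uncertainty classes the authors describe.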

  20. High-Throughput Analysis of Enzyme Activities

    SciTech Connect

    Lu, Guoxin

    2007-01-01

    High-throughput screening (HTS) techniques have been applied to many research fields nowadays. Robotic microarray printing and automated microtiter plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries. In order to get finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so the product does not need to have optical properties different from the substrate's. UV absorption detection allows almost universal detection of organic molecules. Thus, no modifications of either the substrate or the product molecules are necessary. This technique has the potential to be used in screening of local distribution variations of specific bio-molecules in a tissue or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of the enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light intensity change over time on an enzyme spot can give information on the reaction rate. The same microarray can be reused many times. Thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different
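Extracting a reaction rate from the CCD readout, as described above, amounts to fitting the intensity-versus-time trace of each spot and taking the slope. A minimal sketch (the function name and the sample trace are hypothetical, not from the dissertation):

```python
def initial_rate(times, intensities):
    """Slope of a least-squares line through (time, intensity) points,
    used as a proxy for the reaction rate on one microarray spot."""
    n = len(times)
    mt = sum(times) / n
    mi = sum(intensities) / n
    num = sum((t - mt) * (i - mi) for t, i in zip(times, intensities))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Hypothetical spot: intensity grows ~2 units/s as product accumulates.
t = [0, 1, 2, 3, 4]
y = [0.1, 2.0, 4.1, 6.0, 8.1]
print(round(initial_rate(t, y), 2))  # → 2.0
```

Running this fit over every spot of the imaged microarray is what turns a stack of CCD frames into hundreds of parallel kinetic measurements.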