Sample records for standard experimental methods

  1. Acid Rain Analysis by Standard Addition Titration.

    ERIC Educational Resources Information Center

    Ophardt, Charles E.

    1985-01-01

    The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)
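    The Gran extrapolation mentioned above can be sketched numerically. Before the equivalence point of a strong-acid titration, the Gran function G = (V0 + Vb)·10^(−pH) falls linearly to zero at the equivalence volume, so no sharp endpoint detection is needed. A minimal illustration with a hypothetical helper and synthetic data (not from the article):

```python
import numpy as np

def gran_equivalence_volume(v_base, pH, v_sample):
    """Estimate the equivalence volume from titration data via a Gran plot.

    Before the equivalence point of a strong-acid/strong-base titration,
    G = (V0 + Vb) * 10**(-pH) is linear in Vb and extrapolates to zero
    at the equivalence volume Ve.
    """
    g = (v_sample + v_base) * 10.0 ** (-pH)
    slope, intercept = np.polyfit(v_base, g, 1)
    return -intercept / slope  # x-intercept of the fitted line

# Synthetic check: 50 mL of 1e-4 M acid titrated with 1e-3 M base -> Ve = 5 mL
v0, ca, cb = 50.0, 1e-4, 1e-3
vb = np.linspace(0.0, 4.0, 9)
h = (ca * v0 - cb * vb) / (v0 + vb)   # [H+] before equivalence (ideal)
pH = -np.log10(h)
print(round(gran_equivalence_volume(vb, pH, v0), 3))  # -> 5.0
```

    Because the fit uses only pre-equivalence points, the method works even when the endpoint inflection is poorly defined, which is the usual situation for dilute rain samples.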

  2. Research on the calibration methods of the luminance parameter of radiation luminance meters

    NASA Astrophysics Data System (ADS)

    Cheng, Weihai; Huang, Biyong; Lin, Fangsheng; Li, Tiecheng; Yin, Dejin; Lai, Lei

    2017-10-01

    This paper introduces the standard diffuse-reflection white plate method and the integrating-sphere standard luminance source method for calibrating the luminance parameter, and compares the two methods' calibration results through principle analysis and experimental verification. After both methods were used to calibrate the same radiation luminance meter, the data obtained verify that the results of the two methods are both reliable. The results show that the displayed value from the standard white plate method has smaller errors and better reproducibility. However, the standard luminance source method is more convenient and better suited to on-site calibration; moreover, it has a wider range and can test the linear performance of the instruments.

  3. Methodological Pluralism: The Gold Standard of STEM Evaluation

    ERIC Educational Resources Information Center

    Lawrenz, Frances; Huffman, Douglas

    2006-01-01

    Nationally, there is continuing debate about appropriate methods for conducting educational evaluations. The U.S. Department of Education has placed a priority on "scientifically" based evaluation methods and has advocated a "gold standard" of randomized controlled experimentation. The priority suggests that randomized control methods are best,…

  4. Effects of a Format-based Second Language Teaching Method in Kindergarten.

    ERIC Educational Resources Information Center

    Uilenburg, Noelle; Plooij, Frans X.; de Glopper, Kees; Damhuis, Resi

    2001-01-01

    Focuses on second language teaching with a format-based method. The differences between the format-based teaching method and the standard approach, used as treatments in a quasi-experimental, non-equivalent control group design, are described in detail. Examines whether the effects of a format-based teaching method and a standard foreign language method differ…

  5. Laboratory validation of four black carbon measurement methods for the determination of non-volatile particulate matter (PM) mass emissions . . .

    EPA Science Inventory

    A laboratory-scale experimental program was designed to standardize each of four black carbon measurement methods, provide appropriate quality assurance/control procedures for these techniques, and compare measurements made by these methods to a NIST traceable standard (filter gr...

  6. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviat...
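    The HORRAT ratio truncated above is the found among-laboratory RSD divided by the RSD predicted by the Horwitz function. A small sketch (the constants are the standard Horwitz values; the function name is illustrative):

```python
def horrat(rsd_found_pct, concentration_mass_fraction):
    """HORRAT = found among-laboratory RSD% / Horwitz-predicted RSD%.

    Horwitz: PRSD_R(%) = 2 * C**(-0.1505), with C as a mass fraction
    (e.g. 1 ppm = 1e-6). Values of roughly 0.5-2 indicate acceptable
    among-laboratory precision.
    """
    prsd = 2.0 * concentration_mass_fraction ** (-0.1505)
    return rsd_found_pct / prsd

# At C = 1 ppm the Horwitz prediction is about 16 %, so a found RSD of
# 16 % gives a HORRAT near 1.
print(round(horrat(16.0, 1e-6), 2))  # -> 1.0
```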

  7. Comparison of Standardized Test Scores from Traditional Classrooms and Those Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Needham, Martha Elaine

    2010-01-01

    This research compares differences between standardized test scores in problem-based learning (PBL) classrooms and a traditional classroom for 6th grade students using a mixed-method, quasi-experimental and qualitative design. The research shows that problem-based learning is as effective as traditional teaching methods on standardized tests. The…

  8. Preface of "The Second Symposium on Border Zones Between Experimental and Numerical Application Including Solution Approaches By Extensions of Standard Numerical Methods"

    NASA Astrophysics Data System (ADS)

    Ortleb, Sigrun; Seidel, Christian

    2017-07-01

    In this second symposium at the limits of experimental and numerical methods, recent research is presented on practically relevant problems. Presentations discuss experimental investigation as well as numerical methods with a strong focus on application. In addition, problems are identified which require a hybrid experimental-numerical approach. Topics include fast explicit diffusion applied to a geothermal energy storage tank, noise in experimental measurements of electrical quantities, thermal fluid structure interaction, tensegrity structures, experimental and numerical methods for Chladni figures, optimized construction of hydroelectric power stations, experimental and numerical limits in the investigation of rain-wind induced vibrations as well as the application of exponential integrators in a domain-based IMEX setting.

  9. Statistical density modification using local pattern matching

    DOEpatents

    Terwilliger, Thomas C.

    2007-01-23

    A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected map. Histograms are also created from the selected maps that relate the value of electron density at the center of each spherical region to a correlation coefficient of the density surrounding each corresponding grid point in each of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental map.

  10. The experimental determination of the moments of inertia of airplanes by a simplified compound-pendulum method

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1948-01-01

    A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA report, NACA-467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)

  11. Role of Microstructure in High Temperature Oxidation.

    DTIC Science & Technology

    1980-05-01

    Surface Preparation Upon Oxidation ... EXPERIMENTAL METHODS ... Specimen Preparation ...angle sectioning method ... Figure 3. Application of the test line upon the image of the NiO scale to determine the number of NiO grain boundaries ... of knowledge in this field was readily accounted for by extreme experimental difficulty in applying standard methods of microscopy to the thin

  12. Guidelines for standard preclinical experiments in the mouse model of myasthenia gravis induced by acetylcholine receptor immunization.

    PubMed

    Tuzun, Erdem; Berrih-Aknin, Sonia; Brenner, Talma; Kusner, Linda L; Le Panse, Rozen; Yang, Huan; Tzartos, Socrates; Christadoss, Premkumar

    2015-08-01

    Myasthenia gravis (MG) is an autoimmune disorder characterized by generalized muscle weakness due to neuromuscular junction (NMJ) dysfunction brought about by acetylcholine receptor (AChR) antibodies in most cases. Although steroids and other immunosuppressants are effectively used for treatment of MG, these medications often cause severe side effects and a complete remission cannot be obtained in many cases. For pre-clinical evaluation of more effective and less toxic treatment methods for MG, experimental autoimmune myasthenia gravis (EAMG) induced by Torpedo AChR immunization has become one of the standard animal models. Although numerous compounds have recently been proposed for MG, mostly using the active immunization EAMG model, only a few have been proven effective in MG patients. The variability in the experimental design, immunization methods, and outcome measurements of pre-clinical EAMG studies makes it difficult to interpret the published reports and assess the potential for application to MG patients. In an effort to standardize the active immunization EAMG model, we propose standard procedures for animal care conditions, sampling and randomization of mice, experimental design, and outcome measures. Utilization of these standard procedures might improve the power of pre-clinical EAMG experiments and increase the chances of identifying promising novel treatment methods that can be effectively translated into clinical trials for MG. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Ethical review of human experimentation in the consumer products industry.

    PubMed

    Steadman, J H

    1998-04-01

    Ethical review of human experimentation in the consumer products industry is important and provides instructive parallels and contrasts with clinical medical research. The procedures used in Unilever NV/plc are described. A central body sets standards for and monitors compliance with ethical review of human studies throughout Unilever. Guidance has been produced on many topics including issues applying generally to human experimentation and more specifically to the consumer products sector. Deficiencies and inconsistencies in the procedures for ethical review and the care of subjects during the conduct of studies have been identified and corrected. Appropriate uniform standards have been achieved across all Unilever operations. All human experimentation in the industry needs adequate ethical review. Although the methods used by individual companies may differ, procedures must ensure uniform high standards across a global industry.

  14. A new mathematical approach for the estimation of the AUC and its variability under different experimental designs in preclinical studies.

    PubMed

    Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán

    2012-01-01

    The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied to different preclinical experimental designs and implemented in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results show no statistical differences between the AUC values obtained by the two procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. In this way, the newly proposed method proves as useful as WinNonlin® software where the latter is applicable. Copyright © 2011 John Wiley & Sons, Ltd.
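    The abstract does not reproduce the authors' formulas. As background, a common worksheet-friendly estimator of the AUC and its standard error under destructive sampling designs (one sample per animal per time point, typical of preclinical studies) is Bailer's trapezoidal method, sketched here with hypothetical toy data:

```python
import numpy as np

def auc_bailer(times, conc_by_time):
    """Trapezoidal AUC and its standard error from destructive sampling
    (Bailer-type estimator): AUC = sum_i w_i * mean_i with trapezoidal
    weights w_i, and Var(AUC) = sum_i w_i**2 * s_i**2 / n_i, where s_i**2
    and n_i are the sample variance and group size at time t_i.
    """
    t = np.asarray(times, dtype=float)
    w = np.empty_like(t)
    w[0] = (t[1] - t[0]) / 2.0
    w[-1] = (t[-1] - t[-2]) / 2.0
    w[1:-1] = (t[2:] - t[:-2]) / 2.0
    means = np.array([np.mean(c) for c in conc_by_time])
    varis = np.array([np.var(c, ddof=1) / len(c) for c in conc_by_time])
    auc = float(np.dot(w, means))
    se = float(np.sqrt(np.dot(w ** 2, varis)))
    return auc, se

# Toy destructive-sampling data: concentrations measured at t = 0, 1, 2 h
auc, se = auc_bailer([0.0, 1.0, 2.0], [[0.0, 0.0], [10.0, 12.0], [4.0, 6.0]])
print(round(auc, 2), round(se, 3))  # -> 13.5 1.118
```

    The same weighted-sum form is what makes this kind of estimator easy to lay out in a spreadsheet, one column per time point.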

  15. Quasi-Experimental Evaluations. Part 6 in a Series on Practical Evaluation Methods. Research-to-Results Brief. Publication #2008-04

    ERIC Educational Resources Information Center

    Moore, Kristin Anderson

    2008-01-01

    Although experimental studies are described as the "gold standard" for assessing the effectiveness of a program in changing outcomes, in some cases, quasi-experimental studies may be more feasible or appropriate. Many types of quasi-experimental studies are possible. For example, an implementation study can provide valuable information on whether,…

  16. Energetic studies and phase diagram of thioxanthene.

    PubMed

    Freitas, Vera L S; Monte, Manuel J S; Santos, Luís M N B F; Gomes, José R B; Ribeiro da Silva, Maria D M C

    2009-11-19

    The molecular stability of thioxanthene, a key species from which very important compounds with industrial relevance are derived, has been studied by a combination of several experimental techniques and computational approaches. The standard (p° = 0.1 MPa) molar enthalpy of formation of crystalline thioxanthene (117.4 ± 4.1 kJ·mol⁻¹) was determined from the experimental standard molar energy of combustion, in oxygen, measured by rotating-bomb combustion calorimetry at T = 298.15 K. The enthalpy of sublimation was determined by a direct method, using the vacuum drop microcalorimetric technique, and also by an indirect method, using a static apparatus, where the vapor pressures at different temperatures were measured. The latter technique was used for both crystalline and undercooled liquid samples, and the phase diagram of thioxanthene near the triple point was obtained (triple point coordinates T = 402.71 K and p = 144.7 Pa). From the two methods, a mean value for the standard (p° = 0.1 MPa) molar enthalpy of sublimation, at T = 298.15 K (101.3 ± 0.8 kJ·mol⁻¹), was derived. From the latter value and from the enthalpy of formation of the solid, the standard (p° = 0.1 MPa) enthalpy of formation of gaseous thioxanthene was calculated as 218.7 ± 4.2 kJ·mol⁻¹. Standard ab initio molecular orbital calculations were performed using the G3(MP2)//B3LYP composite procedure and several homodesmotic reactions in order to derive the standard molar enthalpy of formation of thioxanthene. The ab initio results are in excellent agreement with the experimental data.
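    The gas-phase value quoted above is simply the crystalline-phase enthalpy of formation plus the enthalpy of sublimation, with the uncertainties combined in quadrature. A quick check using the abstract's own numbers:

```python
import math

# Gas-phase formation enthalpy = solid-phase value + sublimation enthalpy,
# with uncertainties combined in quadrature (values from the abstract).
dHf_cr, u_cr = 117.4, 4.1   # kJ/mol, crystalline thioxanthene at 298.15 K
dHsub, u_sub = 101.3, 0.8   # kJ/mol, mean sublimation enthalpy at 298.15 K

dHf_g = dHf_cr + dHsub
u_g = math.sqrt(u_cr ** 2 + u_sub ** 2)
print(f"{dHf_g:.1f} ± {u_g:.1f} kJ/mol")  # -> 218.7 ± 4.2 kJ/mol
```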

  17. Recent Work in Hybrid Radiation Transport Methods with Applications to Commercial Nuclear Power

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kulesza, Joel A.

    This talk will begin with an overview of hybrid radiation transport methods followed by a discussion of the author’s work to advance current capabilities. The talk will then describe applications for these methods in commercial nuclear power reactor analyses and techniques for experimental validation. When discussing these analytical and experimental activities, the importance of technical standards such as those created and maintained by ASTM International will be demonstrated.

  18. AGARD standard aeroelastic configurations for dynamic response. Candidate configuration I.-wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1987-01-01

    To promote the evaluation of existing and emerging unsteady aerodynamic codes and methods for applying them to aeroelastic problems, especially for the transonic range, a limited number of aerodynamic configurations and experimental dynamic response data sets are to be designated by the AGARD Structures and Materials Panel as standards for comparison. This set is a sequel to that established several years ago for comparisons of calculated and measured aerodynamic pressures and forces. This report presents the information needed to perform flutter calculations for the first candidate standard configuration for dynamic response along with the related experimental flutter data.

  19. A comparative uncertainty study of the calibration of macrolide antibiotic reference standards using quantitative nuclear magnetic resonance and mass balance methods.

    PubMed

    Liu, Shu-Yu; Hu, Chang-Qin

    2007-10-17

    This study introduces the general method of quantitative nuclear magnetic resonance (qNMR) for the calibration of reference standards of macrolide antibiotics. Several qNMR experimental conditions were optimized, including the delay, an important quantification parameter. Three kinds of macrolide antibiotics were used to validate the accuracy of the qNMR method by comparison with results obtained by high performance liquid chromatography (HPLC). The purities of five common reference standards of macrolide antibiotics were measured by the ¹H qNMR method and the mass balance method, respectively, and the results of the two methods were compared. qNMR is quick and simple to use, and in new drug research and development it provides a reliable method for purity analysis of reference standards.
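    For context, purity by internal-standard ¹H qNMR follows a standard proportionality between signal integrals, proton counts, molar masses, and weighed masses. A generic sketch (the function name and toy numbers are illustrative, not taken from the paper):

```python
def qnmr_purity(i_smp, n_smp, m_smp, w_smp, i_std, n_std, m_std, w_std, p_std):
    """Purity by the internal-standard qNMR relation:

        P = (I_smp/I_std) * (N_std/N_smp) * (M_smp/M_std) * (w_std/w_smp) * P_std

    where I = integrated signal area, N = number of protons producing the
    signal, M = molar mass, w = weighed mass, and P = purity (fraction).
    """
    return (i_smp / i_std) * (n_std / n_smp) * (m_smp / m_std) \
        * (w_std / w_smp) * p_std

# Toy example: equal masses, molar masses, and proton counts; the sample
# integral is slightly smaller than the standard's, so purity < p_std.
print(round(qnmr_purity(1.00, 3, 150.0, 20.0, 1.02, 3, 150.0, 20.0, 0.999), 4))
```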

  20. Growth rate measurement in free jet experiments

    NASA Astrophysics Data System (ADS)

    Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent

    2017-07-01

    An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian one and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore the dynamics of viscoelastic liquid jets in the future.
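    The statistical analysis described above can be sketched as follows: compute the standard deviation of the jet radius across many profiles at each axial position, then fit the logarithm of that deviation against distance in the exponential-growth region. This is an illustrative reconstruction with synthetic profiles, not the authors' code:

```python
import numpy as np

def jet_growth_rate(z, radius_profiles):
    """Spatial growth rate of the capillary instability from a stack of
    jet radius profiles (one per frame, same experimental conditions):
    the standard deviation of the radius across profiles at each axial
    position z grows exponentially, so a linear fit of log(std) vs z
    yields the growth rate; exp(intercept) estimates the initial
    perturbation amplitude.
    """
    sigma = np.std(np.asarray(radius_profiles), axis=0)
    slope, intercept = np.polyfit(z, np.log(sigma), 1)
    return slope, np.exp(intercept)

# Synthetic check: perturbations grow as 0.01 * exp(2.0 * z), random phase
rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 50)
profiles = [1.0 + 0.01 * np.exp(2.0 * z)
            * np.sin(40 * z + rng.uniform(0, 2 * np.pi))
            for _ in range(200)]
rate, amp0 = jet_growth_rate(z, profiles)
print(round(rate, 1))
```

    Averaging over many profiles is what makes the extracted rate insensitive to the random phase of the perturbation in any single frame.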

  1. Quasi-experimental Studies in the Fields of Infection Control and Antibiotic Resistance, Ten Years Later: A Systematic Review.

    PubMed

    Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D

    2018-02-01

    OBJECTIVE: A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN: Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS: Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS: Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS: While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.

  2. A collaborative comparison of objective structured clinical examination (OSCE) standard setting methods at Australian medical schools.

    PubMed

    Malau-Aduli, Bunmi Sherifat; Teague, Peta-Ann; D'Souza, Karen; Heal, Clare; Turner, Richard; Garne, David L; van der Vleuten, Cees

    2017-12-01

    A key issue underpinning the usefulness of the OSCE assessment to medical education is standard setting, but the majority of standard-setting methods remain challenging for performance assessment because they produce varying passing marks. Several studies have compared standard-setting methods; however, most of these studies are limited by their experimental scope, or use data on examinee performance at a single OSCE station or from a single medical school. This collaborative study between 10 Australian medical schools investigated the effect of standard-setting methods on OSCE cut scores and failure rates. This research used 5256 examinee scores from seven shared OSCE stations to calculate cut scores and failure rates using two different compromise standard-setting methods, namely the Borderline Regression and Cohen's methods. The results of this study indicate that Cohen's method yields similar outcomes to the Borderline Regression method, particularly for large examinee cohort sizes. However, with lower examinee numbers on a station, the Borderline Regression method resulted in higher cut scores and larger difference margins in the failure rates. Cohen's method yields similar outcomes as the Borderline Regression method and its application for benchmarking purposes and in resource-limited settings is justifiable, particularly with large examinee numbers.
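    The Borderline Regression method compared above is straightforward to compute for a single station: regress examinees' checklist scores on their examiners' global ratings, then read off the predicted score at the "borderline" grade. A toy sketch (the rating scale and data are hypothetical, not the study's):

```python
import numpy as np

def borderline_regression_cut(global_ratings, checklist_scores,
                              borderline_grade=2):
    """Borderline Regression cut score for one OSCE station: fit a
    linear regression of checklist score on global rating, then return
    the predicted checklist score at the borderline grade."""
    slope, intercept = np.polyfit(global_ratings, checklist_scores, 1)
    return slope * borderline_grade + intercept

# Toy data: global ratings 1 (clear fail) .. 5 (excellent), borderline = 2
ratings = [1, 2, 2, 3, 3, 4, 4, 5, 5]
scores = [35, 48, 52, 60, 58, 70, 74, 85, 88]
print(round(borderline_regression_cut(ratings, scores), 1))
```

    Because the cut score is a fitted value, its stability depends on cohort size, which is consistent with the study's finding that the two methods diverge most at small examinee numbers.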

  3. The application of digital image analysis for blood typing: the comparison of anti-A and anti-B monoclonal antibodies activity with standard hemagglutinating sera

    NASA Astrophysics Data System (ADS)

    Medvedeva, Maria F.; Doubrovski, Valery A.

    2017-03-01

    The resolution of the acousto-optical method for blood typing was estimated experimentally using two types of reagents: monoclonal antibodies and standard hemagglutinating sera. A peculiarity of this work is the application of digital photo-image processing by pixel analysis, previously proposed by the authors. The influence on the resolution of the acousto-optical method of the reagent concentrations, of the blood sample under test, and of the duration of the ultrasonic action on the biological object was investigated. The optimal experimental conditions for maximum resolution were found, creating the prerequisites for reliable blood typing. The present paper is a further step in the development of the acousto-optical method for determining human blood groups.

  4. [Research progress on mechanical performance evaluation of artificial intervertebral disc].

    PubMed

    Li, Rui; Wang, Song; Liao, Zhenhua; Liu, Weiqiang

    2018-03-01

    The mechanical properties of an artificial intervertebral disc (AID) are related to the long-term reliability of the prosthesis. Three testing methods, based on different tools, are involved in the mechanical performance evaluation of AID: testing with a mechanical simulator, in vitro specimen testing, and finite element analysis. In this study, the testing standards, testing equipment, and materials for AID are first introduced. The review then focuses on the present status of AID static mechanical tests (static axial compression, static axial compression-shear), dynamic mechanical tests (dynamic axial compression, dynamic axial compression-shear), creep and stress-relaxation tests, device push-out tests, core push-out tests, subsidence tests, and related procedures. The experimental techniques of the in vitro specimen testing method and the test results for available artificial discs are summarized, as are the experimental methods and research status of finite element analysis. Finally, research trends in AID mechanical performance evaluation are forecast: the simulator, load, dynamic cycle, motion mode, specimen, and test standard will be important research fields in the future.

  5. A studentized permutation test for three-arm trials in the 'gold standard' design.

    PubMed

    Mütze, Tobias; Konietschke, Frank; Munk, Axel; Friede, Tim

    2017-03-15

    The 'gold standard' design for three-arm trials refers to trials with an active control and a placebo control in addition to the experimental treatment group. This trial design is recommended when ethically justifiable, as it allows the simultaneous comparison of experimental treatment, active control, and placebo. Parametric testing methods have been studied extensively in recent years; however, these methods often tend to be liberal or conservative when distributional assumptions are not met, particularly with small sample sizes. In this article, we introduce a studentized permutation test for testing non-inferiority and superiority of the experimental treatment compared with the active control in three-arm trials in the 'gold standard' design. The performance of the studentized permutation test for finite sample sizes is assessed in a Monte Carlo simulation study under various parameter constellations. Emphasis is put on whether the studentized permutation test meets the target significance level. For comparison purposes, commonly used Wald-type tests, which do not make any distributional assumptions, are included in the simulation study. The simulation study shows that for count data the presented studentized permutation test for assessing non-inferiority in three-arm trials in the 'gold standard' design outperforms its competitors, for instance the test based on a quasi-Poisson model. The methods discussed in this paper are implemented in the R package ThreeArmedTrials, which is available on the Comprehensive R Archive Network (CRAN). Copyright © 2016 John Wiley & Sons, Ltd.
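    As background, the studentization idea can be shown in a simple two-sample analogue (a sketch, not the authors' three-arm procedure): permute group labels but compare a Welch-type studentized statistic, which keeps the permutation test asymptotically valid even under unequal variances.

```python
import numpy as np

def studentized_perm_test(x, y, n_perm=10000, seed=0):
    """Two-sample studentized permutation test (Welch-type statistic).

    A simplified two-group analogue of the approach in the abstract: the
    test statistic is studentized by the per-group variances, so the
    permutation distribution remains asymptotically valid when the two
    groups have unequal variances. Returns a two-sided p-value.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    pooled = np.concatenate([x, y])
    n = len(x)

    def t_stat(a, b):
        return (a.mean() - b.mean()) / np.sqrt(
            a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))

    t_obs = t_stat(x, y)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(t_stat(perm[:n], perm[n:])) >= abs(t_obs):
            count += 1
    return (count + 1) / (n_perm + 1)

# Identical samples give t_obs = 0, so every permutation is at least as
# extreme and the two-sided p-value is exactly 1.0.
print(studentized_perm_test(list(range(10)), list(range(10)), n_perm=500))  # -> 1.0
```

    The "+1" correction in the returned p-value counts the observed labeling among the permutations, a standard device that avoids reporting p = 0.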

  6. Assessing Methods for Generalizing Experimental Impact Estimates to Target Populations

    ERIC Educational Resources Information Center

    Kern, Holger L.; Stuart, Elizabeth A.; Hill, Jennifer; Green, Donald P.

    2016-01-01

    Randomized experiments are considered the gold standard for causal inference because they can provide unbiased estimates of treatment effects for the experimental participants. However, researchers and policymakers are often interested in using a specific experiment to inform decisions about other target populations. In education research,…

  7. TECHNICAL MANUAL: A SURVEY OF EQUIPMENT AND METHODS FOR PARTICULATE SAMPLING IN INDUSTRIAL PROCESS STREAMS

    EPA Science Inventory

    The manual lists and describes the instruments and techniques that are available for measuring the concentration or size distribution of particles suspended in process streams. The standard, official, well established methods are described as well as some experimental methods and...

  8. Determination of antenna factors using a three-antenna method at open-field test site

    NASA Astrophysics Data System (ADS)

    Masuzawa, Hiroshi; Tejima, Teruo; Harima, Katsushige; Morikawa, Takao

    1992-09-01

    Recently, NIST has used the three-antenna method for calibration of the antenna factor of antennas used for EMI measurements. This method does not require the specially designed standard antennas that are necessary in the standard field method or the standard antenna method, and it can be used at an open-field test site. This paper theoretically and experimentally examines the measurement errors of the method and evaluates the precision of the antenna-factor calibration. It is found that the main source of error is the non-ideal propagation characteristics of the test site, which should therefore be measured before the calibration. The precision of the antenna-factor calibration at the test site used in these experiments is estimated to be 0.5 dB.
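    The algebra behind the three-antenna method is a small linear system: each pairwise measurement yields the sum of two antenna factors (in dB), so the three pairwise sums determine all three factors without any standard antenna. A sketch under the simplifying assumption that the site-attenuation terms have already been removed from the measured values:

```python
def three_antenna_factors(a12, a13, a23):
    """Return (AF1, AF2, AF3) in dB from the three pairwise sums
    AFi + AFj measured with antenna pairs (1,2), (1,3), (2,3).
    Site-attenuation terms are assumed already subtracted out.

    Solving  A12 = AF1 + AF2,  A13 = AF1 + AF3,  A23 = AF2 + AF3:
    """
    af1 = (a12 + a13 - a23) / 2.0
    af2 = (a12 + a23 - a13) / 2.0
    af3 = (a13 + a23 - a12) / 2.0
    return af1, af2, af3

print(three_antenna_factors(30.0, 32.0, 34.0))  # -> (14.0, 16.0, 18.0)
```

    Because the solution differences pairwise measurements, any systematic site error common to all three pairs partially cancels, which is why the site characteristics still need to be measured beforehand rather than assumed ideal.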

  9. Characterizing optical polishing pitch

    NASA Astrophysics Data System (ADS)

    Varshneya, Rupal; DeGroote, Jessica E.; Gregg, Leslie L.; Jacobs, Stephen D.

    2003-05-01

    Characterization data for five experimental optical polishing pitch products were compared to those for corresponding standard commercial optical polishing pitches. The experimental pitches were tested for three physical properties: hardness, viscosity at 90°C, and softening point. A Shore A Durometer test was used to measure hardness. Viscosity data were collected using a Stony Brook Scientific falling needle viscometer. Softening point was determined using the ASTM D3104-97 method. Results demonstrate that the softest and the hardest batches of the experimental grades of optical pitch are comparable to the industry-accepted standards, while the other grades of pitch are not. The experimental methodology followed in this research may allow opticians to rapidly compare different brands of pitch to help identify batch-to-batch differences and control pitch quality before use.

  11. An evaluation of objective rating methods for full-body finite element model comparison to PMHS tests.

    PubMed

    Vavalle, Nicholas A; Jelen, Benjamin C; Moreno, Daniel P; Stitzel, Joel D; Gayzik, F Scott

    2013-01-01

    Objective evaluation methods for time history signals are used to quantify how well simulated human body responses match experimental data. As the use of simulations grows in the field of biomechanics, there is a need to establish standard approaches for comparisons. This study has 2 aims. The first is to apply 3 objective evaluation methods found in the literature to a set of data from a human body finite element model. The second is to compare the results of each method, examining how they are correlated to each other and the relative strengths and weaknesses of the algorithms. In this study, the methods proposed by Sprague and Geers (magnitude and phase error, SGM and SGP), Rhule et al. (cumulative standard deviation, CSD), and Gehre et al. (CORrelation and Analysis, or CORA: size, phase, shape, corridor) were compared. A 40 kph frontal sled test presented by Shaw et al. was simulated using the Global Human Body Models Consortium midsized male full-body finite element model (v. 3.5). Mean and standard deviation experimental data (n = 5) from Shaw et al. were used as the benchmark. Simulated data were output from the model at the appropriate anatomical locations for kinematic comparison. Force data were output at the seat belts, seat pan, knee, and foot restraints. Objective comparisons from 53 time history data channels were compared to the experimental results. To compare the different methods, all objective comparison metrics were cross-plotted and linear regressions were calculated. The following ratings were found to be statistically significantly correlated (P < .01): SGM and CORA size, R² = 0.73; SGP and CORA shape, R² = 0.82; and CSD and CORA's corridor factor, R² = 0.59. Relative strengths of the correlated ratings were then investigated. For example, though correlated with CORA size, SGM carries a sign to indicate whether the simulated response is greater than or less than the benchmark signal. 
A further analysis of the advantages and drawbacks of each method is discussed. The results demonstrate that a single metric is insufficient to provide a complete assessment of how well the simulated results match the experiments. The CORA method provided the most comprehensive evaluation of the signal. Regardless of the method selected, one primary recommendation of this work is that for any comparison, the results should be reported to provide separate assessments of a signal's match to experimental variance, magnitude, phase, and shape. Future work planned includes implementing any forthcoming International Organization for Standardization standards for objective evaluations. Supplemental materials are available for this article. Go to the publisher's online edition of Traffic Injury Prevention to view the supplemental file.
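    The magnitude/phase decomposition discussed above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal Python version of the commonly published Sprague & Geers error factors, assuming two uniformly sampled signals of equal length (`meas` is the benchmark, `sim` the simulation):

    ```python
    import math

    def sprague_geers(meas, sim):
        """Sprague & Geers magnitude (SGM), phase (SGP), and combined error.

        meas, sim: equal-length sequences sampled at the same time points.
        Integrals are approximated by plain sums (constant time step cancels).
        """
        psi_mm = sum(m * m for m in meas)
        psi_ss = sum(s * s for s in sim)
        psi_ms = sum(m * s for m, s in zip(meas, sim))
        # Magnitude error: signed, so it indicates over- or under-prediction.
        sgm = math.sqrt(psi_ss / psi_mm) - 1.0
        # Phase error: insensitive to magnitude, bounded in [0, 1].
        cos_arg = max(-1.0, min(1.0, psi_ms / math.sqrt(psi_mm * psi_ss)))
        sgp = math.acos(cos_arg) / math.pi
        combined = math.sqrt(sgm ** 2 + sgp ** 2)
        return sgm, sgp, combined
    ```

    Doubling the simulated amplitude, for instance, gives SGM = 1.0 with SGP = 0, consistent with the sign convention noted in the abstract.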

  12. Efficacy of Group Based Learning in Learning Moral Value

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2008-01-01

    The present study examines the efficacy of Group Based Learning in cultivating moral values among students of Standard VIII. A parallel-group experimental method was adopted in the study. Eighty students (control group = 40 students + experimental group = 40 students) were selected as the sample for the study. A researcher-made achievement tool was…

  13. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  14. Quadruple therapy for eradication of Helicobacter pylori

    PubMed Central

    Ma, Hai-Jun; Wang, Jin-Liang

    2013-01-01

    AIM: To investigate quadruple therapy with rabeprazole, amoxicillin, levofloxacin and furazolidone for the eradication of Helicobacter pylori (H. pylori) infection. METHODS: A total of 147 patients were divided into the experimental treatment group (n = 78) and the standard triple treatment group (n = 69). The experimental treatment group received rabeprazole 20 mg, amoxicillin 1.0 g, levofloxacin 0.2 g and furazolidone 0.1 g, twice daily. The standard triple treatment group received omeprazole 20 mg, amoxicillin 1.0 g and clarithromycin 0.5 g, twice daily. RESULTS: One month after treatment, the 13C urea breath test was carried out to detect H. pylori. The eradication rate by per-protocol analysis was 94.3% in the experimental treatment group and 73% in the standard triple treatment group (P < 0.05); by intention-to-treat analysis, these figures were 86% and 67%, respectively. Side effects were observed in 34 patients, and included mild dizziness, nausea, diarrhea and increased bowel movement. Eleven of the 34 patients needed no treatment for their side effects. CONCLUSION: Rabeprazole, amoxicillin, levofloxacin and furazolidone quadruple therapy is a safe method for the eradication of H. pylori with high efficacy and good tolerability. PMID:23429422

  15. Parent-Child Interaction Therapy in a Community Setting: Examining Outcomes, Attrition, and Treatment Setting

    ERIC Educational Resources Information Center

    Lanier, Paul; Kohl, Patrica L.; Benz, Joan; Swinger, Dawn; Moussette, Pam; Drake, Brett

    2011-01-01

    Objectives: The purpose of this study was to evaluate Parent-Child Interaction Therapy (PCIT) deployed in a community setting comparing in-home with the standard office-based intervention. Child behavior, parent stress, parent functioning, and attrition were examined. Methods: Using a quasi-experimental design, standardized measures at three time…

  16. The Effect of Nutrient-Based Standards on Competitive Foods in 3 Schools: Potential Savings in Kilocalories and Grams of Fat

    ERIC Educational Resources Information Center

    Snelling, Anastasia M.; Yezek, Jennifer

    2012-01-01

    Background: The study investigated how nutrient standards affected the number of kilocalories and grams of fat and saturated fat in competitive foods offered and sold in 3 high schools. Methods: The study is a quasi-experimental design with 3 schools serving as the units of assignment and analysis. The effect of the nutrient standards was measured…

  17. Design standards for experimental and field studies to evaluate diagnostic accuracy of tests for infectious diseases in aquatic animals.

    PubMed

    Laurin, E; Thakur, K K; Gardner, I A; Hick, P; Moody, N J G; Crane, M S J; Ernst, I

    2018-05-01

    Design and reporting quality of diagnostic accuracy studies (DAS) are important metrics for assessing utility of tests used in animal and human health. Following standards for designing DAS will assist in appropriate test selection for specific testing purposes and minimize the risk of reporting biased sensitivity and specificity estimates. To examine the benefits of recommending standards, design information from published DAS literature was assessed for 10 finfish, seven mollusc, nine crustacean and two amphibian diseases listed in the 2017 OIE Manual of Diagnostic Tests for Aquatic Animals. Of the 56 DAS identified, 41 were based on field testing, eight on experimental challenge studies and seven on both. Also, we adapted human and terrestrial-animal standards and guidelines for DAS structure for use in aquatic animal diagnostic research. Through this process, we identified and addressed important metrics for consideration at the design phase: study purpose, targeted disease state, selection of appropriate samples and specimens, laboratory analytical methods, statistical methods and data interpretation. These recommended design standards for DAS are presented as a checklist including risk-of-failure points and actions to mitigate bias at each critical step. Adherence to standards when designing DAS will also facilitate future systematic review and meta-analyses of DAS research literature. © 2018 John Wiley & Sons Ltd.

  18. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting.

  19. Real-time determination of laser beam quality by modal decomposition.

    PubMed

    Schmidt, Oliver A; Schulze, Christian; Flamm, Daniel; Brüning, Robert; Kaiser, Thomas; Schröter, Siegmund; Duparré, Michael

    2011-03-28

    We present a real-time method to determine the beam propagation ratio M2 of laser beams. The all-optical measurement of modal amplitudes yields M2 parameters that conform to the ISO standard method. The experimental technique is simple and fast, allowing investigation of laser beams under conditions inaccessible to other methods.

  20. Design and Initial Characterization of the SC-200 Proteomics Standard Mixture

    PubMed Central

    Bauman, Andrew; Higdon, Roger; Rapson, Sean; Loiue, Brenton; Hogan, Jason; Stacy, Robin; Napuli, Alberto; Guo, Wenjin; van Voorhis, Wesley; Roach, Jared; Lu, Vincent; Landorf, Elizabeth; Stewart, Elizabeth; Kolker, Natali; Collart, Frank; Myler, Peter; van Belle, Gerald

    2011-01-01

    High-throughput (HTP) proteomics studies generate large amounts of data. Interpretation of these data requires effective approaches to distinguish noise from biological signal, particularly as instrument and computational capacity increase and studies become more complex. Resolving this issue requires validated and reproducible methods and models, which in turn requires complex experimental and computational standards. The absence of appropriate standards and data sets for validating experimental and computational workflows hinders the development of HTP proteomics methods. Most protein standards are simple mixtures of proteins or peptides, or undercharacterized reference standards in which the identity and concentration of the constituent proteins is unknown. The Seattle Children's 200 (SC-200) proposed proteomics standard mixture is the next step toward developing realistic, fully characterized HTP proteomics standards. The SC-200 exhibits a unique modular design to extend its functionality, and consists of 200 proteins of known identities and molar concentrations from 6 microbial genomes, distributed into 10 molar concentration tiers spanning a 1,000-fold range. We describe the SC-200's design, potential uses, and initial characterization. We identified 84% of SC-200 proteins with an LTQ-Orbitrap and 65% with an LTQ-Velos (false discovery rate = 1% for both). There were obvious trends in success rate, sequence coverage, and spectral counts with protein concentration; however, protein identification, sequence coverage, and spectral counts vary greatly within concentration levels. PMID:21250827

  2. The Peter Effect in Early Experimental Education Research

    ERIC Educational Resources Information Center

    Little, Joseph

    2003-01-01

    One of the signatures of scientific writing is its ability to present the claims of science as if they were "untouched by human hands." In the early years of experimental education, researchers achieved this by adopting a citational practice that led to the sedimentation of their cardinal method, the analysis of variance, and their standard for…

  3. Simulated BRDF based on measured surface topography of metal

    NASA Astrophysics Data System (ADS)

    Yang, Haiyue; Haist, Tobias; Gronle, Marc; Osten, Wolfgang

    2017-06-01

    The radiative reflective properties of a calibration-standard rough surface were simulated by ray tracing and the finite-difference time-domain (FDTD) method. The simulation results were used to compute the bidirectional reflectance distribution functions (BRDF) of metal surfaces and were compared with experimental measurements. The experimental and simulated results are in good agreement.

  4. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    PubMed

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling sites in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Gray correlation analysis was adopted to make a comprehensive evaluation by comparison with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded that of the standard method. Hence, the uniform sampling method was easy to operate, and the selected samples were distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.

  5. Effectiveness of Mind Mapping in English Teaching among VIII Standard Students

    ERIC Educational Resources Information Center

    Hallen, D.; Sangeetha, N.

    2015-01-01

    The aim of the study is to find out the effectiveness of mind mapping technique over conventional method in teaching English at high school level (VIII), in terms of Control and Experimental group. The sample of the study comprised, 60 VIII Standard students in Tiruchendur Taluk. Mind Maps and Achievement Test (Pretest & Posttest) were…

  6. A low noise and ultra-narrow bandwidth frequency-locked loop based on the beat method.

    PubMed

    Gao, Wei; Sui, Jianping; Chen, Zhiyong; Yu, Fang; Sheng, Rongwu

    2011-06-01

    A novel frequency-locked loop (FLL) based on the beat method is proposed in this paper. Compared with other frequency feedback loops, this FLL is a digital loop with simple structure and very low noise. As shown in the experimental results, this FLL can be used to reduce close-in phase noise on atomic frequency standards, through which a composite frequency standard with ultra-low phase noise and low cost can be easily realized.

  7. Abraham Trembley's strategy of generosity and the scope of celebrity in the mid-eighteenth century.

    PubMed

    Ratcliff, Marc J

    2004-12-01

    Historians of science have long believed that Abraham Trembley's celebrity and impact were attributable chiefly to the incredible regenerative phenomena demonstrated by the polyp, which he discovered in 1744, and to the new experimental method he devised to investigate them. This essay shows that experimental method alone cannot account for Trembley's success and influence; nor are the marvels of the polyp sufficient to explain its scientific and cultural impact. Experimental method was but one element in a new conception of the laboratory that called for both experimental and para-experimental skills whose public availability depended on a new style of communication. The strategy of generosity that led Trembley to dispatch polyps everywhere enabled experimental naturalist laboratories to spread throughout Europe, and the free circulation of living objects for scientific research led practitioners to establish an experimental field distinct from mechanical physics. Scholars reacted to the marvels of the polyp by strengthening the boundaries between the public and academic spheres and, in consequence, opened a space for new standards in both scientific work and the production of celebrity.

  8. The Performance of Methods to Test Upper-Level Mediation in the Presence of Nonnormal Data

    ERIC Educational Resources Information Center

    Pituch, Keenan A.; Stapleton, Laura M.

    2008-01-01

    A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed…

  9. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    PubMed

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or lack reliability. Additionally, most existing tests do not allow full control over important characteristics, such as off-angle rotations, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate these characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) luminance limitation of standard cathode ray tube (CRT) monitors has been preclusive. To this end, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created based on both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance on sine- and square-wave image sets across a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses in the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.

  10. Testing for intracycle determinism in pseudoperiodic time series.

    PubMed

    Coelho, Mara C S; Mendes, Eduardo M A M; Aguirre, Luis A

    2008-06-01

    A determinism test is proposed based on the well-known method of the surrogate data. Assuming predictability to be a signature of determinism, the proposed method checks for intracycle (e.g., short-term) determinism in the pseudoperiodic time series for which standard methods of surrogate analysis do not apply. The approach presented is composed of two steps. First, the data are preprocessed to reduce the effects of seasonal and trend components. Second, standard tests of surrogate analysis can then be used. The determinism test is applied to simulated and experimental pseudoperiodic time series and the results show the applicability of the proposed test.
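    The two-step idea (equate predictability with determinism, then benchmark against surrogates) can be sketched as follows. This is not the authors' algorithm: it is a minimal illustrative version that compares a nearest-neighbor one-step prediction error on the original series against shuffled surrogates, and the chaotic test series and shuffle-based surrogate scheme are stand-ins chosen for simplicity.

    ```python
    import random

    def one_step_error(x):
        """RMS error of a zeroth-order nearest-neighbor one-step prediction."""
        sq, n = 0.0, 0
        for t in range(50, len(x) - 1):
            # Nearest past neighbor in value; predict its successor.
            j = min(range(t), key=lambda k: abs(x[k] - x[t]))
            sq += (x[j + 1] - x[t + 1]) ** 2
            n += 1
        return (sq / n) ** 0.5

    def determinism_test(x, n_surrogates=19, seed=0):
        """Predictability as determinism: original error vs. shuffled surrogates."""
        rng = random.Random(seed)
        e0 = one_step_error(x)
        surrogate_errors = []
        for _ in range(n_surrogates):
            s = x[:]
            rng.shuffle(s)  # destroys temporal structure, keeps the amplitude distribution
            surrogate_errors.append(one_step_error(s))
        # Determinism is suggested if the original beats every surrogate.
        return e0 < min(surrogate_errors), e0, surrogate_errors
    ```

    On a chaotic series such as the logistic map, the original prediction error falls far below every surrogate's, so the null hypothesis of no short-term determinism is rejected.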

  11. High Performance Liquid Chromatography of Vitamin A: A Quantitative Determination.

    ERIC Educational Resources Information Center

    Bohman, Ove; And Others

    1982-01-01

    Experimental procedures are provided for the quantitative determination of Vitamin A (retinol) in food products by analytical liquid chromatography. Standard addition and calibration curve extraction methods are outlined. (SK)

  12. Standard plane localization in ultrasound by radial component model and selective search.

    PubMed

    Ni, Dong; Yang, Xin; Chen, Xin; Chin, Chien-Ting; Chen, Siping; Heng, Pheng Ann; Li, Shengli; Qin, Jing; Wang, Tianfu

    2014-11-01

    Acquisition of the standard plane is crucial for medical ultrasound diagnosis. However, this process requires substantial experience and a thorough knowledge of human anatomy. Therefore, it is very challenging for novices and even time consuming for experienced examiners. We proposed a hierarchical, supervised learning framework for automatically detecting the standard plane from consecutive 2-D ultrasound images. We tested this technique by developing a system that localizes the fetal abdominal standard plane from ultrasound video by detecting three key anatomical structures: the stomach bubble, umbilical vein and spine. We first proposed a novel radial component-based model to describe the geometric constraints of these key anatomical structures. We then introduced a novel selective search method which exploits the vessel probability algorithm to produce probable locations for the spine and umbilical vein. Next, using component classifiers trained by random forests, we detected the key anatomical structures at their probable locations within the regions constrained by the radial component-based model. Finally, a second-level classifier combined the results from the component detection to identify an ultrasound image as either a "fetal abdominal standard plane" or a "non-fetal abdominal standard plane." Experimental results on 223 fetal abdomen videos showed that the detection accuracy of our method was as high as 85.6% and significantly outperformed both the full abdomen and the separate anatomy detection methods without geometric constraints. The experimental results demonstrated that our system shows great promise for application to clinical practice. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  13. A Comparison of Experimental Functional Analysis and the Questions about Behavioral Function (QABF) in the Assessment of Challenging Behavior of Individuals with Autism

    ERIC Educational Resources Information Center

    Healy, Olive; Brett, Denise; Leader, Geraldine

    2013-01-01

    We compared two functional behavioral assessment methods: the Questions About Behavioral Function (QABF; a standardized test) and experimental functional analysis (EFA) to identify behavioral functions of aggressive/destructive behavior, self-injurious behavior and stereotypy in 32 people diagnosed with autism. Both assessments found that self…

  14. Review of research designs and statistical methods employed in dental postgraduate dissertations.

    PubMed

    Shirahatti, Ravi V; Hegde-Shetiya, Sahana

    2015-01-01

    There is a need to evaluate the quality of postgraduate dissertations of dentistry submitted to university in the light of the international standards of reporting. We conducted the review with an objective to document the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, appropriate use of data presentation in postgraduate dental research and suggest and recommend modifications. The public access database of the dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed based on international reporting guidelines such as strengthening the reporting of observational studies in epidemiology (STROBE), consolidated standards of reporting trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used for describing the data. The "in vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines conservative dentistry (92%) and prosthodontics (75%) reported high numbers of in vitro research. Disciplines oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), not mentioning about blinding, confounding variables and calibrations in measurements, misrepresenting the data by inappropriate data presentation, errors in reporting probability values and not reporting confidence intervals. Few studies showed grossly inappropriate choice of statistical tests and many studies needed additional tests. 
Overall observations indicated the need to comply with standard guidelines of reporting research.

  15. Comparison of competing segmentation standards for X-ray computed topographic imaging using Lattice Boltzmann techniques

    NASA Astrophysics Data System (ADS)

    Larsen, J. D.; Schaap, M. G.

    2013-12-01

    Recent advances in computing technology and experimental techniques have made it possible to observe and characterize fluid dynamics at the micro-scale. Many computational methods exist that can adequately simulate fluid flow in porous media. Lattice Boltzmann methods provide the distinct advantage of tracking particles at the microscopic level and returning macroscopic observations. While experimental methods can accurately measure macroscopic fluid dynamics, computational efforts can be used to predict and gain insight into fluid dynamics by utilizing thin sections or computed micro-tomography (CMT) images of core sections. Although substantial efforts have been made to advance non-invasive imaging methods such as CMT, fluid dynamics simulations, and microscale analysis, a true three-dimensional image segmentation technique was not developed until recently. Many competing segmentation techniques are used in industry and research settings, with varying results. In this study, the lattice Boltzmann method is used to simulate Stokes flow in a macroporous soil column. Two-dimensional CMT images were used to reconstruct a three-dimensional representation of the original sample. Six competing segmentation standards were used to binarize the CMT volumes, providing the distinction between solid phase and pore space. The permeability of the reconstructed samples was calculated, with Darcy's law, from lattice Boltzmann simulations of fluid flow in the samples. We compare simulated permeability from the differing segmentation algorithms to experimental findings.
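    The permeability step mentioned above is Darcy's law rearranged for k. A minimal sketch, with illustrative values rather than the study's data, assuming a known dynamic viscosity, the mean simulated (superficial) velocity, the sample length, and the applied pressure drop:

    ```python
    def darcy_permeability(mean_velocity, viscosity, length, pressure_drop):
        """Intrinsic permeability k (m^2) from Darcy's law: q = (k / mu) * (dP / L).

        mean_velocity: superficial (Darcy) velocity q through the sample, m/s
        viscosity:     dynamic viscosity mu, Pa*s
        length:        sample length along the flow direction, m
        pressure_drop: pressure difference across the sample, Pa
        """
        return mean_velocity * viscosity * length / pressure_drop

    # Illustrative numbers only: water-like viscosity, a 1 cm sample.
    k = darcy_permeability(mean_velocity=1e-4, viscosity=1e-3,
                           length=0.01, pressure_drop=100.0)
    ```

    With these numbers k evaluates to 1e-11 m² (roughly 10 darcy); in a lattice Boltzmann workflow the same relation is applied after converting lattice units to physical units.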

  16. Mission Systems Open Architecture Science and Technology (MOAST) program

    NASA Astrophysics Data System (ADS)

    Littlejohn, Kenneth; Rajabian-Schwart, Vahid; Kovach, Nicholas; Satterthwaite, Charles P.

    2017-04-01

    The Mission Systems Open Architecture Science and Technology (MOAST) program is an AFRL effort that is developing and demonstrating Open System Architecture (OSA) component prototypes, along with methods and tools, to strategically evolve current OSA standards and technical approaches, promote affordable capability evolution, reduce integration risk, and address emerging challenges [1]. Within the context of open architectures, the program is conducting advanced research and concept development in the following areas: (1) Evolution of standards; (2) Cyber-Resiliency; (3) Emerging Concepts and Technologies; (4) Risk Reduction Studies and Experimentation; and (5) Advanced Technology Demonstrations. Current research includes the development of methods, tools, and techniques to characterize the performance of OMS data interconnection methods for representative mission system applications; of particular interest are the OMS Critical Abstraction Layer (CAL), the Avionics Service Bus (ASB), and the Bulk Data Transfer interconnects. The program also aims to develop and demonstrate cybersecurity countermeasure techniques to detect and mitigate cyberattacks against open-architecture-based mission systems and to ensure continued mission operations. Focus is on cybersecurity techniques that augment traditional cybersecurity controls and those currently defined within the Open Mission Systems and UCI standards. AFRL is also developing code generation and simulation tools to support evaluation and experimentation of OSA-compliant implementations.

  17. Primary standardization of 57Co.

    PubMed

    Koskinas, Marina F; Moreira, Denise S; Yamazaki, Ione M; de Toledo, Fábio; Brancaccio, Franco; Dias, Mauro S

    2010-01-01

    This work describes the method developed by the Nuclear Metrology Laboratory (LMN) at IPEN, São Paulo, Brazil, for the standardization of a 57Co radioactive solution. Cobalt-57 is a radionuclide used for calibrating gamma-ray and X-ray spectrometers, as well as a gamma reference source for dose calibrators used in nuclear medicine services. Two 4πβ-γ coincidence systems were used to perform the standardization: the first used a 4π(PC) counter coupled to a pair of 76 mm × 76 mm NaI(Tl) scintillators for detecting gamma-rays; the other used an HPGe spectrometer for gamma detection. The measurements were performed by selecting a gamma-ray window comprising the (122 keV + 136 keV) total absorption energy peaks in the NaI(Tl) and selecting the total absorption peak of 122 keV in the germanium detector. The electronic system used the TAC method developed at the LMN for registering the observed events. The methodology recently developed by the LMN for simulating all detection processes in a 4πβ-γ coincidence system, by means of the Monte Carlo technique, was applied, and the behavior of the extrapolation curve was compared to experimental data. The final activity obtained by the Monte Carlo calculation agrees with the experimental results within the experimental uncertainty. Copyright 2009 Elsevier Ltd. All rights reserved.

  18. Advantages of modified osteosynthesis in treatment of osteoporotic long bones fractures--experimental model.

    PubMed

    Sisljagić, Vladimir; Jovanović, Savo; Mrcela, Tomislav; Radić, Radivoje; Belovari, Tatjana

    2009-12-01

    In surgery of fractured long bones, a patient suffering from osteoporosis represents a constant challenge to the surgeon and to the applied material and instruments, which need to destroy as little as possible of an already damaged bone. One potential way of increasing the contact surface between the implant and osteoporotic bone is injection of bone cement (methyl methacrylate, Palacos) into a prepared screw bed. This method of osteosynthesis was therefore subjected to experimental research to test whether modified osteosynthesis using bone cement in the treatment of fractures in osteoporotic patients has an advantage over the standard method of osteosynthesis, namely whether the modified method enables significantly greater firmness and stability of the osteosynthesis, which is the essential precondition of successful fracture healing. The research was carried out on six macerated cadaveric preparations of the shin bone from the osteological collection of the Institute of Anatomy, School of Medicine, University "J. J. Strossmayer". All samples of long bones were artificially broken in the middle part of the diaphysis, and then standard osteosynthesis and modified osteosynthesis with screws filled with bone cement were performed on the samples. Results show that under identical static action of the moment of torsion, the torsion angle deviation is lower in the modified osteosynthesis than in the standard osteosynthesis. In the modified osteosynthesis with bone cement, a torsion angle deviation greater than 0.2 degrees was first noticed after 120 minutes, while in the standard method of osteosynthesis it was noticed already in the first minute.

  19. A novel sorbent based on carbon nanotube/amino-functionalized sol-gel for the headspace solid-phase microextraction of α-bisabolol from medicinal plant samples using experimental design.

    PubMed

    Yarazavi, Mina; Noroozian, Ebrahim

    2018-02-13

    A novel sol-gel coating on a stainless-steel fiber was developed for the first time for the headspace solid-phase microextraction and determination of α-bisabolol by gas chromatography with flame ionization detection. The parameters influencing the efficiency of the solid-phase microextraction process, such as extraction time and temperature, pH, and ionic strength, were optimized by the experimental design method. Under optimized conditions, the linear range was between 0.0027 and 100 μg/mL. The relative standard deviations determined at the 0.01 and 1.0 μg/mL concentration levels (n = 3), respectively, were as follows: intraday, 3.4 and 3.3%; interday, 5.0 and 4.3%; and fiber-to-fiber, 6.0 and 3.5%. The relative recovery values were 90.3 and 101.4% at the 0.01 and 1.0 μg/mL spiking levels, respectively. The proposed method was successfully applied to various real samples containing α-bisabolol. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Defining standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to mastitis pathogens.

    PubMed

    Schukken, Y H; Rauch, B J; Morelli, J

    2013-04-01

    The objective of this paper was to define standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to both Staphylococcus aureus and Streptococcus agalactiae. The standardized protocols describe the selection of cows and herds and define the critical points in performing experimental exposure, performing bacterial culture, evaluating the culture results, and finally performing statistical analyses and reporting of the results. The protocols define both negative control and positive control trials. For negative control trials, the protocol states that an efficacy of reducing new intramammary infections (IMI) of at least 40% is required for a teat disinfectant to be considered effective. For positive control trials, noninferiority to a control disinfectant with a published efficacy of reducing new IMI of at least 70% is required. Sample sizes for both negative and positive control trials are calculated. Positive control trials are expected to require a large trial size. Statistical analysis methods are defined and, in the proposed methods, the rate of IMI may be analyzed using generalized linear mixed models. The efficacy of the test product can be evaluated while controlling for important covariates and confounders in the trial. Finally, standards for reporting are defined and reporting considerations are discussed. The use of the defined protocol is shown through presentation of the results of a recent trial of a test product against a negative control. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  1. Method and platform standardization in MRM-based quantitative plasma proteomics.

    PubMed

    Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Jackson, Angela M; Domanski, Dominik; Burkhart, Julia; Sickmann, Albert; Borchers, Christoph H

    2013-12-16

    There exists a growing demand in the proteomics community to standardize experimental methods and liquid chromatography-mass spectrometry (LC/MS) platforms in order to enable the acquisition of more precise and accurate quantitative data. This necessity is heightened by the evolving trend of verifying and validating candidate disease biomarkers in complex biofluids, such as blood plasma, through targeted multiple reaction monitoring (MRM)-based approaches with stable isotope-labeled standards (SIS). Considering the lack of performance standards for quantitative plasma proteomics, we previously developed two reference kits to evaluate the MRM with SIS peptide approach using undepleted and non-enriched human plasma. The first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). Here, these kits have been refined for practical use and then evaluated through intra- and inter-laboratory testing on 6 common LC/MS platforms. For an identical panel of 22 plasma proteins, similar concentrations were determined, regardless of the kit, instrument platform, and laboratory of analysis. These results demonstrate the value of the kit and reinforce the utility of standardized methods and protocols. The proteomics community needs standardized experimental protocols and quality control methods in order to improve the reproducibility of MS-based quantitative data. This need is heightened by the evolving trend for MRM-based validation of proposed disease biomarkers in complex biofluids such as blood plasma. We have developed two kits to assist in the inter- and intra-laboratory quality control of MRM experiments: the first kit tests the effectiveness of the LC/MRM-MS platform (kit #1), while the second evaluates the performance of an entire analytical workflow (kit #2). In this paper, we report the use of these kits in intra- and inter-laboratory testing on 6 common LC/MS platforms. 
This article is part of a Special Issue entitled: Standardization and Quality Control in Proteomics. © 2013.

  2. Contact resistance extraction methods for short- and long-channel carbon nanotube field-effect transistors

    NASA Astrophysics Data System (ADS)

    Pacheco-Sanchez, Anibal; Claus, Martin; Mothes, Sven; Schröter, Michael

    2016-11-01

    Three different methods for the extraction of the contact resistance, based on the well-known transfer length method (TLM) and on two variants of the Y-function method, have been applied to simulation and experimental data of short- and long-channel CNTFETs. While TLM requires special CNT test structures, standard electrical device characteristics are sufficient for the Y-function methods. The methods have been applied to CNTFETs with low and high channel resistance. It turned out that the standard Y-function method fails to deliver the correct contact resistance when the channel resistance is high relative to the contact resistances. A physics-based validation of these methods is also given by applying traditional Si MOSFET theory to quasi-ballistic CNTFETs.
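    The TLM step can be sketched in a few lines: total device resistance measured at several channel lengths is fitted with a line, and the zero-length intercept gives twice the contact resistance. The resistance values below are hypothetical, not data from the record above.

```python
import numpy as np

# Transfer length method (TLM) sketch: R_total = 2*Rc + r_channel * L,
# so a linear fit of R_total versus channel length L yields the contact
# resistance from the intercept. Data points are hypothetical.
lengths_um = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # channel lengths
r_total_kohm = np.array([32.0, 39.0, 53.0, 81.0, 137.0])   # measured R_total

slope, intercept = np.polyfit(lengths_um, r_total_kohm, 1)
r_contact = intercept / 2.0   # intercept at L = 0 equals 2*Rc
print(f"R_c ~ {r_contact:.1f} kOhm, channel ~ {slope:.1f} kOhm/um")
```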

  3. Theoretical and experimental prediction of the redox potentials of metallocene compounds

    NASA Astrophysics Data System (ADS)

    Li, Ya-Ping; Liu, Hai-Bo; Liu, Tao; Yu, Zhang-Yu

    2017-11-01

    The standard redox electrode potential (E°) values of metallocene compounds are obtained theoretically with the density functional theory (DFT) method at the B3LYP/6-311++G(d,p) level and experimentally with cyclic voltammetry (CV). The theoretical E° values of the metallocene compounds are in good agreement with the experimental ones. We investigate the substituent effects on the redox properties of metallocene compounds. Among the four metallocene compounds, the E° value is largest for titanocene dichloride and smallest for ferrocene.

  4. Development of a Contact Permeation Test Fixture and Method

    DTIC Science & Technology

    2013-04-01

    direct contact with the skin, indicates the need for a quantitative contact test method. Comparison tests were conducted with VX on a standardized...Guide for the Care and Use of Laboratory Animals (8th ed.; National Research Council: Washington, DC, 2011). This test was also performed in...1 1.2 Development of a Contact-Based Permeation Test Method ........................................ 1 2. EXPERIMENTAL PROCEDURES

  5. The role of explicit and implicit standards in visual speed discrimination.

    PubMed

    Norman, J Farley; Pattison, Kristina F; Norman, Hideko F; Craft, Amy E; Wiesemann, Elizabeth Y; Taylor, M Jett

    2008-01-01

    Five experiments were designed to investigate visual speed discrimination. Variations of the method of constant stimuli were used to obtain speed discrimination thresholds in experiments 1, 2, 4, and 5, while the method of single stimuli was used in experiment 3. The observers' thresholds were significantly influenced by the choice of psychophysical method and by changes in the standard speed. The observers' judgments were unaffected, however, by changes in the magnitude of random variations in stimulus duration, reinforcing the conclusions of Lappin et al (1975 Journal of Experimental Psychology: Human Perception and Performance 1 383-394). When an implicit standard was used, the observers produced relatively low discrimination thresholds (7.0% of the standard speed), verifying the results of McKee (1981 Vision Research 21 491-500). When an explicit standard was used in a 2AFC variant of the method of constant stimuli, however, the observers' discrimination thresholds increased by 74% (to 12.2%), resembling the high thresholds obtained by Mandriota et al (1962 Science 138 437-438). A subsequent signal-detection analysis revealed that the observers' actual sensitivities to differences in speed were in fact equivalent for both psychophysical methods. The formation of an implicit standard in the method of single stimuli allows human observers to make judgments of speed that are as precise as those obtained when explicit standards are available.

  6. Evaluating the interaction of a tracheobronchial stent in an ovine in-vivo model.

    PubMed

    McGrath, Donnacha J; Thiebes, Anja Lena; Cornelissen, Christian G; O'Brien, Barry; Jockenhoevel, Stefan; Bruzzi, Mark; McHugh, Peter E

    2018-04-01

    Tracheobronchial stents are used to restore patency to stenosed airways. However, these devices are associated with many complications such as stent migration, granulation tissue formation, mucous plugging and stent strut fracture. Of these, granulation tissue formation is the complication that most frequently requires costly secondary interventions. In this study a biomechanical lung modelling framework recently developed by the authors to capture the lung in-vivo stress state under physiological loading is employed in conjunction with ovine pre-clinical stenting results and device experimental data to evaluate the effect of stent interaction on granulation tissue formation. Stenting is simulated using a validated model of a prototype covered laser-cut tracheobronchial stent in a semi-specific biomechanical lung model, and physiological loading is performed. Two computational methods are then used to predict possible granulation tissue formation: the standard method which utilises the increase in maximum principal stress change, and a newly proposed method which compares the change in contact pressure over a respiratory cycle. These computational predictions of granulation tissue formation are then compared to pre-clinical stenting observations after a 6-week implantation period. Experimental results of the pre-clinical stent implantation showed signs of granulation tissue formation both proximally and distally, with a greater proximal reaction. The standard method failed to show a correlation with the experimental results. However, the contact change method showed an apparent correlation with granulation tissue formation. These results suggest that this new method could be used as a tool to improve future device designs.

  7. High-throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    PubMed Central

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.; Lupoi, Jason S.; Doepkke, Crissa; Tucker, Melvin P.; Schuster, Logan A.; Mazza, Kimberly; Himmel, Michael E.; Davis, Mark F.; Gjersing, Erica

    2015-01-01

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables. PMID:26437006

  8. A straightforward experimental method to evaluate the Lamb-Mössbauer factor of a 57Co/Rh source

    NASA Astrophysics Data System (ADS)

    Spina, G.; Lantieri, M.

    2014-01-01

    In analyzing Mössbauer spectra by means of the integral transmission function, a correct evaluation of the recoilless factor fs of the source at the position of the sample is needed. A novel method to evaluate fs for a 57Co source is proposed. The method uses the standard transmission experimental setup and does not require measurements beyond those that are mandatory for centering the Mössbauer line and calibrating the Mössbauer transducer. First, the background counts are evaluated by collecting a standard Multi Channel Scaling (MCS) spectrum of a thick metal iron foil absorber and two Pulse Height Analysis (PHA) spectra with the same live time, setting the maximum velocity of the transducer at the same value as in the MCS spectrum. Second, fs is evaluated by fitting the collected MCS spectrum through the integral transmission approach. A test of the suitability of the technique is also presented.

  9. High-Throughput Screening of Recalcitrance Variations in Lignocellulosic Biomass: Total Lignin, Lignin Monomers, and Enzymatic Sugar Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Decker, Stephen R.; Sykes, Robert W.; Turner, Geoffrey B.

    The conversion of lignocellulosic biomass to fuels, chemicals, and other commodities has been explored as one possible pathway toward reductions in the use of non-renewable energy sources. In order to identify which plants, out of a diverse pool, have the desired chemical traits for downstream applications, attributes, such as cellulose and lignin content, or monomeric sugar release following an enzymatic saccharification, must be compared. The experimental and data analysis protocols of the standard methods of analysis can be time-consuming, thereby limiting the number of samples that can be measured. High-throughput (HTP) methods alleviate the shortcomings of the standard methods, and permit the rapid screening of available samples to isolate those possessing the desired traits. This study illustrates the HTP sugar release and pyrolysis-molecular beam mass spectrometry pipelines employed at the National Renewable Energy Lab. These pipelines have enabled the efficient assessment of thousands of plants while decreasing experimental time and costs through reductions in labor and consumables.

  10. Embellishment of Student Leadership in Learning Multiplication at Primary Level

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2006-01-01

    The present study enlightens the efficacy of Student Leadership method in learning Multiplication in Mathematics at primary level. Single group experimental method was adopted for the study. Forty learners studying in Standard III in Panchayat union primary School, Muthupettai in South Tamil Nadu, India have been selected as sample for the study.…

  11. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

    Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
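    For context, a minimal scale-invariant fit can be contrasted with ordinary least squares using geometric-mean (reduced major axis) regression, whose slope is sign(r)·sd(y)/sd(x). This is a generic illustration of scale invariance, not the triangular method of the record above, and the data are made up.

```python
import numpy as np

# Geometric-mean (reduced major axis) regression: a simple fit that treats
# x and y symmetrically, unlike ordinary least squares of y on x.
def gmr(x, y):
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * np.std(y) / np.std(x)
    return slope, np.mean(y) - slope * np.mean(x)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.3, 2.8, 4.2, 4.9])

slope, intercept = gmr(x, y)
# Scale invariance: rescaling y rescales the slope by exactly that factor.
slope10, _ = gmr(x, 10 * y)
print(round(slope10 / slope, 6))  # -> 10.0
```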

  12. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets.

    PubMed

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-07-17

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation.
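    The underlying idea, cancelling the field gradient at chosen points by superposing ring sources, can be illustrated with the textbook on-axis field of current loops. A Helmholtz-like pair is a minimal sketch under arbitrary units, not the permanent-magnet configuration of the paper.

```python
import numpy as np

# On-axis field of a circular loop of radius R centered at z0 (arbitrary
# units), used to show how ring sources combine to flatten the field.
def loop_bz(z, radius, z0, strength=1.0):
    return strength * radius**2 / (radius**2 + (z - z0) ** 2) ** 1.5

z = np.linspace(-0.5, 0.5, 2001)
# Helmholtz-like pair: two identical loops spaced by one radius make the
# field gradient vanish midway between them.
bz = loop_bz(z, 1.0, -0.5) + loop_bz(z, 1.0, 0.5)
grad = np.gradient(bz, z)
print(abs(grad[1000]) < 1e-9)  # gradient vanishes at the midpoint -> True
```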

  13. Using an Experimental Evaluation of Charter Schools to Test Whether Nonexperimental Comparison Group Methods Can Replicate Experimental Impact Estimates. NCEE 2012-4019

    ERIC Educational Resources Information Center

    Fortson, Kenneth; Verbitsky-Savitz, Natalya; Kopa, Emma; Gleason, Philip

    2012-01-01

    Randomized controlled trials (RCTs) are widely considered to be the gold standard in evaluating the impacts of a social program. When an RCT is infeasible, researchers often estimate program impacts by comparing outcomes of program participants with those of a nonexperimental comparison group, adjusting for observable differences between the two…

  14. An Experimental Study of the Effect of Judges' Knowledge of Item Data on Two Forms of the Angoff Standard Setting Method.

    ERIC Educational Resources Information Center

    Garrido, Mariquita; Payne, David A.

    Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data (n=20), and without data available (n=19); and the Modified Angoff method with (n=19), and without (n=19) item data available to judges. The Angoff method required free response percentage estimates (0-100) percent,…

  15. Mutation-based learning to improve student autonomy and scientific inquiry skills in a large genetics laboratory course.

    PubMed

    Wu, Jinlu

    2013-01-01

    Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a "mutation" method in a molecular genetics laboratory course. Students could choose to delete, add, reverse, or replace certain steps of the standard protocol to explore questions of interest to them in a given experimental scenario. They wrote experimental proposals to address their rationales and hypotheses for the "mutations"; conducted experiments in parallel, according to both standard and mutated protocols; and then compared and analyzed results to write individual lab reports. Various autonomy-supportive measures were provided in the entire experimental process. Analyses of student work and feedback suggest that students using the MBL approach 1) spend more time discussing experiments, 2) use more scientific inquiry skills, and 3) find the increased autonomy afforded by MBL more enjoyable than do students following regimented instructions in a conventional "cookbook"-style laboratory. Furthermore, the MBL approach does not incur an obvious increase in labor and financial costs, which makes it feasible for easy adaptation and implementation in a large class.

  16. Influence of undergraduate nursing student teaching methods on learning standard precautions and transmission-based precautions: Experimental research.

    PubMed

    Kappes Ramirez, Maria Soledad

    2018-02-01

    An experimental study was performed with undergraduate nursing students in order to determine, between two methodologies, which is the best for learning standard precautions and precautions based on disease transmission mechanisms. Students in the sample are stratified by performance, with the experimental group (49 students) being exposed to self-instruction and clinical simulation on the topic of standard precautions and special precautions according to disease transmission mechanisms. Conventional classes on the same topics were provided to the control group (49 students). The experimental group showed the best performance in the multiple-choice post-test of knowledge (p=0.002) and in the assessment of essay questions (p=0.043), as well as in the evaluation of a simulated scenario, in relation to the control group. This study demonstrates that it is possible to transfer some teaching subjects on the prevention of Healthcare Associated Infections (HAIs) to self-learning by means of virtual teaching strategies with good results. This allows greater efficiency in the allocation of teachers to clinical simulation or learning situations in the laboratory, where students can apply what they have learned in the self-instruction module. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Implementation of a standardized out-of-hospital management method for Parkinson dysphagia.

    PubMed

    Wei, Hongying; Sun, Dongxiu; Liu, Meiping

    2017-12-01

    Our objective is to explore the effectiveness and feasibility of establishing a swallowing management clinic to implement out-of-hospital management for Parkinson disease (PD) patients with dysphagia. Two hundred seventeen (217) voluntary PD patients with dysphagia in a PD outpatient clinic were divided into a control group of 100 patients and an experimental group of 117 patients. The control group was given dysphagia rehabilitation guidance. The experimental group received the standardized out-of-hospital management method, comprising overall management together with information and education materials. Rehabilitation efficiency and the incidence rates of dysphagia and its relevant complications in the two groups were compared after a 6-month intervention. The differences between the standardized out-of-hospital management group and the control group were statistically significant (p<0.01). Establishing a swallowing management protocol for the outpatient setting can effectively help the recovery of swallowing function, reduce the incidence rate of dysphagia complications, and improve the quality of life of patients with PD.

  18. Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates

    NASA Astrophysics Data System (ADS)

    Rodionov, Andrey

    An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. 
Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
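    The last step above, estimating the standard deviation of the state fidelity over initial states by Monte Carlo, can be sketched for a toy single-qubit case. The coherent phase error and sample count below are illustrative assumptions, not the experimental gates of the record.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo estimate of the mean and standard deviation of the state
# fidelity over Haar-random pure input states, for a single-qubit gate
# with a small coherent phase error (ideal gate taken as the identity).
theta = 0.2                                   # hypothetical over-rotation
u_err = np.diag([1.0, np.exp(1j * theta)])    # error unitary

fids = []
for _ in range(20000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)                    # Haar-random |psi>
    amp = v.conj() @ (u_err @ v)              # <psi| U_err |psi>
    fids.append(abs(amp) ** 2)                # state fidelity

mean_f, std_f = np.mean(fids), np.std(fids)
# Analytic average fidelity for a unitary error: (|Tr U|^2 + d) / (d(d+1)).
print(round(mean_f, 3), std_f > 0)  # -> 0.993 True
```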

  19. Results of the CCRI(II)-S12.H-3 supplementary comparison: Comparison of methods for the calculation of the activity and standard uncertainty of a tritiated-water source measured using the LSC-TDCR method.

    PubMed

    Cassette, Philippe; Altzitzoglou, Timotheos; Antohe, Andrei; Rossi, Mario; Arinc, Arzu; Capogni, Marco; Galea, Raphael; Gudelis, Arunas; Kossert, Karsten; Lee, K B; Liang, Juncheng; Nedjadi, Youcef; Oropesa Verdecia, Pilar; Shilnikova, Tanya; van Wyngaardt, Winifred; Ziemek, Tomasz; Zimmerman, Brian

    2018-04-01

    A comparison of calculations of the activity of a 3H2O (tritiated-water) liquid scintillation source, using the same experimental data set collected at the LNE-LNHB with a triple-to-double coincidence ratio (TDCR) counter, was completed. A total of 17 laboratories calculated the activity and standard uncertainty of the LS source using the files with experimental data provided by the LNE-LNHB. The results, as well as relevant information on the computation techniques, are presented and analysed in this paper. All results are compatible, even though there is a significant dispersion among the reported uncertainties. An output of this comparison is an estimate of the dispersion of TDCR measurement results when measurement conditions are well defined. Copyright © 2017 Elsevier Ltd. All rights reserved.


  20. Novel methods to estimate the enantiomeric ratio and the kinetic parameters of enantiospecific enzymatic reactions.

    PubMed

    Machado, G D.C.; Paiva, L M.C.; Pinto, G F.; Oestreicher, E G.

    2001-03-08

    The enantiomeric ratio (E) of an enzyme acting as a specific catalyst in the resolution of enantiomers is an important parameter in the quantitative description of chiral resolution processes. In the present work, two novel methods, hereafter called Methods I and II, for estimating E and the kinetic parameters Km and Vm of the enantiomers were developed. These methods are based upon initial rate (v) measurements using different concentrations of enantiomeric mixtures (C) with several molar fractions of the substrate (x). Both methods were tested using simulated "experimental data" and actual experimental data. Method I is easier to use than Method II but requires that one of the enantiomers be available in pure form. Method II, besides not requiring the enantiomers in pure form, showed better results, as indicated by the magnitude of the standard errors of the estimates. The theoretical predictions were experimentally confirmed using the oxidation of 2-butanol and 2-pentanol catalyzed by Thermoanaerobium brockii alcohol dehydrogenase as reaction models. The parameters E, Km and Vm were estimated by Methods I and II with precision and were not significantly different from those obtained experimentally by direct estimation of E from the kinetic parameters of each enantiomer available in pure form.
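    A sketch of the underlying estimation problem, assuming the classical competing-substrate Michaelis-Menten rate law with E = (Vm_R/Km_R)/(Vm_S/Km_S); this is a generic illustration with synthetic noise-free data, not the authors' Method I or II.

```python
import numpy as np
from scipy.optimize import curve_fit

# Competing-substrate Michaelis-Menten rate for an enantiomeric mixture of
# total concentration c and molar fraction x of the R enantiomer.
def rate(X, vm_r, km_r, vm_s, km_s):
    c, x = X
    return ((vm_r * x * c / km_r + vm_s * (1 - x) * c / km_s)
            / (1 + x * c / km_r + (1 - x) * c / km_s))

true = (10.0, 0.5, 2.0, 1.5)                  # Vm_R, Km_R, Vm_S, Km_S
c = np.tile([0.2, 0.5, 1.0, 2.0, 5.0], 5)     # total concentrations
x = np.repeat([0.0, 0.25, 0.5, 0.75, 1.0], 5) # molar fractions
v = rate((c, x), *true)                       # noise-free synthetic rates

popt, _ = curve_fit(rate, (c, x), v, p0=(5, 1, 5, 1))
vm_r, km_r, vm_s, km_s = popt
E = (vm_r / km_r) / (vm_s / km_s)
print(round(E, 2))  # true E = (10/0.5) / (2/1.5) = 15.0
```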

  1. Improved Methods for Capture, Extraction, and Quantitative Assay of Environmental DNA from Asian Bigheaded Carp (Hypophthalmichthys spp.)

    PubMed Central

    Turner, Cameron R.; Miller, Derryl J.; Coyne, Kathryn J.; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species. PMID:25474207
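    A detection probability such as the 94.8% vs. 4.2% comparison above is typically reported with a confidence interval. A minimal sketch using the Wilson score interval follows; the sample sizes are hypothetical, chosen only to reproduce the reported percentages.

```python
import math

# Wilson score confidence interval for a binomial detection probability,
# k detections out of n samples, at the normal quantile z (95% default).
def wilson(k, n, z=1.96):
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

qpcr_lo, qpcr_hi = wilson(91, 96)   # ~94.8% detections (hypothetical n)
end_lo, end_hi = wilson(4, 96)      # ~4.2% detections (hypothetical n)
print(qpcr_lo > end_hi)             # intervals do not overlap -> True
```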

  3. Aspects of numerical and representational methods related to the finite-difference simulation of advective and dispersive transport of freshwater in a thin brackish aquifer

    USGS Publications Warehouse

    Merritt, M.L.

    1993-01-01

    The simulation of the transport of injected freshwater in a thin brackish aquifer, overlain and underlain by confining layers containing more saline water, is shown to be influenced by the choice of the finite-difference approximation method, the algorithm for representing vertical advective and dispersive fluxes, and the values assigned to parametric coefficients that specify the degree of vertical dispersion and molecular diffusion that occurs. Computed potable water recovery efficiencies will differ depending upon the choice of algorithm and approximation method, as will dispersion coefficients estimated based on the calibration of simulations to match measured data. A comparison of centered and backward finite-difference approximation methods shows that substantially different transition zones between injected and native waters are depicted by the different methods, and computed recovery efficiencies vary greatly. Standard and experimental algorithms and a variety of values for molecular diffusivity, transverse dispersivity, and vertical scaling factor were compared in simulations of freshwater storage in a thin brackish aquifer. Computed recovery efficiencies vary considerably, and appreciable differences are observed in the distribution of injected freshwater in the various cases tested. The results demonstrate both a qualitatively different description of transport using the experimental algorithms and the interrelated influences of molecular diffusion and transverse dispersion on simulated recovery efficiency. When simulating natural aquifer flow in cross-section, flushing of the aquifer occurred for all tested coefficient choices using both standard and experimental algorithms. © 1993.
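
    The effect described, different transition zones from backward versus centered spatial approximations, can be illustrated with a toy 1D advection scheme (an illustrative sketch, not the report's aquifer model): the backward (upwind) front smears while the centered front develops oscillations near the interface.

```python
import numpy as np

def advect(u0, a, dx, dt, steps, scheme="upwind"):
    """Explicit 1D advection of a sharp front. The 'upwind' (backward)
    difference is diffusive and smears the front; the 'centered' difference
    is dispersive and rings near the interface -- a toy analogue of how the
    finite-difference choice reshapes a simulated transition zone."""
    u = u0.copy()
    c = a * dt / dx  # Courant number
    for _ in range(steps):
        if scheme == "upwind":
            u[1:] = u[1:] - c * (u[1:] - u[:-1])
        else:  # centered in space, forward in time
            u[1:-1] = u[1:-1] - 0.5 * c * (u[2:] - u[:-2])
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.2, 1.0, 0.0)  # sharp injected-freshwater front
up = advect(u0, a=1.0, dx=0.01, dt=0.005, steps=60, scheme="upwind")
ce = advect(u0, a=1.0, dx=0.01, dt=0.005, steps=60, scheme="centered")
print(up.min() >= 0.0, ce.min() < 0.0)  # monotone smeared front vs. oscillations
```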

  4. Improving Pharmacy Student Communication Outcomes Using Standardized Patients.

    PubMed

    Gillette, Chris; Rudolph, Michael; Rockich-Winston, Nicole; Stanton, Robert; Anderson, H Glenn

    2017-08-01

    Objective. To examine whether standardized patient encounters led to greater improvement on a student pharmacist-patient communication assessment than traditional active-learning activities within a classroom setting. Methods. A quasi-experimental study was conducted with second-year pharmacy students in a drug information and communication skills course. Student patient communication skills were assessed using a high-stakes communication assessment. Results. Data from two hundred and twenty students were included. Students were significantly more likely to score higher on the communication assessment when they had higher undergraduate GPAs, were female, and were taught using standardized patients. Similarly, students were significantly more likely to pass the assessment on the first attempt when they were female and when they were taught using standardized patients. Conclusion. Incorporating standardized patients within a communication course resulted in higher scores and higher first-time pass rates on a communication assessment than other active-learning methods.

  5. Thermodynamics of enzyme-catalyzed esterifications: II. Levulinic acid esterification with short-chain alcohols.

    PubMed

    Altuntepe, Emrah; Emel'yanenko, Vladimir N; Forster-Rotgers, Maximilian; Sadowski, Gabriele; Verevkin, Sergey P; Held, Christoph

    2017-10-01

    Levulinic acid was esterified with methanol, ethanol, and 1-butanol with the final goal of predicting the maximum yield of these equilibrium-limited reactions as a function of medium composition. In a first step, standard reaction data (the standard Gibbs energy of reaction, ΔRg⁰) were determined from experimental formation properties. Unexpectedly, these ΔRg⁰ values strongly deviated from data obtained with the classical group contribution methods that are typically used when experimental standard data are not available. In a second step, reaction equilibrium concentrations obtained from esterification catalyzed by Novozym 435 at 323.15 K were measured, and the corresponding activity coefficients of the reacting agents were predicted with perturbed-chain statistical associating fluid theory (PC-SAFT). The so-obtained thermodynamic activities were used to determine ΔRg⁰ at 323.15 K. These results could be used to cross-validate ΔRg⁰ from experimental formation data. In a third step, reaction-equilibrium experiments showed that the equilibrium position of the reactions under consideration depends strongly on the concentration of water and on the ratio of levulinic acid to alcohol in the initial reaction mixtures. The maximum yield of the esters was calculated using ΔRg⁰ data from this work and activity coefficients of the reacting agents predicted with PC-SAFT for varying feed compositions of the reaction mixtures. The use of the new ΔRg⁰ data combined with PC-SAFT gave good agreement with the measured yields, whereas predictions based on ΔRg⁰ values obtained with group contribution methods showed large deviations from the experimental yields.
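
    The link between the standard Gibbs energy of reaction and the maximum yield can be sketched for the ideal case (all activity coefficients set to 1 rather than predicted with PC-SAFT; the ΔRg⁰ value below is hypothetical):

```python
import math

R = 8.314   # gas constant, J/(mol*K)
T = 323.15  # reaction temperature, K (as in the experiments above)

def equilibrium_constant(dRg0):
    """Thermodynamic equilibrium constant K = exp(-dRg0 / RT)."""
    return math.exp(-dRg0 / (R * T))

def ester_yield(K, n_acid=1.0, n_alcohol=1.0, n_water=0.0):
    """Equilibrium extent of acid + alcohol <-> ester + water assuming an
    ideal mixture (activity coefficients of 1), solved by bisection; the
    total number of moles is constant, so mole fractions cancel."""
    def f(x):
        return (x * (n_water + x)) / ((n_acid - x) * (n_alcohol - x)) - K
    lo, hi = 1e-9, min(n_acid, n_alcohol) - 1e-9
    mid = 0.5 * (lo + hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return mid / n_acid  # fractional ester yield relative to the acid

K = equilibrium_constant(-5000.0)  # hypothetical dRg0 = -5 kJ/mol
# Initial water depresses the equilibrium yield, as observed experimentally
print(round(ester_yield(K), 3), round(ester_yield(K, n_water=1.0), 3))
```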

  6. Application of Box-Behnken experimental design to optimize the extraction of insecticidal Cry1Ac from soil.

    PubMed

    Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing

    2013-02-20

    A validated method for analyzing Cry proteins is a prerequisite for studying the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study optimized the extraction method for analyzing Cry1Ac protein in soil using response surface methodology with a three-level, three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model, with a coefficient of determination of 0.96. The method was sensitive and precise, with a method detection limit of 0.8 ng/g dry weight and a relative standard deviation of 7.3%. Finally, the established method was applied to analyze Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in soils where transgenic crops had been planted for 8 and 12 years.
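
    For reference, the coded three-level, three-factor Box-Behnken design used in such optimizations can be generated as follows (a generic BBD construction, not the authors' specific factor assignments):

```python
import itertools

def box_behnken(k=3, center_points=3):
    """Coded-level (-1, 0, +1) Box-Behnken design for k factors: all +/-1
    combinations over each pair of factors with the remaining factors held
    at the center level, plus replicated center points."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

design = box_behnken()
print(len(design))  # 12 edge runs + 3 center replicates = 15
```

The 15 coded runs are then mapped onto the physical factor ranges (e.g. temperature, shaking speed, time) and the responses fitted with a second-order polynomial.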

  7. Estimating structure quality trends in the Protein Data Bank by equivalent resolution.

    PubMed

    Bagaria, Anurag; Jaravine, Victor; Güntert, Peter

    2013-10-01

    The quality of protein structures obtained by different experimental and ab-initio calculation methods varies considerably. The methods have been evolving over time by improving both experimental designs and computational techniques, and since the primary aim of these developments is the procurement of reliable and high-quality data, better techniques resulted on average in an evolution toward higher quality structures in the Protein Data Bank (PDB). Each method leaves a specific quantitative and qualitative "trace" in the PDB entry. Certain information relevant to one method (e.g. dynamics for NMR) may be lacking for another method. Furthermore, some standard measures of quality for one method cannot be calculated for other experimental methods, e.g. crystal resolution or NMR bundle RMSD. Consequently, structures are classified in the PDB by the method used. Here we introduce a method to estimate a measure of equivalent X-ray resolution (e-resolution), expressed in units of Å, to assess the quality of any type of monomeric, single-chain protein structure, irrespective of the experimental structure determination method. We showed and compared the trends in the quality of structures in the Protein Data Bank over the last two decades for five different experimental techniques, excluding theoretical structure predictions. We observed that as new methods are introduced, they undergo a rapid method development evolution: within several years the e-resolution score becomes similar for structures obtained from the five methods and they improve from initially poor performance to acceptable quality, comparable with previously established methods, the performance of which is essentially stable. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Qualitative Experimentation, Local Generalizability, and Other Oxymoronic Opportunities for Educated Researchers

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    2011-01-01

    As lines between research paradigms continue to blur with the ever-increasing popularity of mixed methods research, there are useful, and occasionally oxymoronic, opportunities for educational researchers to juxtapose tools from opposing methods. The gold standard is just not possible in so much of what we do with small-scale research, nor is it…

  9. 2D-DIGE in Proteomics.

    PubMed

    Pasquali, Matias; Serchi, Tommaso; Planchon, Sebastien; Renaut, Jenny

    2017-01-01

    The two-dimensional difference gel electrophoresis (2D-DIGE) method is a valuable approach for proteomics. The method, using cyanine fluorescent dyes, allows the co-migration of multiple protein samples in the same gel and their simultaneous detection, thus reducing experimental and analytical time. Compared to traditional post-staining 2D-PAGE protocols (e.g., colloidal Coomassie or silver nitrate), 2D-DIGE provides faster and more reliable gel matching, limits the impact of gel-to-gel variation, and also offers a good dynamic range for quantitative comparisons. Through the use of internal standards, it is possible to normalize for experimental variations in spot intensities and gel patterns. Here we describe the experimental steps we follow in our routine 2D-DIGE procedure, which we then apply to multiple biological questions.

  10. Melanins and melanogenesis: methods, standards, protocols.

    PubMed

    d'Ischia, Marco; Wakamatsu, Kazumasa; Napolitano, Alessandra; Briganti, Stefania; Garcia-Borron, José-Carlos; Kovacs, Daniela; Meredith, Paul; Pezzella, Alessandro; Picardo, Mauro; Sarna, Tadeusz; Simon, John D; Ito, Shosuke

    2013-09-01

    Despite considerable advances in the past decade, melanin research still suffers from the lack of universally accepted and shared nomenclature, methodologies, and structural models. This paper stems from the joint efforts of chemists, biochemists, physicists, biologists, and physicians with recognized and consolidated expertise in the field of melanins and melanogenesis, who critically reviewed and experimentally revisited methods, standards, and protocols to provide for the first time a consensus set of recommended procedures to be adopted and shared by researchers involved in pigment cell research. The aim of the paper was to define an unprecedented frame of reference built on cutting-edge knowledge and state-of-the-art methodology, to enable reliable comparison of results among laboratories and new progress in the field based on standardized methods and shared information. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Simple method for the generation of multiple homogeneous field volumes inside the bore of superconducting magnets

    PubMed Central

    Chou, Ching-Yu; Ferrage, Fabien; Aubert, Guy; Sakellariou, Dimitris

    2015-01-01

    Standard Magnetic Resonance magnets produce a single homogeneous field volume, where the analysis is performed. Nonetheless, several modern applications could benefit from the generation of multiple homogeneous field volumes along the axis and inside the bore of the magnet. In this communication, we propose a straightforward method using a combination of ring structures of permanent magnets in order to cancel the gradient of the stray field in a series of distinct volumes. These concepts were demonstrated numerically on an experimentally measured magnetic field profile. We discuss advantages and limitations of our method and present the key steps required for an experimental validation. PMID:26182891

  12. Virtual reality and the traditional method for phlebotomy training among college of nursing students in Kuwait: implications for nursing education and practice.

    PubMed

    Vidal, Victoria L; Ohaeri, Beatrice M; John, Pamela; Helen, Delles

    2013-01-01

    This quasi-experimental study, with a control group and experimental group, compares the effectiveness of virtual reality simulators on developing phlebotomy skills of nursing students with the effectiveness of traditional methods of teaching. Performance of actual phlebotomy on a live client was assessed after training, using a standardized form. Findings showed that students who were exposed to the virtual reality simulator performed better in the following performance metrics: pain factor, hematoma formation, and number of reinsertions. This study confirms that the use of the virtual reality-based system to supplement the traditional method may be the optimal program for training.

  13. New method for stock-tank oil compositional analysis.

    PubMed

    McAndrews, Kristine; Nighswander, John; Kotzakoulakis, Konstantin; Ross, Paul; Schroeder, Helmut

    2009-01-01

    A new method for accurately determining stock-tank oil composition to normal pentatriacontane using gas chromatography is developed and validated. The new method addresses the potential errors associated with the traditional equipment and technique employed for extended hydrocarbon gas chromatography outside a controlled laboratory environment, such as on an offshore oil platform. In particular, the experimental measurement of stock-tank oil molecular weight with the freezing point depression technique and the use of an internal standard to find the unrecovered sample fraction are replaced with correlations for estimating these properties. The use of correlations reduces the number of necessary experimental steps in completing the required sample preparation and analysis, resulting in reduced uncertainty in the analysis.

  14. Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Data for Dissociation Energies of Bonds to 3d Transition Metals?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Xuefei; Zhang, Wenjing; Tang, Mingsheng

    2015-05-12

    Coupled-cluster (CC) methods have been extensively used as the high-level approach in quantum electronic structure theory to predict various properties of molecules when experimental results are unavailable. It is often assumed that CC methods, if they include at least up to connected-triple-excitation quasiperturbative corrections to a full treatment of single and double excitations (in particular, CCSD(T)), and a very large basis set, are more accurate than Kohn–Sham (KS) density functional theory (DFT). In the present work, we tested and compared the performance of standard CC and KS methods on bond energy calculations of 20 3d transition metal-containing diatomic molecules against the most reliable experimental data available, as collected in a database called 3dMLBE20. It is found that, although CCSD(T) and higher-level CC methods have mean unsigned deviations from experiment that are smaller than those of most exchange-correlation functionals for metal–ligand bond energies of transition metals, the improvement is less than one standard deviation of the mean unsigned deviation. Furthermore, on average, almost half of the 42 exchange-correlation functionals that we tested are closer to experiment than CCSD(T) with the same extended basis set for the same molecule. The results show that, when both relativistic and core–valence correlation effects are considered, even the very high-level (expensive) CC method with single, double, triple, and perturbative quadruple cluster operators, namely CCSDT(2)Q, averaged over 20 bond energies, gives a mean unsigned deviation MUD(20) = 4.7 kcal/mol when one correlates only the valence, 3p, and 3s electrons of transition metals and only the valence electrons of ligands, or 4.6 kcal/mol when one correlates all core electrons except the 1s shells of transition metals, S, and Cl; that is similar to some good xc functionals (e.g., B97-1 (MUD(20) = 4.5 kcal/mol) and PW6B95 (MUD(20) = 4.9 kcal/mol)) when the same basis set is used. We found that, for both coupled cluster and KS calculations, the T1 diagnostics correlate with the errors better than either the M diagnostics or the B1 DFT-based diagnostics. The potential use of practical standard CC methods as a benchmark theory is further confounded by the finding that CC and DFT methods usually have different signs of the error. We conclude that the available experimental data do not provide a justification for using conventional single-reference CC theory calculations to validate or test xc functionals for systems involving 3d transition metals.
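
    The MUD(20) scores quoted above are mean unsigned deviations; a sketch with hypothetical bond energies, together with a sign-agreement statistic relevant to the observation that CC and DFT errors often have opposite signs:

```python
def mean_unsigned_deviation(calc, expt):
    """MUD over a set of bond energies (kcal/mol): the average absolute
    deviation of calculated from experimental values."""
    return sum(abs(c - e) for c, e in zip(calc, expt)) / len(calc)

def sign_agreement(errors_a, errors_b):
    """Fraction of molecules for which two methods err in the same
    direction relative to experiment."""
    same = sum(1 for a, b in zip(errors_a, errors_b) if a * b > 0)
    return same / len(errors_a)

calc = [105.2, 80.1, 63.9]  # hypothetical calculated bond energies
expt = [101.0, 84.0, 60.0]  # hypothetical experimental references
print(round(mean_unsigned_deviation(calc, expt), 2))  # 4.0
```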

  15. Experimental formation enthalpies for intermetallic phases and other inorganic compounds

    PubMed Central

    Kim, George; Meschel, S. V.; Nash, Philip; Chen, Wei

    2017-01-01

    The standard enthalpy of formation of a compound is the energy associated with the reaction to form the compound from its component elements. The standard enthalpy of formation is a fundamental thermodynamic property that determines its phase stability, which can be coupled with other thermodynamic data to calculate phase diagrams. Calorimetry provides the only direct method by which the standard enthalpy of formation is experimentally measured. However, the measurement is often a time and energy intensive process. We present a dataset of enthalpies of formation measured by high-temperature calorimetry. The phases measured in this dataset include intermetallic compounds with transition metal and rare-earth elements, metal borides, metal carbides, and metallic silicides. These measurements were collected from over 50 years of calorimetric experiments. The dataset contains 1,276 entries on experimental enthalpy of formation values and structural information. Most of the entries are for binary compounds but ternary and quaternary compounds are being added as they become available. The dataset also contains predictions of enthalpy of formation from first-principles calculations for comparison. PMID:29064466

  16. Mode-Stirred Method Implementation for HIRF Susceptibility Testing and Results Comparison with Anechoic Method

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Ely, Jay J.; Koppen, Sandra V.

    2001-01-01

    This paper describes the implementation of the mode-stirred method for susceptibility testing according to the current DO-160D standard. Test results on an Engine Data Processor using the implemented procedure, and comparisons with standard anechoic test results, are presented. The comparison shows experimentally that the susceptibility thresholds found with the mode-stirred method are consistently higher than the anechoic ones. This is consistent with a recent statistical analysis by NIST finding that the current calibration procedure overstates field strength by a fixed amount. Once the test results are adjusted for this value, the comparisons with the anechoic results are excellent. The results also show that the test method has excellent chamber-to-chamber repeatability. Several areas for improvement to the current procedure are also identified and implemented.

  17. Topics in atomic hydrogen standard research and applications

    NASA Technical Reports Server (NTRS)

    Peters, H. E.

    1971-01-01

    Hydrogen maser based frequency and time standards have been in continuous use at NASA tracking stations since February 1970, while laboratory work at Goddard has continued in the further development and improvement of hydrogen masers. Concurrently, experimental work has been in progress with a new frequency standard based upon the hydrogen atom using the molecular beam magnetic resonance method. Much of the hydrogen maser technology is directly applicable to the new hydrogen beam standard, and calculations based upon realistic data indicate that the accuracy potential of the hydrogen atomic beam exceeds that of either the cesium beam tube or the hydrogen maser, possibly by several orders of magnitude. In addition, with successful development, the hydrogen beam standard will have several other performance advantages over other devices, particularly exceptional stability and long continuous operating life. Experimental work with a new laboratory hydrogen beam device has recently resulted in the first resonance transition curves, measurements of relative state populations, beam intensities, etc. The most important aspects of both the hydrogen maser and the hydrogen beam work are covered.

  18. Application of an improved minimum entropy deconvolution method for railway rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Yao; Zhou, Ning; Zhang, Weihua; Wang, Zhiwei

    2018-07-01

    Minimum entropy deconvolution is a widely used tool in machinery fault diagnosis because it enhances the impulsive component of the signal. The filter coefficients that largely determine the performance of minimum entropy deconvolution are conventionally calculated by an iterative procedure. This paper proposes an improved deconvolution method for the fault detection of rolling element bearings. The proposed method solves for the filter coefficients with the standard particle swarm optimization algorithm, assisted by a generalized spherical coordinate transformation. When optimizing the filter's ability to enhance the impulses that indicate faulty rolling element bearings, the proposed method outperformed the classical minimum entropy deconvolution method. The proposed method was validated on simulated and experimental signals from railway bearings. In both the simulation and experimental studies, the proposed method delivered better deconvolution performance than the classical minimum entropy deconvolution method, especially at low signal-to-noise ratios.
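
    The quantity such deconvolution filters maximize is the kurtosis of the filtered signal, which grows as fault impulses are enhanced; a sketch of the objective that a particle-swarm search over the filter coefficients would optimize (synthetic signal and a trivial single-tap filter, for illustration only):

```python
import numpy as np

def kurtosis_objective(signal, filt):
    """Kurtosis of the filtered signal: the quantity minimum entropy
    deconvolution maximizes, since enhancing impulses raises kurtosis.
    A search over `filt` (e.g. by particle swarm optimization, as in the
    paper) would maximize this value; here we only evaluate it."""
    y = np.convolve(signal, filt, mode="valid")
    y = y - y.mean()
    return float(np.mean(y**4) / np.mean(y**2) ** 2)

rng = np.random.default_rng(0)
noise = rng.normal(size=2000)
impulses = np.zeros(2000)
impulses[::200] = 8.0  # periodic fault-like impacts
x = noise + impulses
# Gaussian noise has kurtosis ~3; the impulsive signal scores much higher
print(kurtosis_objective(noise, [1.0]), kurtosis_objective(x, [1.0]))
```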

  19. AGARD standard aeroelastic configurations for dynamic response. 1: Wing 445.6

    NASA Technical Reports Server (NTRS)

    Yates, E. Carson, Jr.

    1988-01-01

    This report contains experimental flutter data for the AGARD 3-D swept tapered standard configuration Wing 445.6, along with related descriptive data of the model properties required for comparative flutter calculations. As part of a cooperative AGARD-SMP program, guided by the Sub-Committee on Aeroelasticity, this standard configuration may serve as a common basis for comparison of calculated and measured aeroelastic behavior. These comparisons will promote a better understanding of the assumptions, approximations and limitations underlying the various aerodynamic methods applied, thus pointing the way to further improvements.

  20. Quantitative determination and validation of octreotide acetate using ¹H-NMR spectroscopy with internal standard method.

    PubMed

    Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang

    2018-01-01

    Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated ¹H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable-proton peaks (proton exchange) in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including relaxation delay time, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, the method provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicated that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool that requires no specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
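
    An internal-standard qNMR assay rests on the standard relation between integrated signal areas, proton counts, molar masses, and weighed masses; a sketch with hypothetical input values (not the paper's measured data):

```python
def qnmr_purity(I_a, I_s, N_a, N_s, M_a, M_s, m_a, m_s, P_s):
    """Internal-standard qNMR assay: analyte purity from the ratio of
    integrated areas (I), protons per quantitative signal (N), molar
    masses (M), weighed masses (m), and internal-standard purity (P_s)."""
    return (I_a / I_s) * (N_s / N_a) * (M_a / M_s) * (m_s / m_a) * P_s

# Hypothetical inputs chosen for a round answer, not measured values
purity = qnmr_purity(I_a=2.0, I_s=1.0, N_a=2, N_s=2,
                     M_a=500.0, M_s=250.0, m_a=20.0, m_s=5.0, P_s=0.998)
print(round(purity, 3))  # 0.998
```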

  1. Mitigation methods for temporary concrete traffic barrier effects on flood water flows.

    DOT National Transportation Integrated Search

    2011-07-01

    A combined experimental and analytical approach was put together to evaluate the hydraulic performance and : stability of TxDOT standard and modified temporary concrete traffic barriers (TCTBs) in extreme flood. : Rating curves are developed for diff...

  2. Singer product apertures-A coded aperture system with a fast decoding algorithm

    NASA Astrophysics Data System (ADS)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.

  3. The same analysis approach: Practical protection against the pitfalls of novel neuroimaging analysis methods.

    PubMed

    Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan

    2017-12-27

    Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, e.g., by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematically testing experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for the experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
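
    One element of this strategy, running the decoding analysis on simulated null data to confirm chance-level performance, can be sketched with a toy nearest-class-mean decoder (an illustrative stand-in, not the authors' pipeline):

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a toy nearest-class-mean decoder."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i
        m0 = X[mask & (y == 0)].mean(axis=0)
        m1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - m0) < np.linalg.norm(X[i] - m1) else 1
        correct += int(pred == y[i])
    return correct / n

# Simulated null data: features carry no class information, so decoding
# accuracy should sit near chance (0.5); a systematic departure would flag
# a pitfall in the design or the cross-validation scheme.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 10))
y = np.array([0, 1] * 20)
print(loo_accuracy(X, y))
```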

  4. Temporal downscaling of crop coefficients for winter wheat in the North China Plain: A case study at the Gucheng ecological-meteorological experimental station

    USDA-ARS?s Scientific Manuscript database

    The crop coefficient (Kc) method is widely used for operational estimation of actual evapotranspiration (ETa) and crop water requirements. The standard method for obtaining Kc is via a lookup table from FAO-56 (Food and Agriculture Organization of the United Nations Irrigation and Drainage Paper No....

  5. Fixed-pattern noise correction method based on improved moment matching for a TDI CMOS image sensor.

    PubMed

    Xu, Jiangtao; Nie, Huafeng; Nie, Kaiming; Jin, Weimin

    2017-09-01

    In this paper, an improved moment matching method based on a spatial correlation filter (SCF) and a bilateral filter (BF) is proposed to correct the fixed-pattern noise (FPN) of a time-delay-integration CMOS image sensor (TDI-CIS). First, the values of row FPN (RFPN) and column FPN (CFPN) are estimated and added to the original image through the SCF and BF, respectively. The filtered image is then processed by an improved moment matching method with a moving window. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in an image captured under uniform illumination, the standard deviation of the row mean vector (SDRMV) decreases from 5.6761 LSB to 0.1948 LSB, while the standard deviation of the column mean vector (SDCMV) decreases from 15.2005 LSB to 13.1949 LSB. In addition, for different images captured by different TDI-CISs, the SDRMV and SDCMV decrease on average by 5.4922 LSB and 2.0357 LSB, respectively. Comparative experimental results indicate that the proposed method can effectively correct the FPN of different TDI-CISs while maintaining image details, without any auxiliary equipment.
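
    The baseline moment matching step can be sketched as follows (column-FPN case only; the paper's SCF/BF pre-filtering and moving window are omitted, and the simulated frame is hypothetical):

```python
import numpy as np

def moment_match_columns(img):
    """Baseline moment matching for column FPN: rescale each column so its
    mean and standard deviation equal the global image statistics. The
    paper refines this baseline with SCF/BF pre-filtering and a moving
    window; those refinements are omitted in this sketch."""
    g_mean, g_std = img.mean(), img.std()
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0)
    col_std = np.where(col_std == 0, 1.0, col_std)  # guard flat columns
    return (img - col_mean) / col_std * g_std + g_mean

# Uniform-illumination frame with additive per-column offsets (simulated CFPN)
rng = np.random.default_rng(0)
frame = rng.normal(100.0, 2.0, size=(128, 64)) + np.linspace(-5.0, 5.0, 64)
corrected = moment_match_columns(frame)
print(corrected.mean(axis=0).std() < frame.mean(axis=0).std())
```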

  6. Electrochemistry of moexipril: experimental and computational approach and voltammetric determination.

    PubMed

    Taşdemir, Hüdai I; Kiliç, E

    2014-09-01

    The electrochemistry of moexipril (MOE) was studied by electrochemical methods, with theoretical calculations performed at B3LYP/6-31+G(d)//AM1. Cyclic voltammetric studies were based on a reversible, adsorption-controlled reduction peak at -1.35 V on a hanging mercury drop electrode (HMDE). Concurrently, an irreversible, diffusion-controlled oxidation peak at 1.15 V on a glassy carbon electrode (GCE) was also employed. Potential values are given versus Ag/AgCl (3.0 M KCl), and measurements were performed in Britton-Robinson buffer of pH 5.5. Tentative electrode mechanisms were proposed from the experimental results and ab initio calculations. Square-wave adsorptive stripping voltammetric methods were developed and validated for the quantification of MOE in pharmaceutical preparations. The linear working range was established as 0.03-1.35 μM for the HMDE and 0.2-20.0 μM for the GCE. The limit of quantification (LOQ) was calculated to be 0.032 and 0.47 μM for the HMDE and GCE, respectively. The methods were successfully applied to assay the drug in tablets by calibration and standard addition methods, with good recoveries between 97.1% and 106.2% and relative standard deviations less than 10%.
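
    Standard-addition quantification follows the usual extrapolation: fit signal versus added analyte concentration and take intercept/slope as the unknown concentration (the spike levels and sensitivity below are hypothetical):

```python
import numpy as np

def standard_addition(C_added, signal):
    """Standard-addition quantification: linear fit of signal vs. added
    concentration; the unknown concentration is intercept / slope
    (extrapolation of the line back to zero signal)."""
    slope, intercept = np.polyfit(C_added, signal, 1)
    return intercept / slope

# Hypothetical spikes (uM) and peak currents (arbitrary units)
C_add = np.array([0.0, 0.5, 1.0, 1.5])
sig = 2.0 * (0.8 + C_add)  # true unknown = 0.8 uM, sensitivity k = 2
print(round(standard_addition(C_add, sig), 3))  # 0.8
```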

  7. Comparative analysis of international standards for the fatigue testing of posterior spinal fixation systems.

    PubMed

    Villa, Tomaso; La Barbera, Luigi; Galbusera, Fabio

    2014-04-01

    Preclinical evaluation of the long-term reliability of devices for lumbar fixation is mandatory before they are placed on the market. The experimental setups are described in two different standards, issued by the International Organization for Standardization (ISO) and the American Society for Testing and Materials (ASTM), but the suitability of these tests to reproduce actual in vivo loading has never been evaluated. The aims were to calculate, through finite element (FE) simulations, the stresses in the rods of a fixator subjected to the ASTM and ISO protocols, and to compare them with the stresses arising in the same fixator once virtually mounted in a physiological environment and loaded with physiological forces and moments. The study used FE simulations and validating experimental tests. FE models of the ISO and ASTM setups were created to simulate the tests prescribed by the standards and to calculate the stresses in the rods. The simulations were validated through experimental tests; the same fixator was then virtually mounted in an L2-L4 FE model of the lumbar spine, and the stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The comparison between FE simulations and experimental tests showed good agreement between the two methodologies, confirming the suitability of the FE method for evaluating stresses in the device under different loading conditions. Applying a physiological load with the ASTM standard is impossible because of the extreme severity of the ASTM configuration; in this circumstance, the addition of an anterior support is suggested. The ISO prescriptions, although the chosen setup correctly simulates the mechanical contribution of the discs, also seem to overstress the device compared with a physiological loading condition.
    Some daily activities other than walking can induce a further state of stress in the device that should be taken into account when setting up new experimental procedures. The ISO loading prescriptions appear to be more severe than the expected physiological ones. The ASTM standard should be completed by including an anterior supporting device and by declaring the value of the load to be imposed. Moreover, a further enhancement of the standards would be to simulate movements representative of daily activities other than walking. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Mutation-Based Learning to Improve Student Autonomy and Scientific Inquiry Skills in a Large Genetics Laboratory Course

    PubMed Central

    Wu, Jinlu

    2013-01-01

    Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a “mutation” method in a molecular genetics laboratory course. Students could choose to delete, add, reverse, or replace certain steps of the standard protocol to explore questions of interest to them in a given experimental scenario. They wrote experimental proposals to address their rationales and hypotheses for the “mutations”; conducted experiments in parallel, according to both standard and mutated protocols; and then compared and analyzed results to write individual lab reports. Various autonomy-supportive measures were provided in the entire experimental process. Analyses of student work and feedback suggest that students using the MBL approach 1) spend more time discussing experiments, 2) use more scientific inquiry skills, and 3) find the increased autonomy afforded by MBL more enjoyable than do students following regimented instructions in a conventional “cookbook”-style laboratory. Furthermore, the MBL approach does not incur an obvious increase in labor and financial costs, which makes it feasible for easy adaptation and implementation in a large class. PMID:24006394

  9. Failure of Standard Training Sets in the Analysis of Fast-Scan Cyclic Voltammetry Data.

    PubMed

    Johnson, Justin A; Rodeberg, Nathan T; Wightman, R Mark

    2016-03-16

    The use of principal component regression, a multivariate calibration method, in the analysis of in vivo fast-scan cyclic voltammetry data allows for separation of overlapping signal contributions, permitting evaluation of the temporal dynamics of multiple neurotransmitters simultaneously. To accomplish this, the technique relies on information about current-concentration relationships across the scan-potential window gained from analysis of training sets. The ability of the constructed models to resolve analytes depends critically on the quality of these data. Recently, the use of standard training sets obtained under conditions other than those of the experimental data collection (e.g., with different electrodes, animals, or equipment) has been reported. This study evaluates the analyte resolution capabilities of models constructed using this approach from both a theoretical and experimental viewpoint. A detailed discussion of the theory of principal component regression is provided to inform this discussion. The findings demonstrate that the use of standard training sets leads to misassignment of the current-concentration relationships across the scan-potential window. This directly results in poor analyte resolution and, consequently, inaccurate quantitation, which may lead to erroneous conclusions being drawn from experimental data. Thus, it is strongly advocated that training sets be obtained under the experimental conditions to allow for accurate data analysis.
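    The principle can be sketched numerically: a training set with known concentrations defines a principal-component subspace, and the concentrations are regressed on the PC scores. The component shapes and values below are synthetic stand-ins, not real voltammograms:

```python
import numpy as np

# Synthetic training set: rows are "voltammograms" built from two
# stand-in component shapes with known concentrations.
rng = np.random.default_rng(1)
n_pts = 100
shapes = np.vstack([np.sin(np.linspace(0, np.pi, n_pts)),   # "analyte 1"
                    np.cos(np.linspace(0, np.pi, n_pts))])  # "analyte 2"
conc = rng.uniform(0, 1, (30, 2))                # known concentrations
X = conc @ shapes + rng.normal(0, 0.01, (30, n_pts))

# Principal component regression: project onto leading PCs, then regress
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2                                            # retained components
scores = Xc @ Vt[:k].T
coef, *_ = np.linalg.lstsq(scores, conc - conc.mean(axis=0), rcond=None)

# Predict an unseen scan generated by the SAME current-concentration
# relationship as the training set (the paper's point: a "standard"
# training set recorded elsewhere would violate this assumption)
true_c = np.array([0.3, 0.7])
scan = true_c @ shapes
pred = (scan - X.mean(axis=0)) @ Vt[:k].T @ coef + conc.mean(axis=0)
print(np.round(pred, 2))
```

    The prediction is accurate only because the test scan shares the training set's current-concentration relationship; swapping in a training set from a different electrode changes `shapes` and the projection misassigns the components, which is the failure mode the paper documents.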

  10. Antimicrobial Efficiency of Iodinated Individual Protection Filters

    DTIC Science & Technology

    2004-11-01

    additional 2 logs of attenuation vs. a standard COTS canister when challenged with MS2 coliphage. Joseph D. Wander. ...versus a standard COTS canister when challenged with MS2 coliphage. INTRODUCTION Biological weapons are not new, and have been used as warfare... canisters and the iodinated clip-on prototypes were challenged with aerosolized MS2 coliphage. EXPERIMENTAL METHODS Escherichia coli (ATCC 15597) was

  11. Biodegradability standards for carrier bags and plastic films in aquatic environments: a critical review.

    PubMed

    Harrison, Jesse P; Boardman, Carl; O'Callaghan, Kenneth; Delort, Anne-Marie; Song, Jim

    2018-05-01

    Plastic litter is encountered in aquatic ecosystems across the globe, including polar environments and the deep sea. To mitigate the adverse societal and ecological impacts of this waste, there has been debate on whether 'biodegradable' materials should be granted exemptions from plastic bag bans and levies. However, great care must be exercised when attempting to define this term, due to the broad and complex range of physical and chemical conditions encountered within natural ecosystems. Here, we review existing international industry standards and regional test methods for evaluating the biodegradability of plastics within aquatic environments (wastewater, unmanaged freshwater and marine habitats). We argue that current standards and test methods are insufficient in their ability to realistically predict the biodegradability of carrier bags in these environments, due to several shortcomings in experimental procedures and a paucity of information in the scientific literature. Moreover, existing biodegradability standards and test methods for aquatic environments do not involve toxicity testing or account for the potentially adverse ecological impacts of carrier bags, plastic additives, polymer degradation products or small (microscopic) plastic particles that can arise via fragmentation. Successfully addressing these knowledge gaps is a key requirement for developing new biodegradability standard(s) for lightweight carrier bags.

  12. An accurate estimation method of kinematic viscosity for standard viscosity liquids

    NASA Astrophysics Data System (ADS)

    Kurano, Y.; Kobayashi, H.; Yoshida, K.; Imai, H.

    1992-07-01

    Deming's method of least squares is introduced to make an accurate kinematic viscosity estimation for a series of 13 standard-viscosity liquids at any desired temperature. The empirical ASTM kinematic viscosity-temperature equation is represented in the form loglog(v + c) = a - b log T, where v (in mm² s⁻¹) is the kinematic viscosity at temperature T (in K), a and b are constants for a given liquid, and c has a variable value. In the present application, however, c is assumed to have a constant value for each standard-viscosity liquid, as do a and b in the ASTM equation. This assumption has been verified experimentally for all standard-viscosity liquids. The kinematic viscosities of the 13 standard-viscosity liquids have been measured with high accuracy in the temperature range of 20-40°C using a series of NRLM capillary master viscometers with an automatic flow-time detection system. The deviations between measured and estimated kinematic viscosities were less than ±0.04% for the 10 standard-viscosity liquids JS2.5 to JS2000 and ±0.11% for the 3 standard-viscosity liquids JS15H to JS200H, respectively. The above investigation revealed that the uncertainty of the present estimation method is less than one-third of that of the usual ASTM method.
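    With c held constant, the ASTM form loglog(v + c) = a - b log T is linear in a and b, so the two constants can be recovered by an ordinary least-squares fit (Deming's method, which also accounts for errors in the temperature variable, is more elaborate). A sketch with synthetic, hypothetical constants rather than real JS-series values:

```python
import numpy as np

# Hypothetical data generated from the ASTM form with a=9.0, b=3.5
# and fixed c=0.7 (not real standard-liquid values).
T = np.array([293.15, 298.15, 303.15, 308.15, 313.15])  # K
a_true, b_true, c = 9.0, 3.5, 0.7
v = 10 ** (10 ** (a_true - b_true * np.log10(T))) - c   # mm^2/s

# loglog(v + c) = a - b*log T is linear in (a, b) once c is fixed
y = np.log10(np.log10(v + c))
slope, a_fit = np.polyfit(np.log10(T), y, 1)
b_fit = -slope
print(round(a_fit, 3), round(b_fit, 3))  # → 9.0 3.5
```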

  13. Parameter recovery, bias and standard errors in the linear ballistic accumulator model.

    PubMed

    Visser, Ingmar; Poessé, Rens

    2017-05-01

    The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, and it is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in the parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
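    The Hessian-based standard errors referred to above come from inverting the Hessian of the negative log-likelihood at the maximum-likelihood estimate. A minimal sketch using a simple normal model as a stand-in for the (far more involved) LBA likelihood:

```python
import numpy as np

def neg_loglik(params, data):
    """Negative log-likelihood of a normal model (constant dropped);
    a stand-in for the more involved LBA likelihood."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + len(data) * np.log(sigma)

def numerical_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

rng = np.random.default_rng(2)
data = rng.normal(0.5, 1.2, 500)
mle = np.array([data.mean(), np.log(data.std())])   # exact normal MLE
H = numerical_hessian(lambda p: neg_loglik(p, data), mle)
se = np.sqrt(np.diag(np.linalg.inv(H)))
# se[0] matches the textbook standard error sigma_hat / sqrt(n)
print(se[0], data.std() / np.sqrt(len(data)))
```

    The speed advantage over the bootstrap is clear from the structure: one Hessian evaluation at the optimum versus hundreds of refits on resampled data.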

  14. Optimization Method of a Low Cost, High Performance Ceramic Proppant by Orthogonal Experimental Design

    NASA Astrophysics Data System (ADS)

    Zhou, Y.; Tian, Y. M.; Wang, K. Y.; Li, G.; Zou, X. W.; Chai, Y. S.

    2017-09-01

    This study focused on an optimization method for a ceramic proppant with both low cost and high performance that meets the requirements of the Chinese Petroleum and Gas Industry Standard (SY/T 5108-2006). An orthogonal experimental design of L9(3⁴) was employed to study the significance sequence of three factors: the weight ratio of white clay to bauxite, the dolomite content, and the sintering temperature. For crush resistance, both range analysis and variance analysis showed that the optimal experimental condition was a white clay to bauxite weight ratio of 3/7, a dolomite content of 3 wt.%, and a temperature of 1350°C. For bulk density, the most important factor was the sintering temperature, followed by the dolomite content and then the ratio of white clay to bauxite.
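    Range analysis of an orthogonal array reduces to averaging the response at each level of each factor and ranking the factors by the spread of those level means. A sketch with the standard L9(3⁴) array and hypothetical responses (not the paper's measurements):

```python
import numpy as np

# Standard L9(3^4) orthogonal array, levels coded 0..2; only the first
# three columns are used for the three factors.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
# Hypothetical crush-resistance responses for the nine runs
y = np.array([8.1, 7.4, 6.9, 7.8, 7.2, 8.5, 6.5, 8.8, 7.6])

# Range analysis: mean response at each level of each factor; the
# factor with the largest range of level means is the most influential.
for f in range(3):
    level_means = np.array([y[L9[:, f] == lev].mean() for lev in range(3)])
    spread = level_means.max() - level_means.min()
    print(f"factor {f}: level means {np.round(level_means, 2)}, range {spread:.2f}")
```

    Orthogonality guarantees each level of each factor appears equally often, so nine runs suffice to screen three 3-level factors instead of the 27 runs a full factorial would need.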

  15. Circular Samples as Objects for Magnetic Resonance Imaging - Mathematical Simulation, Experimental Results

    NASA Astrophysics Data System (ADS)

    Frollo, Ivan; Krafčík, Andrej; Andris, Peter; Přibil, Jiří; Dermek, Tomáš

    2015-12-01

    Circular samples are frequent objects of in vitro investigation using imaging methods based on magnetic resonance principles. The goal of our investigation is the imaging of thin planar layers without using the slice-selection procedure, i.e., purely 2D imaging, as opposed to imaging selected layers of samples in circular vessels or Eppendorf tubes, which necessarily uses slice selection. Although standard imaging methods were used, some specific issues arise when these procedures are modeled mathematically. In this paper several mathematical models are presented and compared with real experimental results. Circular magnetic samples were placed in the homogeneous magnetic field of a low-field imager based on nuclear magnetic resonance. An MRI 0.178 Tesla ESAOTE Opera imager was used for experimental verification.

  16. Efficient spot size converter for higher-order mode fiber-chip coupling.

    PubMed

    Lai, Yaxiao; Yu, Yu; Fu, Songnian; Xu, Jing; Shum, Perry Ping; Zhang, Xinliang

    2017-09-15

    We propose and demonstrate a silicon-based spot size converter (SSC), composed of two identical tapered channel waveguides and a Y-junction. The SSC is designed for first-order mode fiber-to-chip coupling on the basis of mode petal separation and the recombination method. Compared with a traditional on-chip SSC, this method is superior with reduced coupling loss when dealing with a higher-order mode. To the best of our knowledge, we present the first experimental observations of a higher-order SSC which is fully compatible with a standard fabrication process. Average coupling losses of 3 and 5.5 dB are predicted by simulation and demonstrated experimentally. A fully covered 3 dB bandwidth over a 1515-1585 nm wavelength range is experimentally observed.

  17. Detection of Salmonella sp in chicken cuts using immunomagnetic separation

    PubMed Central

    de Cássia dos Santos da Conceição, Rita; Moreira, Ângela Nunes; Ramos, Roberta Juliano; Goularte, Fabiana Lemos; Carvalhal, José Beiro; Aleixo, José Antonio Guimarães

    2008-01-01

    Immunomagnetic separation (IMS) is a technique that has been used to increase sensitivity and specificity and to decrease the time required for detection of Salmonella in foods by different methodologies. In this work we report the development of a method for detection of Salmonella in chicken cuts using in-house antibody-sensitized microspheres combined with conventional plating on selective agar (IMS-plating). First, protein A-coated microspheres were sensitized with polyclonal antibodies against lipopolysaccharide and flagella from salmonellae and used to standardize a procedure for capturing Salmonella Enteritidis from pure cultures and detecting it on selective agar. Subsequently, samples of chicken meat experimentally contaminated with S. Enteritidis were analyzed immediately after contamination and after 24 h of refrigeration using three enrichment protocols. The detection limit of the IMS-plating procedure after standardization with pure culture was about 2x10 CFU/mL. The protocol using non-selective enrichment for 6-8 h, selective enrichment for 16-18 h, and post-enrichment for 4 h gave the best results for S. Enteritidis detection by IMS-plating in experimentally contaminated meat. IMS-plating using this protocol was compared to the standard culture method for Salmonella detection in naturally contaminated chicken cuts and yielded 100% sensitivity and 94% specificity. The method developed, using in-house prepared magnetic microspheres for IMS and plating on selective agar, shortened by at least one day the time required for detection of Salmonella in chicken products relative to the conventional culture method. PMID:24031199

  18. Semi-experimental equilibrium structure of pyrazinamide from gas-phase electron diffraction. How much experimental is it?

    NASA Astrophysics Data System (ADS)

    Tikhonov, Denis S.; Vishnevskiy, Yury V.; Rykov, Anatolii N.; Grikina, Olga E.; Khaikin, Leonid S.

    2017-03-01

    A semi-experimental equilibrium structure of free molecules of pyrazinamide has been determined for the first time using the gas electron diffraction method. The refinement was carried out using regularization of the geometry by calculated quantum-chemical parameters. The extent to which the final structure is experimental is discussed, and a numerical approach for estimating the amount of experimental information in the refined parameters is suggested. The following values of selected internuclear distances were determined (values in Å, with 1σ in parentheses): re(Cpyrazine-Cpyrazine)av = 1.397(2), re(Npyrazine-Cpyrazine)av = 1.332(3), re(Cpyrazine-Camide) = 1.493(1), re(Namide-Camide) = 1.335(2), re(Oamide-Camide) = 1.219(1). The given standard deviations represent purely experimental uncertainties, without the influence of regularization.

  19. A combined experimental and computational thermodynamic study of difluoronitrobenzene isomers.

    PubMed

    Ribeiro da Silva, Manuel A V; Monte, Manuel J S; Lobo Ferreira, Ana I M C; Oliveira, Juliana A S A; Cimas, Álvaro

    2010-10-14

    This work reports the experimental and computational thermochemical study performed on three difluorinated nitrobenzene isomers: 2,4-difluoronitrobenzene (2,4-DFNB), 2,5-difluoronitrobenzene (2,5-DFNB), and 3,4-difluoronitrobenzene (3,4-DFNB). The standard (p° = 0.1 MPa) molar enthalpies of formation in the liquid phase of these compounds were derived from the standard molar energies of combustion, in oxygen, at T = 298.15 K, measured by rotating bomb combustion calorimetry. A static method was used to perform the vapor pressure study of the referred compounds allowing the construction of the phase diagrams and determination of the respective triple point coordinates, as well as the standard molar enthalpies of vaporization, sublimation, and fusion for two of the isomers (2,4-DFNB and 3,4-DFNB). For 2,5-difluoronitrobenzene, only liquid vapor pressures were measured enabling the determination of the standard molar enthalpies of vaporization. Combining the thermodynamic parameters of the compounds studied, the following standard (p° = 0.1 MPa) molar enthalpies of formation in the gaseous phase, at T = 298.15 K, were derived: Δ(f)H(m)° (2,4-DFNB, g) = -(296.3 ± 1.8) kJ · mol⁻¹, Δ(f)H(m)° (2,5-DFNB, g) = -(288.2 ± 2.1) kJ · mol⁻¹, and Δ(f)H(m)° (3,4-DFNB, g) = -(302.4 ± 2.1) kJ · mol⁻¹. Using the empirical scheme developed by Cox, several approaches were evaluated in order to identify the best method for estimating the standard molar gas phase enthalpies of formation of these compounds. The estimated values were compared to the ones obtained experimentally, and the approach providing the best comparison with experiment was used to estimate the thermodynamic behavior of the other difluorinated nitrobenzene isomers not included in this study. 
Additionally, the enthalpies of formation of these compounds along with the enthalpies of formation of the other isomers not studied experimentally, i.e., 2,3-DFNB, 2,6-DFNB, and 3,5-DFNB, were estimated using the composite G3MP2B3 approach together with adequate gas-phase working reactions. Furthermore, we also used this computational approach to calculate the gas-phase basicities, proton and electron affinities, and, finally, adiabatic ionization enthalpies.

  20. Statistical considerations for grain-size analyses of tills

    USGS Publications Warehouse

    Jacobs, A.M.

    1971-01-01

    Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. 
Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. ?? 1971 Plenum Publishing Corporation.
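    The simplified test described above, comparing the mean of the subsample measurements with the standard-population mean, can be sketched as follows (the measurement values and the critical t are hypothetical):

```python
import math

def passes_comparison(subsamples, std_mean, std_sd, t_crit=2.306):
    """Compare the mean of n subsample measurements with the standard
    population's mean, using the standard population's spread. t_crit
    is a hypothetical two-sided critical value for the chosen
    significance level; the paper derives the equivalent cutoff
    graphically from a family of hyperbolas."""
    n = len(subsamples)
    mean_new = sum(subsamples) / n
    t = abs(mean_new - std_mean) / (std_sd / math.sqrt(n))
    return t <= t_crit  # True: accept; False: repeat the measurements

# Hypothetical sand-fraction percentages from 4 subsamples of one till
print(passes_comparison([34.1, 35.0, 33.8, 34.6], std_mean=34.5, std_sd=1.2))
```

    A sample whose mean drifts far from the standard population fails the test and is re-measured, mirroring the accept/repeat decision made graphically with the hyperbolas.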

  1. In situ Biofilm Quantification in Bioelectrochemical Systems by using Optical Coherence Tomography.

    PubMed

    Molenaar, Sam D; Sleutels, Tom; Pereira, Joao; Iorio, Matteo; Borsje, Casper; Zamudio, Julian A; Fabregat-Santiago, Francisco; Buisman, Cees J N; Ter Heijne, Annemiek

    2018-04-25

    Detailed studies of microbial growth in bioelectrochemical systems (BESs) are required for their suitable design and operation. Here, we report the use of optical coherence tomography (OCT) as a tool for in situ and noninvasive quantification of biofilm growth on electrodes (bioanodes). An experimental platform is designed and described in which transparent electrodes are used to allow real-time, 3D biofilm imaging. The accuracy and precision of the developed method are assessed by relating the OCT results to well-established standards for biofilm quantification (chemical oxygen demand (COD) and total N content), and the results show high correspondence to these standards. Biofilm thickness observed by OCT ranged between 3 and 90 μm for experimental durations ranging from 1 to 24 days. This translated to growth yields between 38 and 42 mg CODbiomass per g CODacetate at an anode potential of -0.35 V versus Ag/AgCl. Time-lapse observations of an experimental run performed in duplicate show that the microbial growth yield obtained by the developed method is highly reproducible. As such, we identify OCT as a powerful tool for conducting in-depth characterizations of microbial growth dynamics in BESs. Additionally, the presented platform allows concomitant application of this method with various optical and electrochemical techniques. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  2. A Practical Method for Identifying Significant Change Scores

    ERIC Educational Resources Information Center

    Cascio, Wayne F.; Kurtines, William M.

    1977-01-01

    A test of significance for identifying individuals who are most influenced by an experimental treatment as measured by pre-post test change score is presented. The technique requires true difference scores, the reliability of obtained differences, and their standard error of measurement. (Author/JKS)

  3. Development of a standardized and safe airborne antibacterial assay, and its evaluation on antibacterial biomimetic model surfaces.

    PubMed

    Al-Ahmad, Ali; Zou, Peng; Solarte, Diana Lorena Guevara; Hellwig, Elmar; Steinberg, Thorsten; Lienkamp, Karen

    2014-01-01

    Bacterial infection of biomaterials is a major concern in medicine, and different kinds of antimicrobial biomaterials have been developed to deal with this problem. To test the antimicrobial performance of these biomaterials, the airborne bacterial assay is used, which involves the formation of biohazardous bacterial aerosols. We here describe a new experimental set-up that allows safe handling of such pathogenic aerosols and standardizes critical parameters of this otherwise intractable and strongly user-dependent assay. With this new method, reproducible, thorough antimicrobial data (colony-forming unit counts and live/dead staining) were obtained. Poly(oxonorbornene)-based Synthetic Mimics of Antimicrobial Peptides (SMAMPs) were used as antimicrobial test samples. The assay was able to differentiate even between subtle sample differences, such as different sample thicknesses. With this new set-up, the airborne bacterial assay was thus established as a useful, reliable, and realistic experimental method to simulate the contamination of biomaterials with bacteria, for example in an intraoperative setting.

  4. Structural characterization and numerical simulations of flow properties of standard and reservoir carbonate rocks using micro-tomography

    NASA Astrophysics Data System (ADS)

    Islam, Amina; Chevalier, Sylvie; Sassi, Mohamed

    2018-04-01

    With advances in imaging techniques and computational power, Digital Rock Physics (DRP) is becoming an increasingly popular tool for characterizing reservoir samples and determining their internal structure and flow properties. In this work, we present the details of imaging, segmentation, and numerical simulation of single-phase flow through a standard homogeneous Silurian dolomite core plug as well as a heterogeneous sample from a carbonate reservoir. We develop a procedure that integrates experimental results into the segmentation step to calibrate the porosity. We also compare two different numerical tools for the simulation: Avizo Fire Xlab Hydro, which solves the Stokes equations via the finite volume method, and Palabos, which solves the same equations using the lattice Boltzmann method. Representative elementary volume (REV) and isotropy studies are conducted on the two samples, and we show how DRP can be a useful tool for characterizing rock properties that are time-consuming and costly to obtain experimentally.

  5. An experimental method to simulate incipient decay of wood basidiomycete fungi

    Treesearch

    Simon Curling; Jerrold E. Winandy; Carol A. Clausen

    2000-01-01

    At very early stages of decay of wood by basidiomycete fungi, strength loss can be measured from wood before any measurable weight loss. Therefore, strength loss is a more efficient measure of incipient decay than weight loss. However, common standard decay tests (e.g. EN 113 or ASTM D2017) use weight loss as the measure of decay. A method was developed that allowed...

  6. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow alignment of retention times from different injections. Five interpolation methods were investigated: linear interpolation followed by cross-correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis (PARAFAC) to determine the relative area of each peak in each injection. A calibration curve was generated for the simulated data set, and the standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique gave the lowest standard error of prediction and average relative standard deviation for the simulated data. When the interpolation techniques were applied to the experimental data, however, most of the methods did not produce statistically different relative peak areas from one another. Nevertheless, their performance was improved relative to the PARAFAC results obtained when analyzing unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
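    Gaussian-fitting interpolation of a sparsely sampled first-dimension peak can be sketched by linearization: the log of a Gaussian is quadratic in time, so a degree-2 polynomial fit recovers the apex position and width between the sampled points. Synthetic, noise-free data for illustration:

```python
import numpy as np

# A first-dimension peak is sampled only once per second-dimension
# cycle; fitting a Gaussian recovers its apex between samples.
# Synthetic peak: apex at 5.3 min, sigma 0.8 min.
t = np.arange(2.0, 9.0, 1.0)                 # sparse sampling times
y = 10 * np.exp(-0.5 * ((t - 5.3) / 0.8) ** 2)

# The log of a Gaussian is quadratic in t, so a degree-2 polyfit suffices
c2, c1, c0 = np.polyfit(t, np.log(y), 2)
apex = -c1 / (2 * c2)
sigma = np.sqrt(-1 / (2 * c2))
print(round(apex, 3), round(sigma, 3))       # → 5.3 0.8
```

    With noisy data a weighted fit or a nonlinear least-squares Gaussian fit is more robust, since the log transform amplifies noise in the peak tails.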

  7. An experimental investigation devoted to determine heat transfer characteristics in a radiant ceiling heating system

    NASA Astrophysics Data System (ADS)

    Koca, Aliihsan; Acikgoz, Ozgen; Çebi, Alican; Çetin, Gürsel; Dalkilic, Ahmet Selim; Wongwises, Somchai

    2018-02-01

    Investigation of the heated-ceiling method can be considered a newer research area compared with the common wall heating-cooling and cooled-ceiling methods. In this work, the heat transfer characteristics of a heated radiant ceiling system were investigated experimentally. Different configurations of a single-room design were used to determine the convective and radiative heat transfer rates. The arrangement of the test chamber, the hydraulic circuit and radiant panels, the measurement equipment, and the experimental method, including an uncertainty analysis, are described in detail with reference to the relevant international standards. The total heat transfer from the panels was calculated as the sum of radiation to the unheated surfaces, convection to the air, and conduction heat loss from the backside of the panels. The integral expression for the view factors was evaluated numerically using a Matlab code. By means of this experimental chamber, the radiative, convective, and total heat-transfer coefficients, along with the heat flux from the ceiling to the unheated surrounding surfaces, were calculated. Moreover, the convective, radiative, and total heat flux and heat output results of 28 different experimental case studies are tabulated for other researchers to validate their theoretical models and empirical correlations.

  8. Improving the Knowledge and Attitude on 'Standard Days Method' of Family Planning Through a Promotional Program Among Indian Postgraduate Students.

    PubMed

    Menachery, Philby Babu; Noronha, Judith Angelitta; Fernanades, Sweety

    2017-08-01

    The 'Standard Days Method' is a fertility awareness-based method of family planning that identifies day 8 through day 19 of the menstrual cycle as the fertile days during which a woman is likely to conceive with unprotected intercourse. The study aimed to determine the effectiveness of a promotional program on the 'Standard Days Method' in terms of improving knowledge and attitude scores. A pre-experimental one-group pretest-posttest research design was adopted. The sample comprised 365 female postgraduate students from selected colleges of Udupi Taluk, Karnataka. The data were collected using self-administered questionnaires, and the plan for the promotional program was also established. The findings were analyzed using descriptive and inferential statistics. The mean knowledge score increased from a pretest value of 8.96 ± 3.84 to a posttest value of 32.64 ± 5.59, and the promotional program was found to be effective in improving both the knowledge (p < 0.001) and attitude (p < 0.001) of the postgraduate students. This will enable women to adopt this method, plan their pregnancies naturally, and avoid the side effects of oral contraceptives.

  9. Development of a standardized sequential extraction protocol for simultaneous extraction of multiple actinide elements

    DOE PAGES

    Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.; ...

    2017-02-07

    Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. Results produced a standardized procedure after creating more defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.

  10. Method development and subsequent survey analysis of biological tissues for platinum, lead, and manganese content.

    PubMed Central

    Yoakum, A M; Stewart, P L; Sterrett, J E

    1975-01-01

    An emission spectrochemical method is described for the determination of trace quantities of platinum, lead, and manganese in biological tissues. Total energy burns in an argon-oxygen atmosphere are employed. Sample preparation, conditions of analysis, and preparation of standards are discussed. The precision of the method is consistently better than +/- 15%, and comparative analyses indicate comparable accuracies. Data obtained for experimental rat tissues and for selected autopsy tissues are presented. PMID:1157798

  11. Triadic Gaze Intervention for Young Children with Physical Disabilities

    PubMed Central

    Olswang, Lesley B.; Dowden, Patricia; Feuerstein, Julie; Greenslade, Kathryn; Pinder, Gay Lloyd; Fleming, Kandace

    2018-01-01

    Purpose This randomized controlled study investigated whether a supplemental treatment designed to teach triadic gaze (TG) as a signal of coordinated joint attention (CJA) would yield a significantly greater increase in TG in the experimental versus the control group. Method Eighteen 10- to 24-month-old children with severe motor impairments were randomly assigned to an experimental (n = 9) or control group (n = 9). For approximately 29 sessions over 17 weeks, experimental participants received TG treatment twice weekly with a speech-language pathologist (SLP) in addition to standard practice. Controls received only standard practice from birth-to-three therapists. Coders masked to group assignment coded TG productions with an unfamiliar SLP at baseline, every three weeks during the experimental phase, and at the final measurement session. Results TG increased across groups from baseline to final measurement, with the experimental group showing slightly greater change. Performance trends were examined using experimental-phase moving averages. Comparisons revealed significant differences between groups at two time points (at 12 weeks, r = .30, a medium effect; at the end of the phase, r = .50, a large effect). Conclusion Results suggest the promise of a short-term, focused treatment to teach TG as a behavioral manifestation of CJA to children with severe physical disabilities. PMID:24686825

  12. Survey of basic medical researchers on the awareness of animal experimental designs and reporting standards in China.

    PubMed

    Ma, Bin; Xu, Jia-Ke; Wu, Wen-Jing; Liu, Hong-Yan; Kou, Cheng-Kun; Liu, Na; Zhao, Lulu

    2017-01-01

    To investigate the awareness and use of the Systematic Review Center for Laboratory Animal Experimentation (SYRCLE) risk-of-bias tool, the Animal Research: Reporting of In Vivo Experiments (ARRIVE) reporting guidelines, and the Gold Standard Publication Checklist (GSPC) among basic medical researchers conducting animal experimental studies in China. A national questionnaire-based survey targeting basic medical researchers was carried out in China to collect basic information and assess awareness of SYRCLE's risk-of-bias tool, the ARRIVE guidelines, the GSPC, and factors for controlling the risk of bias in animal experiments. EpiData 3.1 was used for data entry, and Microsoft Excel 2013 was used for statistical analysis. The number of cases (n) and percentage (%) of classified information were described, and comparisons between groups (i.e., current students vs. research staff) were performed using the chi-square test. A total of 298 questionnaires were distributed and 272 responses were received, of which 266 were valid (118 from current students and 148 from research staff). Among the 266 participants, only 15.8% were aware of SYRCLE's risk-of-bias tool, with a significant difference between the two groups (P = 0.003), and the awareness rates of the ARRIVE guidelines and the GSPC were only 9.4% and 9.0%, respectively; 58.6% of participants believed that the reporting of animal experimental studies in the Chinese literature was inadequate, again with a significant difference between the two groups (P = 0.004). In addition, only about one third of the participants had read systematic reviews and meta-analyses of animal experimental studies; only 16/266 (6.0%) had carried out or participated in, and 11/266 (4.1%) had published, systematic reviews or meta-analyses of animal experimental studies. The awareness and use rates of SYRCLE's risk-of-bias tool, the ARRIVE guidelines, and the GSPC were low among Chinese basic medical researchers. Therefore, specific measures are needed to promote these standards and to introduce them into the guidelines of Chinese domestic journals as soon as possible, so as to raise awareness and increase use among researchers and journal editors, thereby improving the quality of animal experimental methods and reports.

  13. Estimating Durability of Reinforced Concrete

    NASA Astrophysics Data System (ADS)

    Varlamov, A. A.; Shapovalov, E. L.; Gavrilov, V. B.

    2017-11-01

    In this article we propose to use the methods of fracture mechanics to evaluate concrete durability. To implement these methods, special techniques were developed for evaluating the crack-resistance characteristics of concrete directly in the structure. Various experimental studies were carried out to determine the crack-resistance characteristics and the modulus of elasticity of concrete during its service life. The results obtained with the proposed techniques were compared with those obtained with the standard methods for determining the crack-resistance characteristics of concrete.

  14. Hydroelastic slamming response in the evolution of a flip-through event during shallow-liquid sloshing

    NASA Astrophysics Data System (ADS)

    Lugni, C.; Bardazzi, A.; Faltinsen, O. M.; Graziani, G.

    2014-03-01

    The evolution of a flip-through event [6] upon a vertical, deformable wall during shallow-water sloshing in a 2D tank is analyzed, with specific focus on the role of hydroelasticity. An aluminium plate, whose dimensions are Froude-scaled to reproduce the first wet natural frequency of the typical structural panel of a Mark III containment system, is used. (The Mark III containment system is a membrane-type tank used in Liquefied Natural Gas (LNG) carriers to contain the LNG. A typical structural panel is composed of two metallic membranes and two independent thermal insulation layers: the first membrane contains the LNG, while the second ensures redundancy in case of leakage.) The plate is clamped at its vertical ends to a fully rigid vertical wall of the tank and is free on its lateral sides; hence, in a 2D flow approximation, it can be suitably modelled as a double-clamped Euler beam. The hydroelastic effects are assessed by cross-analyzing the experimental data from the images recorded by a fast camera, the strain measurements along the deformable panel, and the pressure measurements on the rigid wall below the elastic plate. The same experiments are also carried out with the deformable plate replaced by a fully stiff panel, with pressure transducers mounted at the same positions as the strain gauges used for the deformable plate. The comparison between the rigid and elastic cases allows the role of hydroelasticity to be better defined. The analysis identified three regimes characterizing the hydroelastic evolution: a quasi-static deformation of the beam (regime I) precedes a strongly hydroelastic behavior (regime II), in which added-mass effects are relevant; finally, the free-vibration phase (regime III) occurs.
    A hybrid method, combining numerical modelling with experimental data from the tests with the fully rigid plate, is proposed to examine the hydroelastic effects. Within this approach, the measurements provide the experimental loads acting on the rigid plate, while the numerical solution enables a more detailed analysis by giving additional information not available from the experiments. In more detail, an Euler beam equation is used to model the plate numerically, with the added-mass contribution estimated in time; the resulting hybrid method thus accounts for the variation of the added mass associated with the instantaneous wetted length of the beam, estimated from the experimental images. Moreover, the forcing hydrodynamic load is prescribed using the experimental pressure distribution measured in the rigid case. The experimental data for the elastic beam are compared with the numerical results of the hybrid model and with those of the standard methods used at the design stage. The comparison against the experimental data shows an overall satisfactory prediction by the hybrid model. The maximum peak pressure predicted by the standard methods agrees with the result of the hybrid model only when the added-mass effect is considered; however, the standard methods are not able to properly estimate the temporal evolution of the plate deformation.
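
    The double-clamped Euler beam model invoked above has a textbook first natural frequency, and a uniform added mass lowers it; this is the mechanism behind the "wet" versus "dry" frequency distinction. The sketch below is not the authors' hybrid code: it is a minimal Python illustration of the standard clamped-clamped formula, with assumed plate properties and an assumed uniform added mass (not the experimental panel or the time-varying wetted-length model).

```python
import math

def clamped_clamped_f1(E, rho, h, L, m_added=0.0):
    """First natural frequency (Hz) of a clamped-clamped Euler beam of
    rectangular cross-section, taken per unit width.

    E       : Young's modulus, Pa
    rho     : material density, kg/m^3
    h       : plate thickness, m
    L       : span between the clamped ends, m
    m_added : added (fluid) mass per unit area, kg/m^2 (0 gives the dry beam)
    """
    lam1 = 4.730                 # first root of cos(lam) * cosh(lam) = 1
    I = h**3 / 12.0              # second moment of area per unit width, m^3
    m = rho * h + m_added        # mass per unit length per unit width, kg/m^2
    return (lam1**2 / (2.0 * math.pi * L**2)) * math.sqrt(E * I / m)

# Assumed, illustrative aluminium plate: 10 mm thick, 1 m span
f_dry = clamped_clamped_f1(E=70e9, rho=2700.0, h=0.01, L=1.0)
f_wet = clamped_clamped_f1(E=70e9, rho=2700.0, h=0.01, L=1.0, m_added=50.0)
```

    The wet frequency is always below the dry one, which is why the panel dimensions are Froude-scaled against the wet, not the dry, natural frequency.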

  15. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    PubMed

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned: it was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of a measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that follow from the open-vessel artifact hypothesis: shorter stems, with more open vessels, would appear more vulnerable than longer stems; standard centrifuge-based curves would appear more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. The experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered, and centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared with the dehydration curve; this divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves for samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artifacts and standardizing vulnerability-curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
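
    Vulnerability curves of the kind compared above are commonly summarized by fitting a sigmoid of percent loss of conductivity (PLC) against xylem pressure and reporting P50, the pressure at 50% loss. The snippet below is a hedged illustration, not the authors' analysis: it fits the widely used two-parameter sigmoid PLC = 100 / (1 + exp(a (P − P50))) to synthetic data (all values invented) with a simple grid search, to show what "curve shape" and "P50" mean operationally.

```python
import math

def plc_sigmoid(p, a, p50):
    """Sigmoidal vulnerability curve: percent loss of conductivity at
    xylem pressure p (MPa, negative); p50 is the pressure at 50% loss."""
    return 100.0 / (1.0 + math.exp(a * (p - p50)))

# Synthetic (invented) PLC observations generated from a known curve
true_a, true_p50 = 2.0, -3.0
pressures = [-0.5, -1.0, -1.5, -2.0, -2.5, -3.0, -3.5, -4.0, -5.0, -6.0]
plc_obs = [plc_sigmoid(p, true_a, true_p50) for p in pressures]

# Brute-force least-squares grid search over (a, p50)
best = None
for i in range(40):                       # a in [0.5, 4.4]
    ai = 0.5 + 0.1 * i
    for j in range(100):                  # p50 in [-6.0, -1.05]
        p50j = -6.0 + 0.05 * j
        sse = sum((plc_sigmoid(p, ai, p50j) - y) ** 2
                  for p, y in zip(pressures, plc_obs))
        if best is None or sse < best[0]:
            best = (sse, ai, p50j)

sse, a_fit, p50_fit = best
```

    In practice a nonlinear least-squares routine would replace the grid search; the grid keeps the sketch dependency-free.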

  16. Standardizing practices: a socio-history of experimental systems in classical genetic and virological cancer research, ca. 1920-1978.

    PubMed

    Fujimura, J H

    1996-01-01

    This paper presents a narrative history of technologies in cancer research circa 1920-1978 and a theoretical perspective on the complex, intertwined relationships between scientific problems, material practices and technologies, concepts and theories, and other historical circumstances. The history presents several active lines of research and technology development in the genetics of cancer in the United States which were constitutive of proto-oncogene work in its current form. I write this history from the perspective of technology development. Scientists participating in cancer research created tools with which to study their problems of interest, but the development of the tools also influenced the questions asked and answered in the form of concepts and theories developed. These tools included genetic ideas of the 1920s, inbred mouse colonies, chemicals and antibiotics developed during World War Two, tissue cultures and their technical procedures, and viruses. I examine these tools as standardized experimental systems that standardized materials as well as practices in laboratories. Inbred animals, tissue culture materials and methods, and tumor viruses as experimental systems gave materiality to "genes" and "cancer". They are technical-natural objects that stand in for nature in the laboratory.

  17. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can simulate real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods based on Gaussian processes for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input spaces, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The parameters of the proposed model are estimated using Markov chain Monte Carlo (MCMC) computational techniques, and different strategies have been applied to improve mixing. We illustrate our method on two artificial examples and a real application concerning the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  18. The effect of instructional methodology on high school students' natural sciences standardized test scores

    NASA Astrophysics Data System (ADS)

    Powell, P. E.

    Educators have recently come to consider inquiry-based instruction a more effective method than didactic instruction. Experience-based learning theory suggests that student performance is linked to teaching method; however, research is limited on inquiry teaching and its effectiveness in preparing students to perform well on standardized tests. The purpose of the study was to investigate whether one of these two teaching methodologies was more effective in increasing student performance on standardized science tests. The quasi-experimental quantitative study comprised two stages. Stage 1 used a survey to identify the teaching methods of a convenience sample of 57 teacher participants and determined the level of inquiry used in instruction to place participants into instructional groups (the independent variable). Stage 2 used analysis of covariance (ANCOVA) to compare posttest scores on a standardized exam by teaching method. Additional analyses examined differences in science achievement by ethnicity, gender, and socioeconomic status for each teaching methodology. Results demonstrated a statistically significant gain in test scores when students were taught using inquiry-based instruction. Subpopulation analyses indicated that all groups showed improved mean standardized test scores except African American students. The findings benefit teachers and students by presenting data supporting a method of content delivery that increases teacher efficacy and produces students with greater cognition of science content, meeting the school's mission and goals.

  19. The Infeasibility of Experimental Quantification of Life-Critical Software Reliability

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Finelli, George B.

    1991-01-01

    This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or to fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. The implications of the recent multi-version software experiments also support this affirmation.

  20. Development and Evaluation of Event-Specific Quantitative PCR Method for Genetically Modified Soybean MON87701.

    PubMed

    Tsukahara, Keita; Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Nishimaki-Mogami, Tomoko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A real-time PCR-based analytical method was developed for the event-specific quantification of the genetically modified (GM) soybean event MON87701. First, a standard plasmid for MON87701 quantification was constructed. The conversion factor (Cf) required to calculate the amount of genetically modified organism (GMO) was experimentally determined for a real-time PCR instrument; the determined Cf was 1.24. For the evaluation of the developed method, a blind test was carried out in an inter-laboratory trial. Trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDr), respectively. The determined biases and RSDr values were less than 30% and 13%, respectively, at all evaluated concentrations. The limit of quantitation of the method was 0.5%, and the developed method should thus be applicable to practical analyses for the detection and quantification of MON87701.
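
    The conversion factor links the measured copy-number ratio to a GMO percentage. As a hedged sketch (the function name and the example copy numbers are illustrative, not data from the paper), the usual two-target calculation looks like this:

```python
def gmo_percent(event_copies, endogenous_copies, cf):
    """Event-specific GMO content (%) from real-time PCR copy numbers.

    event_copies      : copies of the event-specific target (e.g. MON87701)
    endogenous_copies : copies of the taxon-specific endogenous reference gene
    cf                : experimentally determined conversion factor
    """
    return (event_copies / endogenous_copies) / cf * 100.0

# Illustrative numbers only: a copy ratio of 0.0124 with Cf = 1.24 gives 1.0% GMO
content = gmo_percent(event_copies=124.0, endogenous_copies=10000.0, cf=1.24)
```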

  1. High-throughput real-time quantitative reverse transcription PCR.

    PubMed

    Bookout, Angie L; Cummins, Carolyn L; Mangelsdorf, David J; Pesola, Jean M; Kramer, Martha F

    2006-02-01

    Extensive detail on the application of the real-time quantitative polymerase chain reaction (QPCR) for the analysis of gene expression is provided in this unit. The protocols are designed for high-throughput, 384-well-format instruments, such as the Applied Biosystems 7900HT, but may be modified to suit any real-time PCR instrument. QPCR primer and probe design and validation are discussed, and three relative quantitation methods are described: the standard curve method, the efficiency-corrected DeltaCt method, and the comparative cycle time, or DeltaDeltaCt method. In addition, a method is provided for absolute quantification of RNA in unknown samples. RNA standards are subjected to RT-PCR in the same manner as the experimental samples, thus accounting for the reaction efficiencies of both procedures. This protocol describes the production and quantitation of synthetic RNA molecules for real-time and non-real-time RT-PCR applications.
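
    The comparative cycle time (DeltaDeltaCt) method described above reduces to a simple exponent calculation. The sketch below uses invented Ct values and assumes ~100% amplification efficiency (a doubling per cycle), which is exactly the assumption that the efficiency-corrected variant relaxes; it is an illustration of the textbook formula, not the unit's protocol code.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_calibrator, ct_ref_calibrator):
    """Relative expression by the comparative Ct (DeltaDeltaCt) method.

    Assumes ~100% PCR efficiency, i.e. product doubles every cycle.
    """
    delta_ct_sample = ct_target_sample - ct_ref_sample            # normalize to reference gene
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator
    delta_delta_ct = delta_ct_sample - delta_ct_calibrator
    return 2.0 ** (-delta_delta_ct)

# Invented Ct values: the target crosses threshold 2 cycles earlier in the
# sample than in the calibrator (after normalization) -> 4-fold expression
fold = fold_change_ddct(24.0, 20.0, 26.0, 20.0)
```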

  2. Application of micromechanics to the characterization of mortar by ultrasound.

    PubMed

    Hernández, M G; Anaya, J J; Izquierdo, M A G; Ullate, L G

    2002-05-01

    Mechanical properties of concrete and mortar structures can be estimated by ultrasonic non-destructive testing: when the ultrasonic velocity is known, there are standardized methods based on treating the concrete as a homogeneous material. Cement composites, however, are heterogeneous, and their porosity adversely affects the mechanical properties of structures. This work studies the impact of porosity on mechanical properties by considering concrete a multiphase material. A micromechanical model is applied in which the material is treated as consisting of two phases, a solid matrix and pores. From this model, a set of expressions is obtained relating the acoustic velocity and the Young's modulus of mortar. The experimental work is based on non-destructive and destructive tests on mortar samples of varying porosity. A comparison between the micromechanical and standard methods shows positive results for the proposed method.
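
    The abstract does not reproduce the micromechanical expressions, but the underlying link between ultrasonic velocity and stiffness in the homogeneous (standardized) approach is the textbook bulk longitudinal-wave relation E = rho * v_p^2 * (1 + nu)(1 - 2 nu)/(1 - nu). The sketch below illustrates that standard relation only, not the authors' two-phase model, with assumed mortar properties:

```python
def youngs_modulus_from_vp(vp, rho, nu):
    """Young's modulus (Pa) from the longitudinal (P-wave) ultrasonic
    velocity in a homogeneous, isotropic solid.

    vp  : P-wave velocity, m/s
    rho : density, kg/m^3
    nu  : Poisson's ratio
    """
    return rho * vp**2 * (1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu)

# Assumed, illustrative mortar values (not the paper's data)
E = youngs_modulus_from_vp(vp=4000.0, rho=2200.0, nu=0.2)   # ~31.7 GPa
```

    Porosity lowers both rho and v_p, which is why a two-phase (matrix plus pores) model predicts a lower effective modulus than this homogeneous formula.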

  3. Development of a primary standard for dynamic pressure based on drop weight method covering a range of 10 MPa-400 MPa

    NASA Astrophysics Data System (ADS)

    Salminen, J.; Högström, R.; Saxholm, S.; Lakka, A.; Riski, K.; Heinonen, M.

    2018-04-01

    In this paper we present the development of a primary standard for dynamic pressure based on the drop weight method. At present, dynamic pressure transducers are typically calibrated using reference transducers, which are themselves calibrated against static pressure standards. Because the dynamic and static characteristics of pressure transducers may differ significantly from each other, it is important that these transducers be calibrated against dynamic pressure standards. In the method developed at VTT Technical Research Centre of Finland Ltd, Centre for Metrology MIKES, a pressure pulse is generated by the impact between a dropping weight and the piston of a liquid-filled piston-cylinder assembly. Traceability to SI units is realized through interferometric measurement of the acceleration of the dropping weight during impact, the effective area of the piston-cylinder assembly, and the mass of the weight. Based on experimental validation and an uncertainty evaluation, the developed primary standard provides traceability for peak pressures in the range from 10 MPa to 400 MPa with a few-millisecond pulse width and a typical relative expanded uncertainty (k = 2) of 1.5%. The performance of the primary standard is demonstrated by test calibrations of two dynamic pressure transducers.
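
    The traceability chain described above amounts to Newton's second law applied to the falling weight: the pressure on the piston is the measured deceleration times the mass, divided by the effective area. As a hedged sketch with invented numbers (not MIKES data; the piston diameter and deceleration are assumptions for illustration):

```python
import math

def dynamic_pressure(mass, acceleration, effective_area):
    """Dynamic pressure (Pa) in a drop-weight primary standard:
    p = m * a / A_eff, with the acceleration of the dropping weight
    measured interferometrically during impact.

    mass           : mass of the dropping weight, kg
    acceleration   : deceleration of the weight during impact, m/s^2
    effective_area : effective area of the piston-cylinder assembly, m^2
    """
    return mass * acceleration / effective_area

# Invented example: 5 kg weight, ~320 g deceleration, 10-mm-diameter piston
area = math.pi * 0.005**2                       # ~7.85e-5 m^2
p_peak = dynamic_pressure(5.0, 3140.0, area)    # ~200 MPa, inside the 10-400 MPa range
```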

  4. Biodegradability standards for carrier bags and plastic films in aquatic environments: a critical review

    PubMed Central

    Boardman, Carl; O'Callaghan, Kenneth; Delort, Anne-Marie; Song, Jim

    2018-01-01

    Plastic litter is encountered in aquatic ecosystems across the globe, including polar environments and the deep sea. To mitigate the adverse societal and ecological impacts of this waste, there has been debate on whether 'biodegradable' materials should be granted exemptions from plastic bag bans and levies. However, great care must be exercised when attempting to define this term, due to the broad and complex range of physical and chemical conditions encountered within natural ecosystems. Here, we review existing international industry standards and regional test methods for evaluating the biodegradability of plastics within aquatic environments (wastewater, unmanaged freshwater and marine habitats). We argue that current standards and test methods are insufficient in their ability to realistically predict the biodegradability of carrier bags in these environments, due to several shortcomings in experimental procedures and a paucity of information in the scientific literature. Moreover, existing biodegradability standards and test methods for aquatic environments do not involve toxicity testing or account for the potentially adverse ecological impacts of carrier bags, plastic additives, polymer degradation products or small (microscopic) plastic particles that can arise via fragmentation. Successfully addressing these knowledge gaps is a key requirement for developing new biodegradability standard(s) for lightweight carrier bags. PMID:29892374

  6. Chemical vapor deposition growth

    NASA Technical Reports Server (NTRS)

    Ruth, R. P.; Manasevit, H. M.; Kenty, J. L.; Moudy, L. A.; Simpson, W. I.; Yang, J. J.

    1976-01-01

    The chemical vapor deposition (CVD) method for the growth of Si sheet on inexpensive substrate materials is investigated. The objective is to develop CVD techniques for producing large areas of Si sheet on inexpensive substrate materials, with sheet properties suitable for fabricating solar cells meeting the technical goals of the Low Cost Silicon Solar Array Project. Specific areas covered include: (1) modification and test of existing CVD reactor system; (2) identification and/or development of suitable inexpensive substrate materials; (3) experimental investigation of CVD process parameters using various candidate substrate materials; (4) preparation of Si sheet samples for various special studies, including solar cell fabrication; (5) evaluation of the properties of the Si sheet material produced by the CVD process; and (6) fabrication and evaluation of experimental solar cell structures, using standard and near-standard processing techniques.

  7. Strain-Specific Induction of Experimental Autoimmune Prostatitis (EAP) in Mice

    PubMed Central

    Jackson, Christopher M.; Flies, Dallas B.; Mosse, Claudio A.; Parwani, Anil; Hipkiss, Edward L.; Drake, Charles G.

    2013-01-01

    BACKGROUND Prostatitis, a clinical syndrome characterized by pelvic pain and inflammation, is common in adult males. Although several induced and spontaneous murine models of prostatitis have been explored, the role of genetic background on induction has not been well-defined. METHODS Using a standard methodology for the induction of experimental autoimmune prostatitis (EAP), we investigated both acute and chronic inflammation on several murine genetic backgrounds. RESULTS In our colony, nonobese diabetic (NOD) mice evinced spontaneous prostatitis that was not augmented by immunization with rat prostate extract (RPE). In contrast, the standard laboratory strain Balb/c developed chronic inflammation in response to RPE immunization. Development of EAP in other strains was variable. CONCLUSIONS These data suggest that Balb/c mice injected with RPE may provide a useful model for chronic prostatic inflammation. PMID:23129407

  8. Considerations on the quantitative analysis of apparent amorphicity of milled lactose by Raman spectroscopy.

    PubMed

    Pazesh, Samaneh; Lazorova, Lucia; Berggren, Jonas; Alderborn, Göran; Gråsjö, Johan

    2016-09-10

    The main purpose of the study was to evaluate various pre-processing and quantification approaches for Raman spectra in order to quantify low levels of amorphous content in milled lactose powder. To improve the quantitative analysis, several spectral pre-processing methods were used to adjust for background effects. The effect of spectral noise on the variation of the determined amorphous content was also investigated theoretically by propagation-of-error analysis and compared with experimentally obtained values. Additionally, the applicability of a calibration method based on crystalline or amorphous domains to the estimation of amorphous content in milled lactose powder was discussed. Two straight-baseline pre-processing methods gave the best, and almost equal, performance. Among the subsequent quantification methods, PCA performed best, although classical least squares (CLS) analysis gave comparable results, while peak-parameter analysis proved inferior. The standard deviations of the experimentally determined percentage amorphous content were 0.94% and 0.25% for pure crystalline and pure amorphous samples, respectively, very close to the standard deviation values from propagated spectral noise. The reasonable conformity between the milled-sample spectra and the synthesized spectra indicated that physical mixtures with crystalline or amorphous domains are representative for estimating the apparent amorphous content in milled lactose. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
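
    Classical least squares, one of the quantification methods compared above, models a measured spectrum as a linear combination of pure-component reference spectra. The example below is a self-contained illustration with synthetic Gaussian "spectra" (all peak positions, widths, and the 10% amorphous fraction are invented), not the study's data or exact procedure:

```python
import numpy as np

shift = np.linspace(0, 200, 400)            # arbitrary Raman-shift axis

def gauss(center, width):
    return np.exp(-0.5 * ((shift - center) / width) ** 2)

# Synthetic pure-component reference spectra
s_cryst = gauss(80.0, 5.0) + 0.5 * gauss(120.0, 4.0)   # sharp "crystalline" peaks
s_amorp = gauss(95.0, 25.0)                            # broad "amorphous" halo

# Synthetic measured spectrum of a milled sample: 10% amorphous plus noise
rng = np.random.default_rng(1)
measured = 0.9 * s_cryst + 0.1 * s_amorp + rng.normal(0.0, 0.002, shift.size)

# CLS: solve measured ~= c_cryst * s_cryst + c_amorp * s_amorp in least squares
K = np.column_stack([s_cryst, s_amorp])
coeffs, *_ = np.linalg.lstsq(K, measured, rcond=None)
amorphous_fraction = coeffs[1] / coeffs.sum()
```

    In practice the baseline correction discussed in the abstract matters precisely because an uncorrected background leaks into the broad amorphous component of such a fit.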

  9. School Programs Targeting Stress Management in Children and Adolescents: A Meta-Analysis

    ERIC Educational Resources Information Center

    Kraag, Gerda; Zeegers, Maurice P.; Kok, Gerjo; Hosman, Clemens; Abu-Saad, Huda Huijer

    2006-01-01

    Introduction: This meta-analysis evaluates the effect of school programs targeting stress management or coping skills in school children. Methods: Articles were selected through a systematic literature search. Only randomized controlled trials or quasi-experimental studies were included. The standardized mean differences (SMDs) between baseline…

  10. Important Publications in the Area of Photovoltaic Performance |

    Science.gov Websites

    [Fragmentary listing of publications on photovoltaic performance recovered from the web page, including: Photoelectrochemical Water Splitting: Standards, Experimental Methods (2011, DOI: 978-0-12-385934-1); an article on solar energy systems testing, Solar Energy 73, 443-467 (2002); and D.R. Myers, K. Emery, and C. Gueymard, "Revising Performance Evaluation Methodologies for Energy Ratings," Proc. 24th IEEE Photovoltaic Specialists Conf.]

  11. Crystallographic Determination of Molecular Parameters for K2SiF6: A Physical Chemistry Laboratory Experiment.

    ERIC Educational Resources Information Center

    Loehlin, James H.; Norton, Alexandra P.

    1988-01-01

    Describes a crystallography experiment using both diffraction-angle and diffraction-intensity information to determine the lattice constant and a lattice-independent molecular parameter, while still employing standard X-ray powder diffraction techniques. Details the method, experimental procedure, and analysis for this activity. (CW)

  12. Testing Bernoulli's Law

    ERIC Educational Resources Information Center

    Ivanov, Dragia; Nikolov, Stefan; Petrova, Hristina

    2014-01-01

    In this paper we present three different methods for testing Bernoulli's law that are different from the standard "tube with varying cross-section." They are all applicable to high-school level physics education, with varying levels of theoretical and experimental complexity, depending on students' skills, and may even be…

  13. Pupils' Cognitive Activity Stimulation by Means of Physical Training

    ERIC Educational Resources Information Center

    Nekhoroshkov, Anatolij V.

    2016-01-01

    The article presents the research results of the physical activity influence on the intellectual performance of high school students. The methods of experiments and standardized observation were used. The efficiency of the cognitive activity was assessed by "Proof test" technique of B. Burdon. Within the experimental class, the program…

  14. Decoding Dynamic Brain Patterns from Evoked Responses: A Tutorial on Multivariate Pattern Analysis Applied to Time Series Neuroimaging Data.

    PubMed

    Grootswagers, Tijl; Wardle, Susan G; Carlson, Thomas A

    2017-04-01

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analyzing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces, these methods have only recently been applied to time series neuroimaging data such as MEG and EEG to address experimental questions in cognitive neuroscience. In a tutorial style review, we describe a broad set of options to inform future time series decoding studies from a cognitive neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to "decode" different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalization, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time series decoding experiments.
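
The per-timepoint decoding idea in the abstract can be sketched with a toy stand-in for the classifiers it discusses. This is not the authors' pipeline: it uses a simple nearest-class-centroid classifier with leave-one-out cross-validation on fabricated "sensor" data, purely to illustrate how decoding accuracy is estimated at one timepoint:

```python
import random

def decode_timepoint(trials, labels):
    """Leave-one-out accuracy of a nearest-class-centroid classifier at one timepoint.

    trials: list of feature vectors (one per trial); labels: class label per trial.
    """
    correct = 0
    for i, (x, y) in enumerate(zip(trials, labels)):
        # Train class centroids on all trials except the held-out one
        centroids = {}
        for c in set(labels):
            train = [t for j, (t, l) in enumerate(zip(trials, labels)) if l == c and j != i]
            centroids[c] = [sum(col) / len(train) for col in zip(*train)]
        # Predict the class whose centroid is nearest (squared Euclidean distance)
        pred = min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))
        correct += pred == y
    return correct / len(trials)

random.seed(0)
# Toy "MEG" data: 2 conditions x 10 trials, 4 sensors; condition B shifted by +1
make = lambda mu: [[random.gauss(mu, 0.5) for _ in range(4)] for _ in range(10)]
trials = make(0.0) + make(1.0)
labels = ["A"] * 10 + ["B"] * 10
print(decode_timepoint(trials, labels))  # high accuracy when the classes separate
```

Running this over every timepoint of an epoch yields the accuracy-over-time curves typical of time series decoding studies.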

  15. Cutting thread at flexible endoscopy.

    PubMed

    Gong, F; Swain, P; Kadirkamanathan, S; Hepworth, C; Laufer, J; Shelton, J; Mills, T

    1996-12-01

    New thread-cutting techniques were developed for use at flexible endoscopy. A guillotine was designed to follow and cut thread at the endoscope tip. A new method was developed for guiding suture cutters. The efficacy of Nd:YAG laser cutting of threads was studied. Experimental and clinical experience with thread-cutting methods is presented. A 2.4 mm diameter flexible thread-cutting guillotine was constructed, featuring two lateral holes with sharp edges through which the sutures to be cut are passed. Standard suture cutters were guided by backloading thread through the cutters extracorporeally. A snare cutter was constructed to retrieve objects sewn to tissue. The efficacy and speed of the Nd:YAG laser in cutting twelve different threads were studied. The guillotine cut thread faster (p < 0.05) than standard suture cutters. Backloading thread shortened the time taken to cut thread (p < 0.001) compared with free-hand cutting. The Nd:YAG laser was ineffective in cutting uncolored threads and slower than mechanical cutters. Results of thread cutting in clinical studies using a sewing machine (n = 77 cutting episodes in 21 patients), in-vivo experiments (n = 156), and postsurgical cases (n = 15 over 15 years) are presented. New thread-cutting methods are described and their efficacy demonstrated in experimental and clinical studies.

  16. Numerical simulation and experimental investigation about internal and external flows†

    NASA Astrophysics Data System (ADS)

    Wang, Tao; Yang, Guowei; Huang, Guojun; Zhou, Liandi

    2006-06-01

    In this paper, TASCflow3D is used to solve inner and outer 3D viscous incompressible turbulent flow (Re = 5.6×10⁶) around an axisymmetric body with a duct. The governing equation is the RANS equation with the standard k-ε turbulence model. The discretization method is a finite volume method based on the finite element approach. In this method, the description of geometry is very flexible, and at the same time important conservative properties are retained. Multi-block and algebraic multi-grid techniques are used to accelerate convergence. Agreement between experimental results and the calculation is good. This indicates that the approach can be used to simulate complex flows such as the interaction between rotor and stator, or propulsion systems containing tip clearance and cavitation.

  17. QSAR Methods.

    PubMed

    Gini, Giuseppina

    2016-01-01

    In this chapter, we introduce the basics of computational chemistry and discuss how computational methods have been extended to some biological properties, and to toxicology in particular. For about 20 years, chemical experimentation has increasingly been replaced by modeling and virtual experimentation, drawing on a large core of mathematics, chemistry, physics, and algorithms. We then see how animal experiments, aimed at providing a standardized result for a biological property, can be mimicked by new in silico methods. Our emphasis here is on toxicology and on predicting properties from chemical structures. Two main streams of such models are available: models that consider the whole molecular structure to predict a value, namely QSAR (Quantitative Structure-Activity Relationships), and models that find relevant substructures to predict a class, namely SAR. The term in silico discovery is applied to chemical design, to computational toxicology, and to drug discovery. We discuss how experimental practice in the biological sciences is moving more and more toward modeling and simulation. Such virtual experiments confirm hypotheses, provide data for regulation, and help in designing new chemicals.

  18. Interaction between IGFBP7 and insulin: a theoretical and experimental study

    NASA Astrophysics Data System (ADS)

    Ruan, Wenjing; Kang, Zhengzhong; Li, Youzhao; Sun, Tianyang; Wang, Lipei; Liang, Lijun; Lai, Maode; Wu, Tao

    2016-04-01

    Insulin-like growth factor binding protein 7 (IGFBP7) can bind to insulin with high affinity, which inhibits the early steps of insulin action. The lack of a known recognition mechanism impairs our understanding of insulin regulation before insulin binds to the insulin receptor. Here we combine computational simulations with experimental methods to investigate the interaction between IGFBP7 and insulin. Molecular dynamics simulations indicated that His200 and Arg198 in IGFBP7 are key residues. As verified by experimental data, the interaction remained strong in the single-mutation systems R198E and H200F but became weak in the double-mutation system R198E-H200F relative to wild-type IGFBP7. The results and methods of the present study could be adopted in future research on drug discovery through the disruption of protein-protein interactions in insulin signaling. Nevertheless, the accuracy, reproducibility, and cost of free-energy calculations are still problems that need to be addressed before computational methods can become standard binding prediction tools in discovery pipelines.

  19. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography.

    PubMed

    Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms for determining muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. The established methods were the linear envelope, the Teager-Kaiser energy operator plus linear envelope, and sample entropy; the statistical techniques were general time-series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold-standard onset and their 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations in which the prior parameter (p0) was zero. The best-performing Bayesian algorithms used p0 = 0 with a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
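
The single-changepoint idea behind the abstract's best-performing algorithms can be sketched with a simple frequentist analogue (not the paper's Bayesian implementation): scan every candidate split of the rectified signal and keep the one minimizing the total within-segment sum of squared deviations. The simulated EMG below is fabricated for illustration:

```python
import random
import statistics

def onset_changepoint(signal, min_seg=5):
    """Estimate a single onset as the split minimizing total within-segment
    sum of squared deviations of the rectified signal (a least-squares
    analogue of Bayesian changepoint detection)."""
    x = [abs(v) for v in signal]  # rectify, as in a standard EMG envelope
    best_k, best_cost = None, float("inf")
    for k in range(min_seg, len(x) - min_seg):
        left, right = x[:k], x[k:]
        ml, mr = statistics.mean(left), statistics.mean(right)
        cost = sum((v - ml) ** 2 for v in left) + sum((v - mr) ** 2 for v in right)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

random.seed(1)
# Simulated EMG: 100 samples of low-amplitude baseline noise, then 100 of activity
emg = [random.gauss(0, 0.05) for _ in range(100)] + [random.gauss(0, 0.5) for _ in range(100)]
print(onset_changepoint(emg))  # estimate close to the true onset at sample 100
```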

  20. The characteristics of bioethanol fuel made of vegetable raw materials

    NASA Astrophysics Data System (ADS)

    Muhaji; Sutjahjo, D. H.

    2018-01-01

    The aim of this research is to identify the most promising vegetable raw material for producing bioethanol fuel as an alternative energy source to gasoline. This study used an experimental method. High-concentration bioethanol was obtained through saccharification, fermentation, and multistage distillation. ASTM standards were used for testing the chemical elements (D 5501, D 1744, D 1688, D 512, D 2622, D 381) and physical properties (D 1613, D 240, D 1298-99, D 445, and D 93). The analysis showed that of the seven bioethanols studied, one, from Saccharum officinarum Linn., has physical and chemical properties close to the bioethanol standard. The others meet only some of the physical and chemical properties of standard bioethanol.

  1. Face recognition using slow feature analysis and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wang, Yuehao; Peng, Lingling; Zhe, Fuchuan

    2018-04-01

    In this paper we propose a novel face recognition approach based on slow feature analysis (SFA) in the contourlet transform domain. The method first uses the contourlet transform to decompose the face image into low-frequency and high-frequency parts, and then takes advantage of slow feature analysis for facial feature extraction. We name the new method, combining slow feature analysis and the contourlet transform, CT-SFA. Experimental results on an international standard face database demonstrate that the new face recognition method is effective and competitive.

  2. Proceedings of the Ship Production Symposium Held in Williamsburg, Virginia on November 1-4, 1993

    DTIC Science & Technology

    1993-11-01

    June 17, 1993. ‘FORAN V30, The Way to CIM from Conceptual Design in Shipbuilding,’ Senermar, Sener Sistemas Marines, S. A., Madrid, Spain. Welsh, M., J...by Crews and Hardrath (7) in the “Companion Specimen Method - Equal Deformation Equal Life Concept.” The main assumption in their method is that the... “Companion Specimen Method,” Experimental Mechanics, V. 23, pp. 313-320, 1966. ASTM E606-80, “Standard Recommended Practice for Constant-Amplitude Low Cycle

  3. Experimental progress in positronium laser physics

    NASA Astrophysics Data System (ADS)

    Cassidy, David B.

    2018-03-01

    The field of experimental positronium physics has advanced significantly in the last few decades, with new areas of research driven by the development of techniques for trapping and manipulating positrons using Surko-type buffer gas traps. Large numbers of positrons (typically ≥106) accumulated in such a device may be ejected all at once, so as to generate an intense pulse. Standard bunching techniques can produce pulses with ns (mm) temporal (spatial) beam profiles. These pulses can be converted into a dilute Ps gas in vacuum with densities on the order of 107 cm-3 which can be probed by standard ns pulsed laser systems. This allows for the efficient production of excited Ps states, including long-lived Rydberg states, which in turn facilitates numerous experimental programs, such as precision optical and microwave spectroscopy of Ps, the application of Stark deceleration methods to guide, decelerate and focus Rydberg Ps beams, and studies of the interactions of such beams with other atomic and molecular species. These methods are also applicable to antihydrogen production and spectroscopic studies of energy levels and resonances in positronium ions and molecules. A summary of recent progress in this area will be given, with the objective of providing an overview of the field as it currently exists, and a brief discussion of some future directions.

  4. Metrology of vibration measurements by laser techniques

    NASA Astrophysics Data System (ADS)

    von Martens, Hans-Jürgen

    2008-06-01

    Metrology, the art of careful measurement, has been understood as a uniform methodology for measurements in the natural sciences, covering methods for the consistent assessment of experimental data and a corpus of rules regulating application in technology and in trade and industry. The knowledge, methods and tools available for precision measurements can be exploited for measurements at any level of uncertainty in any field of science and technology. A metrological approach to the preparation, execution and evaluation (including expression of uncertainty) of measurements of translational and rotational motion quantities using laser interferometer methods and techniques will be presented. The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and upgraded ISO standards are reviewed with respect to their suitability for ensuring traceable vibration measurements and calibrations in an extended frequency range of 0.4 Hz to above 100 kHz. Using adequate vibration exciters to generate sufficient displacement or velocity amplitudes, the upper frequency limits of the laser interferometer methods specified in ISO 16063-11 for frequencies ≤ 10 kHz can be extended to 100 kHz and beyond. A comparison of different methods simultaneously used for vibration measurements at 100 kHz will be demonstrated. A statistical analysis of numerous experimental results proves the highest accuracy currently achievable in vibration measurements by specific laser methods, techniques and procedures (i.e. measurement uncertainty of 0.05% at frequencies ≤ 10 kHz, and ≤ 1% up to 100 kHz).

  5. Efficient composite broadband polarization retarders and polarization filters

    NASA Astrophysics Data System (ADS)

    Dimova, E.; Ivanov, S. S.; Popkirov, G.; Vitanov, N. V.

    2014-12-01

    A new type of broadband polarization half-wave retarder and narrowband polarization filters are described and experimentally tested. Both the retarders and the filters are designed as composite stacks of standard optical half-wave plates, each twisted at a specific angle. The theoretical background of the proposed optical devices was obtained by analogy with the method of composite pulses known from nuclear and quantum physics. We show that by combining two composite filters built from different numbers and types of waveplates, the transmission spectrum is reduced from about 700 nm to about 10 nm in width. We experimentally demonstrate that this method can be applied to different types of waveplates (broadband, zero-order, multiple-order, etc.).

  6. Detecting circular RNAs: bioinformatic and experimental challenges

    PubMed Central

    Szabo, Linda; Salzman, Julia

    2017-01-01

    The pervasive expression of circular RNAs (circRNAs) is a recently discovered feature of gene expression in highly diverged eukaryotes. Numerous algorithms that are used to detect genome-wide circRNA expression from RNA sequencing (RNA-seq) data have been developed in the past few years, but there is little overlap in their predictions and no clear gold-standard method to assess the accuracy of these algorithms. We review sources of experimental and bioinformatic biases that complicate the accurate discovery of circRNAs and discuss statistical approaches to address these biases. We conclude with a discussion of the current experimental progress on the topic. PMID:27739534

  7. Fuzzy method of recognition of high molecular substances in evidence-based biology

    NASA Astrophysics Data System (ADS)

    Olevskyi, V. I.; Smetanin, V. T.; Olevska, Yu. B.

    2017-10-01

    Nowadays, modern requirements for achieving reliable results along with high research quality put mathematical methods of analysis at the forefront. Because of this, evidence-based methods of processing experimental data have become increasingly popular in the biological sciences and medicine. Their basis is meta-analysis, a method for the quantitative generalization of a large number of randomized trials addressing the same problem, which are often contradictory and performed by different authors. It allows identifying the most important trends and quantitative indicators in the data, verifying advanced hypotheses, and discovering new effects in the population genotype. The existing methods of recognizing high-molecular substances by gel electrophoresis of proteins under denaturing conditions are based on approximate methods for comparing the contrast of electrophoregrams with a standard solution of known substances. We propose a fuzzy method for modeling experimental data to increase the accuracy and validity of findings in the detection of new proteins.

  8. Enhancing efficiency and quality of statistical estimation of immunogenicity assay cut points through standardization and automation.

    PubMed

    Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don

    2015-10-01

    Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibodies (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at the click of a button to produce reliable assay parameters in support of immunogenicity studies. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. The secret lives of experiments: methods reporting in the fMRI literature.

    PubMed

    Carp, Joshua

    2012-10-15

    Replication of research findings is critical to the progress of scientific understanding. Accordingly, most scientific journals require authors to report experimental procedures in sufficient detail for independent researchers to replicate their work. To what extent do research reports in the functional neuroimaging literature live up to this standard? The present study evaluated methods reporting and methodological choices across 241 recent fMRI articles. Many studies did not report critical methodological details with regard to experimental design, data acquisition, and analysis. Further, many studies were underpowered to detect any but the largest statistical effects. Finally, data collection and analysis methods were highly flexible across studies, with nearly as many unique analysis pipelines as there were studies in the sample. Because the rate of false positive results is thought to increase with the flexibility of experimental designs, the field of functional neuroimaging may be particularly vulnerable to false positives. In sum, the present study documented significant gaps in methods reporting among fMRI studies. Improved methodological descriptions in research reports would yield significant benefits for the field. Copyright © 2012 Elsevier Inc. All rights reserved.

  10. Predicting gaseous reaction rates of short chain chlorinated paraffins with ·OH: overcoming the difficulty in experimental determination.

    PubMed

    Li, Chao; Xie, Hong-Bin; Chen, Jingwen; Yang, Xianhai; Zhang, Yifei; Qiao, Xianliang

    2014-12-02

    Short chain chlorinated paraffins (SCCPs) are under evaluation for inclusion in the Stockholm Convention on persistent organic pollutants. However, information on their reaction rate constants with gaseous ·OH (kOH) is unavailable, limiting the evaluation of their persistence in the atmosphere. Experimental determination of kOH is constrained by the unavailability of authentic chemical standards for some SCCP congeners. In this study, we evaluated and selected density functional theory (DFT) methods to predict kOH of SCCPs by comparing the experimental kOH values of six polychlorinated alkanes (PCAs) with those calculated by the different theoretical methods. We found that the M06-2X/6-311+G(3df,2pd)//B3LYP/6-311+G(d,p) method is time-effective and can be used to predict kOH of PCAs. Moreover, based on the calculated kOH of nine SCCPs and available experimental kOH values of 22 short-chain PCAs, a quantitative structure-activity relationship (QSAR) model was developed. The molecular structural characteristics determining the ·OH reaction rate were discussed. log kOH was found to correlate negatively with the percentage of chlorine substitution (Cl%). The DFT calculation method and the QSAR model are important alternatives to conventional experimental determination of kOH for SCCPs and are promising for predicting their persistence in the atmosphere.
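
The negative correlation between log kOH and Cl% reported in the abstract amounts to fitting a line to descriptor/rate pairs. The sketch below shows the ordinary least-squares step with purely illustrative, made-up values (not data from the paper):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical congeners: chlorine percentage vs. log kOH (illustrative values only)
cl_pct = [30.0, 40.0, 50.0, 60.0, 70.0]
log_koh = [-10.8, -11.0, -11.3, -11.5, -11.8]
slope, intercept = fit_line(cl_pct, log_koh)
print(slope < 0)  # True: log kOH decreases as chlorination increases
```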

  11. An accurate computational method for the diffusion regime verification

    NASA Astrophysics Data System (ADS)

    Zhokh, Alexey A.; Strizhak, Peter E.

    2018-04-01

    The diffusion regime (sub-diffusive, standard, or super-diffusive) is defined by the order of the derivative in the corresponding transport equation. We develop an accurate computational method for the direct estimation of the diffusion regime. The method is based on estimating the derivative order using the asymptotic analytic solutions of the diffusion equation with integer-order and time-fractional derivatives. The robustness and low computational cost of the proposed method are verified using the experimental kinetics of methane and methyl alcohol transport through a catalyst pellet.
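
A common, simpler way to label a diffusion regime (not the fractional-derivative method of the abstract) is to estimate the anomalous exponent α in MSD ~ t^α from a log-log least-squares slope, on synthetic data here:

```python
import math

def diffusion_exponent(times, msd):
    """Estimate alpha in MSD ~ t**alpha from the log-log least-squares slope.
    alpha < 1: sub-diffusive, alpha ≈ 1: standard, alpha > 1: super-diffusive."""
    lx = [math.log(t) for t in times]
    ly = [math.log(m) for m in msd]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)

# Synthetic data for standard diffusion: MSD = 2*D*t with D = 0.5
times = [1.0, 2.0, 4.0, 8.0, 16.0]
msd = [2 * 0.5 * t for t in times]
alpha = diffusion_exponent(times, msd)
print(round(alpha, 3))  # → 1.0, i.e. standard diffusion
```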

  12. Development of a potentiometric EDTA method for determination of molybdenum. Use of the analysis for molybdenite concentrates

    NASA Technical Reports Server (NTRS)

    Khristova, R.; Vanmen, M.

    1986-01-01

    Based on considerations of principles and experimental data, the interference of sulfate ions in potentiometric titration of EDTA with FeCl3 was confirmed. The method of back complexometric titration of molybdenum of Nonova and Gasheva was improved by replacing hydrazine sulfate with hydrazine hydrochloride for the reduction of Mo(VI) to Mo(V). The method can be used for amounts of molybdenum from tenths of a milligram to one milligram, with a standard deviation of 0.04 mg. The specific method for determination of molybdenum in molybdenite concentrates is presented.

  13. Experimental uncertainty and drag measurements in the national transonic facility

    NASA Technical Reports Server (NTRS)

    Batill, Stephen M.

    1994-01-01

    This report documents the results of a study which was conducted in order to establish a framework for the quantitative description of the uncertainty in measurements conducted in the National Transonic Facility (NTF). The importance of uncertainty analysis in both experiment planning and reporting results has grown significantly in the past few years. Various methodologies have been proposed and the engineering community appears to be 'converging' on certain accepted practices. The practical application of these methods to the complex wind tunnel testing environment at the NASA Langley Research Center was based upon terminology and methods established in the American National Standards Institute (ANSI) and the American Society of Mechanical Engineers (ASME) standards. The report overviews this methodology.

  14. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
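
The intensity-substitution core of IHS fusion can be sketched for a single pixel. This is the simplified "fast" additive form only; the paper's method additionally FFT-filters the panchromatic image and the intensity component before substitution:

```python
def ihs_fuse_pixel(rgb, pan):
    """Fast additive IHS fusion for one multispectral pixel: substitute the
    intensity I = mean(R, G, B) with the panchromatic value by adding
    (pan - I) to every band. Simplified illustration, without FFT filtering."""
    intensity = sum(rgb) / 3.0
    delta = pan - intensity
    return [band + delta for band in rgb]

fused = ihs_fuse_pixel([90.0, 120.0, 150.0], pan=140.0)
print(fused)  # → [110.0, 140.0, 170.0]; the fused intensity now equals pan
```

Applying this per pixel injects the pan image's spatial detail while shifting all bands equally, which is exactly the source of the color distortion the abstract mentions.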

  15. Automatic Recognition of Fetal Facial Standard Plane in Ultrasound Image via Fisher Vector.

    PubMed

    Lei, Baiying; Tan, Ee-Leng; Chen, Siping; Zhuo, Liu; Li, Shengli; Ni, Dong; Wang, Tianfu

    2015-01-01

    Acquisition of the standard plane is the prerequisite of biometric measurement and diagnosis during the ultrasound (US) examination. In this paper, a new algorithm is developed for the automatic recognition of the fetal facial standard planes (FFSPs) such as the axial, coronal, and sagittal planes. Specifically, densely sampled root scale invariant feature transform (RootSIFT) features are extracted and then encoded by Fisher vector (FV). The Fisher network with multi-layer design is also developed to extract spatial information to boost the classification performance. Finally, automatic recognition of the FFSPs is implemented by support vector machine (SVM) classifier based on the stochastic dual coordinate ascent (SDCA) algorithm. Experimental results using our dataset demonstrate that the proposed method achieves an accuracy of 93.27% and a mean average precision (mAP) of 99.19% in recognizing different FFSPs. Furthermore, the comparative analyses reveal the superiority of the proposed method based on FV over the traditional methods.

  16. Recommendations for reference method for haemoglobinometry in human blood (ICSH standard 1986) and specifications for international haemiglobincyanide reference preparation (3rd edition). International Committee for Standardization in Haematology; Expert Panel on Haemoglobinometry.

    PubMed

    1987-01-01

    Scientific symposia on haemoglobinometry were held at the 9th Congress of the European Society of Haematology, Lisbon, 1963 (ESH 1964) and the 10th Congress of the International Society of Haematology (ISH), Stockholm, 1964 (ISH 1965). The International Committee for Standardization in Haematology (ICSH) made recommendations endorsed by the General Assembly of ICSH in Sydney on 23 August 1966 (ICSH 1967), for a reference method for haemoglobinometry and for the manufacture and distribution of an international reference preparation. Further symposia were held at the 12th Congress of the ISH, New York, 1968 (Astaldi, Sirtori & Vanzetti 1979) and at the 13th Congress of ISH, Munich, 1970 (Izak & Lewis 1972). The recommendations were reissued in 1978 (ISH 1978). On the basis of continuing experimental studies, the reference method and the specifications for the international reference preparation have been modified. The revised recommendations are described in this document.

  17. A Novel Application for 222Rn Emanation Standards

    PubMed Central

    Laureano-Perez, L.; Collé, R.; Jacobson, D.R.; Fitzgerald, R.; Khan, N.S.; Dmochowski, I.J.

    2013-01-01

    In collaboration with the University of Pennsylvania, a 222Rn emanation source was used for the determination of the binding affinity of radon to a cryptophane molecular host. This source was similar to a 222Rn emanation standard that was developed and disseminated by the National Institute of Standards and Technology (NIST). The novel experimental design involved performing the reactions at femtomole levels, developing exacting gravimetric sampling methods, and making precise 222Rn assays by liquid scintillation counting. A cryptophane-radon association constant was determined, KA = (49,000 ± 12,000) L·mol⁻¹ at 293 K, which was the first measurement of radon binding to a molecular host. PMID:22455833

  18. A new experimental method to determine the sorption isotherm of a liquid in a porous medium.

    PubMed

    Ouoba, Samuel; Cherblanc, Fabien; Cousin, Bruno; Bénet, Jean-Claude

    2010-08-01

    Sorption from the vapor phase is an important factor controlling the transport of volatile organic compounds (VOCs) in the vadose zone. Therefore, an accurate description of sorption behavior is essential to predict the ultimate fate of contaminants. Several measurement techniques are available in the case of water; however, when dealing with VOCs, the determination of sorption characteristics generally relies on gas chromatography. To avoid some drawbacks associated with this technology, we propose a new method to determine the sorption isotherm of any liquid compound adsorbed in a soil. The method is based on standard, low-cost transducers (gas pressure, temperature), leading to a simple and transportable experimental device. A numerical estimation underlines its good accuracy, and the technique is validated on two examples. Finally, the method is applied to determine the sorption isotherms of three liquid compounds (water, heptane, and trichloroethylene) in a clayey soil.

  19. Prediction of Enzyme Mutant Activity Using Computational Mutagenesis and Incremental Transduction

    PubMed Central

    Basit, Nada; Wechsler, Harry

    2011-01-01

    Wet laboratory mutagenesis to determine enzyme activity changes is expensive and time consuming. This paper expands on standard one-shot learning by proposing an incremental transductive method (T2bRF) for the prediction of enzyme mutant activity during mutagenesis using Delaunay tessellation and 4-body statistical potentials for representation. Incremental learning is in tune with both eScience and actual experimentation, as it accounts for cumulative annotation effects of enzyme mutant activity over time. The experimental results reported, using cross-validation, show that overall the incremental transductive method proposed, using random forest as base classifier, yields better results compared to one-shot learning methods. T2bRF is shown to yield 90% on T4 and LAC (and 86% on HIV-1). This is significantly better than state-of-the-art competing methods, whose performance yield is at 80% or less using the same datasets. PMID:22007208

  20. Experiment Analysis and Modelling of Compaction Behaviour of Ag60Cu30Sn10 Mixed Metal Powders

    NASA Astrophysics Data System (ADS)

    Zhou, Mengcheng; Huang, Shangyu; Liu, Wei; Lei, Yu; Yan, Shiwei

    2018-03-01

    A novel process method combining powder compaction and sintering was employed to fabricate thin sheets of cadmium-free silver-based filler metals, and the compaction densification behaviour of Ag60Cu30Sn10 mixed metal powders was investigated experimentally. Based on the equivalent density method, the density-dependent Drucker-Prager Cap (DPC) model was introduced to model the powder compaction behaviour. Various experimental procedures were completed to determine the model parameters. The friction coefficients in lubricated and unlubricated dies were experimentally determined. The determined material parameters were validated by experiments and by numerical simulation of the powder compaction process using a user subroutine (USDFLD) in ABAQUS/Standard. The good agreement between the simulated and experimental results indicates that the determined model parameters are able to describe the compaction behaviour of the multicomponent mixed metal powders, and can be further used in process optimization simulations.

  1. ENFIN--A European network for integrative systems biology.

    PubMed

    Kahlem, Pascal; Clegg, Andrew; Reisinger, Florian; Xenarios, Ioannis; Hermjakob, Henning; Orengo, Christine; Birney, Ewan

    2009-11-01

    Integration of biological data of various types and the development of adapted bioinformatics tools represent critical objectives to enable research at the systems level. The European Network of Excellence ENFIN is engaged in developing an adapted infrastructure to connect databases and platforms, to enable both the generation of new bioinformatics tools and the experimental validation of computational predictions. With the aim of bridging the gap between standard wet laboratories and bioinformatics, the ENFIN Network runs integrative research projects to bring the latest computational techniques to bear directly on systems biology questions in the wet laboratory environment. The Network maintains close internal collaboration between experimental and computational research, enabling a permanent cycle of experimental validation and improvement of computational prediction methods. The computational work includes the development of a database infrastructure (EnCORE), bioinformatics analysis methods, and FuncNet, a novel platform for protein function analysis.

  2. Two Week Oral Dose Range-Finding Toxicity Study of WR242511 in Rats

    DTIC Science & Technology

    1993-07-08

    Abstract not available. The record contains OCR fragments of the report's front matter: a reference to the IFCC method for aspartate aminotransferase (Express Clinical Chemistry System, IFCC Committee on Standards, Part 2, Amsterdam, Elsevier Scientific...), the procurement instrument identification number DAM017-92-C-2OO1 (Fort Detrick, Frederick, MD 21702-5009), and table-of-contents entries (Introduction; Materials and Methods: Test Article, Animals, Experimental Design).

  3. A method for the estimate of the wall diffusion for non-axisymmetric fields using rotating external fields

    NASA Astrophysics Data System (ADS)

    Frassinetti, L.; Olofsson, K. E. J.; Fridström, R.; Setiadi, A. C.; Brunsell, P. R.; Volpe, F. A.; Drake, J.

    2013-08-01

    A new method for estimating the wall diffusion time of non-axisymmetric fields is developed. The method, based on rotating external fields and on measurement of the wall frequency response, is developed and tested in EXTRAP T2R. The method allows the experimental estimation of the wall diffusion time for each Fourier harmonic and of the wall diffusion toroidal asymmetries. The method intrinsically accounts for the effects of three-dimensional structures and of the shell gaps. Far from the gaps, experimental results are in good agreement with the diffusion time estimated with a simple cylindrical model that assumes a homogeneous wall. The method is also applied with non-standard configurations of the coil array, in order to mimic tokamak-relevant settings with partial wall coverage and active coils of large toroidal extent. The comparison with the full-coverage results shows good agreement if the effects of the relevant sidebands are considered.

  4. Ab Initio and Improved Empirical Potentials for the Calculation of the Anharmonic Vibrational States and Intramolecular Mode Coupling of N-Methylacetamide

    NASA Technical Reports Server (NTRS)

    Gregurick, Susan K.; Chaban, Galina M.; Gerber, R. Benny; Kwak, Dochou (Technical Monitor)

    2001-01-01

    The second-order Moller-Plesset ab initio electronic structure method is used to compute points for the anharmonic mode-coupled potential energy surface of N-methylacetamide (NMA) in the trans(sub ct) configuration, including all degrees of freedom. The vibrational states and the spectroscopy are directly computed from this potential surface using the Correlation Corrected Vibrational Self-Consistent Field (CC-VSCF) method. The results are compared with CC-VSCF calculations using both standard and improved empirical Amber-like force fields, and with available low-temperature experimental matrix data. Analysis of our calculated spectroscopic results shows that: (1) the excellent agreement between the ab initio CC-VSCF calculated frequencies and the experimental data suggests that the computed anharmonic potentials for N-methylacetamide are of very high quality; (2) for most transitions, the vibrational frequencies obtained from the ab initio CC-VSCF method are superior to those obtained using the empirical CC-VSCF methods when compared with experimental data, although the improved empirical force field yields better agreement with the experimental frequencies than a standard AMBER-type force field; (3) the empirical force field in particular overestimates anharmonic couplings for the amide-II mode, the methyl asymmetric bending modes, the out-of-plane methyl bending modes, and the methyl distortions; (4) the disagreement between the ab initio and empirical anharmonic couplings is greater than the disagreement between the frequencies, so the anharmonic part of the empirical potential seems to be less accurate than the harmonic contribution; and (5) both the empirical and ab initio CC-VSCF calculations predict negligible anharmonic coupling between the amide-I and the other internal modes, implying that the intramolecular energy flow between the amide-I mode and the other internal modes may be smaller than anticipated. These results may have important implications for the anharmonic force fields of peptides, for which N-methylacetamide is a model.

  5. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    PubMed

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
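    The core idea of the abstract above, that an internal standard dissolved in the mobile phase experiences the same signal drift as the analyte, so ratioing removes the drift, can be illustrated numerically. This is a minimal sketch with invented numbers, not the paper's multiplicative effects model.

```python
import numpy as np

# Sketch (assumed data) of why an internal standard in the mobile phase helps:
# run-to-run sensitivity drift multiplies both signals, so their ratio is
# drift-free. This illustrates the principle, not the paper's full model.
drift = np.array([1.00, 0.85, 1.20, 0.70, 1.10])   # run-to-run sensitivity
true_conc = np.array([2.0, 2.0, 2.0, 2.0, 2.0])    # same sample each run
analyte = 100.0 * true_conc * drift                # raw analyte signals
internal = 50.0 * drift                            # internal-standard signals

raw_rsd = analyte.std() / analyte.mean()           # large: drift dominates
ratio = analyte / internal                         # constant across runs
ratio_rsd = ratio.std() / ratio.mean()             # essentially zero
print(raw_rsd > 0.1, ratio_rsd < 1e-12)  # -> True True
```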

  6. [Research strategies in standard decoction of medicinal slices].

    PubMed

    Chen, Shi-Lin; Liu, An; Li, Qi; Toru, Sugita; Zhu, Guang-Wei; Sun, Yi; Dai, Yun-Tao; Zhang, Jun; Zhang, Tie-Jun; Takehisa, Tomoda; Liu, Chang-Xiao

    2016-04-01

    This paper discusses the state of research on the standard decoction of medicinal slices in China and abroad. Drawing on experimental data, the authors propose that a standard decoction of medicinal slices be prepared from a single herb by a standardized process, guided by the theory of traditional Chinese medicine, grounded in clinical practice, and informed by modern extraction methods. The authors also propose principles for establishing specifications for process parameters and quality standards, on the basis of pharmacodynamic material and biological reference standards. As a standard material and standard system, the standard decoction of medicinal slices can provide a benchmark for clinical medication and standardize the use of new types of medicinal slices, especially dispensing granules, which are widely used clinically. It can ensure drug accuracy and dose consistency, and help resolve current difficulties in supervision. Moreover, the study of standard decoctions of medicinal slices will provide a useful reference for research on dispensing granules, on standard decoctions of traditional Chinese medicine prescriptions, and on standard decoctions of couplet medicines. Copyright© by the Chinese Pharmaceutical Association.

  7. Extension of local front reconstruction method with controlled coalescence model

    NASA Astrophysics Data System (ADS)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed grid methods, due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collisions of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM predicts the droplet collisions better than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models predicts the collision dynamics much better than the traditional methods.

  8. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

    Binocular stereoscopic vision can be used for space-based close-range observation of space targets. To solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online self-referencing calibration method for a binocular stereo measuring camera is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, imaged on the same focal plane as the target; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the system position or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the image plane while the physical position of the reference object does not, so the camera's external parameters can be re-calibrated from the visual relationship to the standard reference. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in pitch. The method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  9. [Study on commercial specification of atractylodes based on Delphi method].

    PubMed

    Wang, Hao; Chen, Li-Xiao; Huang, Lu-Qi; Zhang, Tian-Tian; Li, Ying; Zheng, Yu-Guang

    2016-03-01

    This research adopts the Delphi method to evaluate the traditional traits of atractylodes and their rank correlation with commodity grade. Mathematical-statistical analysis of the relationship between traditional identification indicators and atractylodes commodity grade found that the main characters affecting atractylodes commodity specifications and grades were the oil points of the cross-section, the color of the cross-section, the color of the surface, the grain and texture of the cross-section, and spoilage. The study points out that the original atractylodes specification in the "Commodity Specification Standards for Seventy-Six Kinds of Medicinal Materials" no longer conforms to the actual market situation, and that corresponding new specifications and grades for atractylodes medicinal products need to be formulated. Combining the Delphi-method results with the actual market situation, this study proposes a new draft of atractylodes commodity specifications and grades to serve as the new standard, and provides a reference and theoretical basis for it. Copyright© by the Chinese Pharmaceutical Association.

  10. Standard Reference Line Combined with One-Point Calibration-Free Laser-Induced Breakdown Spectroscopy (CF-LIBS) to Quantitatively Analyze Stainless and Heat Resistant Steel.

    PubMed

    Fu, Hongbo; Wang, Huadong; Jia, Junwei; Ni, Zhibo; Dong, Fengzhong

    2018-01-01

    Due to the influence of major elements' self-absorption, the scarcity of observable spectral lines of trace elements, and the relative efficiency correction of the experimental system, accurate quantitative analysis with calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is in fact not easy. In order to overcome these difficulties, the standard reference line (SRL) method combined with one-point calibration (OPC) is used to analyze six elements in three stainless-steel and five heat-resistant steel samples. The Stark broadening and the Saha-Boltzmann plot of Fe are used to calculate the electron density and the plasma temperature, respectively. In the present work, we tested the original SRL method, the SRL with the OPC method, and the intercept with the OPC method. The final calculation results show that the latter two methods can effectively improve the overall accuracy of quantitative analysis and the detection limits of trace elements.
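    The plasma-temperature step mentioned above is conventionally done with a Boltzmann-type plot: for emission lines of one species, ln(I·λ/(g·A)) falls linearly with the upper-level energy, with slope -1/(k_B·T). The following is a minimal sketch with synthetic line data (the energies, weights, and rates below are invented, not the paper's Fe lines).

```python
import numpy as np

# Boltzmann-plot temperature fit, as used in CF-LIBS: fit
# ln(I*lambda/(g*A)) vs upper-level energy E_k; slope = -1/(k_B*T).
K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def boltzmann_plot_temperature(intensity, wavelength_nm, g, A, E_upper_eV):
    """Fit ln(I*lambda/(g*A)) vs E_k and return the plasma temperature in K."""
    y = np.log(intensity * wavelength_nm / (g * A))
    slope, _ = np.polyfit(E_upper_eV, y, 1)
    return -1.0 / (K_B_EV * slope)

# Synthetic check: fabricate line intensities for a 10 000 K plasma
# (all atomic data below are invented for illustration).
T_true = 10_000.0
E = np.array([2.2, 3.1, 3.9, 4.6, 5.4])             # upper energies, eV
g = np.array([5.0, 7.0, 9.0, 7.0, 5.0])             # statistical weights
A = np.array([1.0e7, 2.5e7, 4.0e6, 8.0e6, 1.2e7])   # transition rates, 1/s
lam = np.array([404.6, 438.4, 527.0, 532.8, 561.5]) # wavelengths, nm
I = g * A / lam * np.exp(-E / (K_B_EV * T_true))

print(round(boltzmann_plot_temperature(I, lam, g, A, E)))  # -> 10000
```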

  11. Crystalline cellulose elastic modulus predicted by atomistic models of uniform deformation and nanoscale indentation

    Treesearch

    Xiawa Wu; Robert J. Moon; Ashlie Martini

    2013-01-01

    The elastic modulus of cellulose Iβ in the axial and transverse directions was obtained from atomistic simulations using both the standard uniform deformation approach and a complementary approach based on nanoscale indentation. This allowed comparisons between the methods and closer connectivity to experimental measurement techniques. A reactive...

  12. Experimental method to account for structural compliance in nanoindentation measurements

    Treesearch

    Joseph E. Jakes; Charles R. Frihart; James F. Beecher; Robert J. Moon; D. S. Stone

    2008-01-01

    The standard Oliver–Pharr nanoindentation analysis tacitly assumes that the specimen is structurally rigid and that it is both semi-infinite and homogeneous. Many specimens violate these assumptions. We show that when the specimen flexes or possesses heterogeneities, such as free edges or interfaces between regions of different properties, artifacts arise...

  13. Evaluating the Performance of Repeated Measures Approaches in Replicating Experimental Benchmark Results

    ERIC Educational Resources Information Center

    McConeghy, Kevin; Wing, Coady; Wong, Vivian C.

    2015-01-01

    Randomized experiments have long been established as the gold standard for addressing causal questions. However, experiments are not always feasible or desired, so observational methods are also needed. When multiple observations on the same variable are available, a repeated measures design may be used to assess whether a treatment administered…

  14. Blending and nudging in fluid dynamics: some simple observations

    NASA Astrophysics Data System (ADS)

    Germano, M.

    2017-10-01

    Blending and nudging methods have recently been applied in fluid dynamics, particularly for the assimilation of experimental data into computations. In this paper we formally derive the differential equation associated with blending and compare it to the standard nudging equation. Some simple considerations on these techniques and their mutual relations are presented.
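    The standard nudging idea referenced above adds a relaxation term α(u_obs - u) to the model equation so the simulated state is pulled toward observed data. A minimal sketch on a scalar decay model (the model, rates, and observation value are assumed for illustration, not taken from the paper):

```python
# Minimal nudging sketch: integrate du/dt = -k*u + alpha*(u_obs - u)
# with forward Euler. alpha = 0 recovers the free-running model.
def nudged_decay(u0, u_obs, alpha, k=1.0, dt=1e-3, t_end=10.0):
    u = u0
    for _ in range(int(t_end / dt)):
        u += dt * (-k * u + alpha * (u_obs - u))
    return u

# Without nudging the state decays to 0; with strong nudging it settles near
# the observation-weighted fixed point alpha*u_obs/(k + alpha).
print(round(nudged_decay(0.0, 2.0, alpha=0.0), 3))   # -> 0.0
print(round(nudged_decay(0.0, 2.0, alpha=50.0), 3))  # -> 1.961 (= 100/51)
```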

  15. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    ERIC Educational Resources Information Center

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  16. Comparison of hydraulics and particle removal efficiencies in a mixed cell raceway and burrows pond rearing system

    USDA-ARS?s Scientific Manuscript database

    We compared the hydrodynamics of replicate experimental mixed cell and replicate standard Burrows pond rearing systems at the Dworshak National Fish Hatchery, ID, in an effort to identify methods for improved solids removal. We measured and compared the hydraulic residence time, particle removal eff...

  17. Subjective Quality Assessment of Underwater Video for Scientific Applications

    PubMed Central

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-01-01

    Underwater video services could be a key application in improving scientific knowledge of the vast oceanic resources of our planet. However, the limited capacity of currently available technology for underwater networks (UWSNs) raises the question of the feasibility of these services: when transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods, and addresses quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects of the quality assessment test were ocean scientists. Video sequences were recorded during actual exploration expeditions and processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that videos are considered useful for scientific purposes even at very low bitrates. PMID:26694400

  18. Subjective Quality Assessment of Underwater Video for Scientific Applications.

    PubMed

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Díaz-del-Río, Víctor; Otero, Pablo

    2015-12-15

    Underwater video services could be a key application in improving scientific knowledge of the vast oceanic resources of our planet. However, the limited capacity of currently available technology for underwater networks (UWSNs) raises the question of the feasibility of these services: when transmitting video, the main constraints are the limited bandwidth and the high propagation delays. At the same time, service performance depends on the needs of the target group. This paper considers the problem of estimating the Mean Opinion Score (a standard quality measure) in UWSNs based on objective methods, and addresses quality assessment in potential underwater video services from a subjective point of view. The experimental design and the results of a test planned according to standardized psychometric methods are presented. The subjects of the quality assessment test were ocean scientists. Video sequences were recorded during actual exploration expeditions and processed to simulate conditions similar to those that might be found in UWSNs. Our experimental results show that videos are considered useful for scientific purposes even at very low bitrates.

  19. An innovative method for coordinate measuring machine one-dimensional self-calibration with simplified experimental process.

    PubMed

    Fang, Cheng; Butler, David Lee

    2013-05-01

    In this paper, an innovative method for CMM (Coordinate Measuring Machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured from the experimental data array. The section lengths of the artefact are measured at arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set falls one equation short; this can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the uncertainty of the measurement can be reduced to 50%.
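    The final step described above, turning pointwise errors into a continuous compensation curve by spline interpolation, can be sketched as follows. The positions and error values are invented for illustration, and SciPy's `CubicSpline` is assumed as the interpolator; the paper does not specify an implementation.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical sketch: given single-point scale errors determined at a few
# arranged positions along one axis, build an error-compensation curve by
# spline interpolation and correct raw readings with it.
positions = np.array([0.0, 100.0, 200.0, 300.0, 400.0])  # mm (assumed)
errors = np.array([0.0, 1.2, 0.8, -0.5, 0.3])            # um (assumed)

compensation = CubicSpline(positions, errors)

def corrected_reading(raw_position_mm):
    """Subtract the interpolated scale error (in um) from a raw reading."""
    return float(raw_position_mm - compensation(raw_position_mm) * 1e-3)

print(round(corrected_reading(100.0), 4))  # -> 99.9988
```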

  20. Using dust as probes to determine sheath extent and structure

    NASA Astrophysics Data System (ADS)

    Douglass, Angela; Land, V.; Qiao, K.; Matthews, L.; Hyde, T.

    2016-08-01

    Two in situ experimental methods are presented in which dust particles are used to determine the extent of the sheath and gain information about the time-averaged electric force profile within a radio frequency (RF) plasma sheath. These methods are advantageous because they are not only simple and quick to carry out, but can also be performed using standard dusty plasma experimental equipment. In the first method, dust particles are tracked as they fall through the plasma towards the lower electrode. These trajectories are then used to determine the electric force on the particle as a function of height, as well as the extent of the sheath. In the second method, the dust particle levitation height is measured across a wide range of RF voltages. Similarities were observed between the two experiments, but in order to understand the underlying physics behind these observations, the same conditions were replicated using a self-consistent fluid model. Through comparison of the fluid model and experimental results, it is shown that particles whose levitation height is independent of RF voltage mark the sheath edge, i.e., the boundary between the quasineutral bulk plasma and the sheath. Therefore, both of these simple, inexpensive, yet effective methods can be applied across a wide range of experimental parameters in any ground-based RF plasma chamber to gain useful information regarding the sheath, which is needed for the interpretation of dusty plasma experiments.
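    The levitation measurements above rest on a simple force balance: a grain levitates where the sheath electric force on its charge cancels gravity. A toy sketch, assuming a linear sheath field and invented grain parameters (this is not the paper's self-consistent fluid model):

```python
# Toy force-balance sketch: solve Q*E(z) = m*g for the levitation height z,
# assuming a linear sheath field E(z) = E0*(1 - z/L) above the electrode.
# All parameter values below are assumed for illustration.
G = 9.81        # m/s^2
E0 = 5.0e3      # V/m, field at the electrode (assumed)
L = 0.01        # m, sheath width (assumed)
Q = 1.6e-15     # C, grain charge, ~10^4 elementary charges (assumed)
m = 5.0e-13     # kg, grain mass (assumed)

def levitation_height(q, mass):
    """Height z (from the electrode) where q*E0*(1 - z/L) = mass*G."""
    return L * (1.0 - mass * G / (q * E0))

z = levitation_height(Q, m)
print(0.0 < z < L)  # grain levitates inside the sheath -> True
```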

  1. Pre-capture multiplexing improves efficiency and cost-effectiveness of targeted genomic enrichment.

    PubMed

    Shearer, A Eliot; Hildebrand, Michael S; Ravi, Harini; Joshi, Swati; Guiffre, Angelica C; Novak, Barbara; Happe, Scott; LeProust, Emily M; Smith, Richard J H

    2012-11-14

    Targeted genomic enrichment (TGE) is a widely used method for isolating and enriching specific genomic regions prior to massively parallel sequencing. To make effective use of sequencer output, barcoding and sample pooling (multiplexing) after TGE and prior to sequencing (post-capture multiplexing) has become routine. While previous reports have indicated that multiplexing prior to capture (pre-capture multiplexing) is feasible, no thorough examination of the effect of this method has been completed on a large number of samples. Here we compare standard post-capture TGE to two levels of pre-capture multiplexing: 12 or 16 samples per pool. We evaluated these methods using standard TGE metrics and determined the ability to identify several classes of genetic mutations in three sets of 96 samples, including 48 controls. Our overall goal was to maximize cost reduction and minimize experimental time while maintaining a high percentage of reads on target and a high depth of coverage at thresholds required for variant detection. We adapted the standard post-capture TGE method for pre-capture TGE with several protocol modifications, including redesign of blocking oligonucleotides and optimization of enzymatic and amplification steps. Pre-capture multiplexing reduced costs for TGE by at least 38% and significantly reduced hands-on time during the TGE protocol. We found that pre-capture multiplexing reduced capture efficiency by 23 or 31% for pre-capture pools of 12 and 16, respectively. However, efficiency losses at this step can be compensated for by reducing the number of simultaneously sequenced samples. Pre-capture multiplexing and post-capture TGE performed similarly with respect to variant detection of positive control mutations. In addition, we detected no instances of sample switching due to aberrant barcode identification.
Pre-capture multiplexing improves efficiency of TGE experiments with respect to hands-on time and reagent use compared to standard post-capture TGE. A decrease in capture efficiency is observed when using pre-capture multiplexing; however, it does not negatively impact variant detection and can be accommodated by the experimental design.

  2. [Contents of calcium, phosphorus and aluminum in central nervous system, liver and kidney of rabbits with experimental atherosclerosis--scavenger effects of vinpocetine on the deposition of elements].

    PubMed

    Yasui, M; Yano, I; Ota, K; Oshima, A

    1990-04-01

    This study was designed to clarify the contents of calcium (Ca), phosphorus (P) and aluminum (Al) in the central nervous system (CNS), liver and kidney of rabbits with atherosclerosis experimentally induced by a cholesterol-rich diet, and to investigate the scavenger effect of 14-ethoxycarbonyl-(3 alpha, 16 alpha-ethyl)-14,15-eburnamenine (vinpocetine) on the deposition of these elements in the CNS and soft tissues of experimental atherosclerosis. Sixteen male rabbits were divided into 4 groups, fed respectively a standard diet (Group A), a standard diet containing 1.5% cholesterol (Group B), the cholesterol diet plus oral administration of 3 mg/kg/day vinpocetine (Group C), or the cholesterol diet plus 10 mg/kg/day vinpocetine (Group D). After 3 months' feeding, experimental atherosclerosis was produced in the rabbits of Groups B, C and D with a modified method of Kritchevsky et al. Blood was collected by cardiocentesis under ether anesthesia, and the rabbits were then sacrificed to remove the CNS and other tissues. The blood was left to stand for 1 hour at room temperature and separated by centrifugation at 3000 rpm for 10 min to determine serum total cholesterol, phospholipids, HDL-cholesterol, lipid peroxide, NEFA and calcium levels. Ca, P and Al contents in the frontal lobe, pons, cerebellum, spinal cord, liver and kidney were determined by neutron activation analysis. Ca contents of the CNS, liver and kidney in Group B were significantly higher than those of Group A (p less than 0.01), and significantly lower in Groups C and D than in Group B (p less than 0.01). (ABSTRACT TRUNCATED AT 250 WORDS)

  3. An experimental and theoretical method for determination of standard electrode potential for the redox couple diphenyl sulfone/diphenyl sulfide

    NASA Astrophysics Data System (ADS)

    Song, Y. Z.; Wei, K. X.; Lv, J. S.

    2013-12-01

    DFT calculations were performed for diphenyl sulfide and diphenyl sulfone. The electrochemistry of diphenyl sulfide on a gold electrode was investigated by cyclic voltammetry, and the results show that the standard electrode potential for the redox couple diphenyl sulfone/diphenyl sulfide is 1.058 V, consistent with the value of 1.057 V calculated at the B3LYP/6-31++G(d,p)-IEFPCM level. Frontier orbital theory and the Mulliken charges of the molecules explain well the oxidation of diphenyl sulfide in oxidative desulfurization. According to equilibrium theory, the experimental equilibrium constant for the oxidative desulfurization with H2O2 is 1.17 × 10^48, consistent with the theoretical value of 2.18 × 10^48 calculated at the same level.
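    Equilibrium constants of this magnitude follow from standard electrode potentials via ln K = nFE_cell/(RT). A hedged sketch of that arithmetic: the sulfone/sulfide potential of 1.058 V is taken from the abstract, but the H2O2/H2O potential (1.776 V) and the electron count (n = 4) are textbook assumptions, not values stated by the paper, so the result only illustrates the order of magnitude.

```python
import math

# Sketch of ln K = n*F*E_cell/(R*T) at 298.15 K. E_cell combines the
# H2O2/H2O couple (1.776 V, standard-table value, assumed) with the
# sulfone/sulfide couple (1.058 V, from the abstract); n = 4 is assumed.
F = 96485.33   # Faraday constant, C/mol
R = 8.314462   # gas constant, J/(mol*K)
T = 298.15     # K

def equilibrium_constant(e_cell_volts, n_electrons):
    return math.exp(n_electrons * F * e_cell_volts / (R * T))

K = equilibrium_constant(1.776 - 1.058, 4)
print(f"{K:.2e}")  # ~3.5e+48, the same order as the paper's 1.17e48
```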

  4. Noise thermometry with two weakly coupled Bose-Einstein condensates.

    PubMed

    Gati, Rudolf; Hemmerling, Börge; Fölling, Jonas; Albiez, Michael; Oberthaler, Markus K

    2006-04-07

    Here we report on the experimental investigation of thermally induced fluctuations of the relative phase between two Bose-Einstein condensates which are coupled via tunneling. The experimental control over the coupling strength and the temperature of the thermal background allows for the quantitative analysis of the phase fluctuations. Furthermore, we demonstrate the application of these measurements for thermometry in a regime where standard methods fail. With this we confirm that the heat capacity of an ideal Bose gas deviates from that of a classical gas as predicted by the third law of thermodynamics.

  5. Baseline measurement of the noise generated by a short-to-medium range jet transport flying standard ILS approaches and level flyovers

    NASA Technical Reports Server (NTRS)

    Hastings, E. C., Jr.; Shanks, R. E.; Mueller, A. W.

    1975-01-01

    The results of baseline noise flight tests are presented. Data are given for a point 1.85 kilometers (1.0 nautical mile) from the runway threshold, and experimental results of level flyover noise at altitudes of 122 meters (400 feet) and 610 meters (2,000 feet) are also shown for several different power levels. The experimental data are compared with data from other sources and reasonable agreement is noted. A description of the test technique, instrumentation, and data analysis methods is included.

  6. Level Energies, Oscillator Strengths and Lifetimes for Transitions in Pb IV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colon, C.; Alonso-Medina, A.; Zanon, A.

    2008-10-22

    Oscillator strengths for several lines of astrophysical interest arising from some configurations of Pb IV, and radiative lifetimes of some levels, have been calculated. These values were obtained in intermediate coupling (IC) using ab initio relativistic Hartree-Fock calculations. For the IC calculations we use the standard method of least-squares fitting of experimental energy levels by means of the computer codes of Cowan. The transition probabilities and oscillator strengths obtained, although in general agreement with the scarce experimental data, do present some noticeable discrepancies, which are studied in the text.

  7. Experimental demonstration of distributed feedback semiconductor lasers based on reconstruction-equivalent-chirp technology.

    PubMed

    Li, Jingsi; Wang, Huan; Chen, Xiangfei; Yin, Zuowei; Shi, Yuechun; Lu, Yanqing; Dai, Yitang; Zhu, Hongliang

    2009-03-30

    In this paper we report, to the best of our knowledge, the first experimental realization of distributed feedback (DFB) semiconductor lasers based on reconstruction-equivalent-chirp (REC) technology. Lasers with different lasing wavelengths are achieved simultaneously on one chip, which shows the potential of REC technology, combined with photonic integrated circuit (PIC) technology, as a possible method for monolithic integration: its fabrication capability is comparable to that of electron-beam technology, while its cost and fabrication time are almost the same as those of standard holographic technology.

  8. Low pressure gas flow analysis through an effusive inlet using mass spectrometry

    NASA Technical Reports Server (NTRS)

    Brown, David R.; Brown, Kenneth G.

    1988-01-01

    A mass spectrometric method for analyzing flow past and through an effusive inlet designed for use on the tethered satellite and other entering vehicles is discussed. Source stream concentrations of species in a gaseous mixture are determined using a calibration of measured mass spectral intensities versus source stream pressure for standard gas mixtures and pure gases. Concentrations are shown to be accurate within experimental error. Theoretical explanations for observed mass discrimination effects as they relate to the various flow situations in the effusive inlet and the experimental apparatus are discussed.
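
A minimal sketch of the calibration idea described above, with invented sensitivities and intensities (the real inlet and instrument response are more involved): pure-gas standards fix a per-species sensitivity (intensity per unit source pressure), which then converts mixture intensities into source-stream mole fractions.

```python
import numpy as np

# Hypothetical illustration: all pressures, intensities and species are
# made up; only the calibrate-then-invert structure mirrors the method.

def sensitivity(pressures, intensities):
    """Least-squares slope of intensity vs. source pressure, fit through the origin."""
    p = np.asarray(pressures, dtype=float)
    i = np.asarray(intensities, dtype=float)
    return np.sum(p * i) / np.sum(p * p)

# Pure-gas calibration runs (pressure in Pa, intensity in arbitrary units)
k_n2 = sensitivity([10, 20, 40], [50, 100, 200])   # slope = 5.0
k_o2 = sensitivity([10, 20, 40], [30, 60, 120])    # slope = 3.0

# Mixture measurement: invert intensities to partial pressures, then mole fractions
i_n2, i_o2 = 400.0, 60.0
p_n2, p_o2 = i_n2 / k_n2, i_o2 / k_o2
total = p_n2 + p_o2
x_n2, x_o2 = p_n2 / total, p_o2 / total
print(x_n2, x_o2)  # → 0.8 0.2
```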

  9. Numerical simulation of jet aerodynamics using the three-dimensional Navier-Stokes code PAB3D

    NASA Technical Reports Server (NTRS)

    Pao, S. Paul; Abdol-Hamid, Khaled S.

    1996-01-01

    This report presents a unified method for subsonic and supersonic jet analysis using the three-dimensional Navier-Stokes code PAB3D. The Navier-Stokes code was used to obtain solutions for axisymmetric jets with on-design operating conditions at Mach numbers ranging from 0.6 to 3.0, supersonic jets containing weak shocks and Mach disks, and supersonic jets with nonaxisymmetric nozzle exit geometries. This report discusses computational methods, code implementation, computed results, and comparisons with available experimental data. Very good agreement is shown between the numerical solutions and available experimental data over a wide range of operating conditions. The Navier-Stokes method using the standard Jones-Launder two-equation kappa-epsilon turbulence model can accurately predict jet flow, and such predictions are made without any modification to the published constants for the turbulence model.

  10. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  11. Progress on an implementation of MIFlowCyt in XML

    NASA Astrophysics Data System (ADS)

    Leif, Robert C.; Leif, Stephanie H.

    2015-03-01

    Introduction: The International Society for Advancement of Cytometry (ISAC) Data Standards Task Force (DSTF) has created a standard for the Minimum Information about a Flow Cytometry Experiment (MIFlowCyt 1.0). The CytometryML schemas are based in part upon the Flow Cytometry Standard and Digital Imaging and Communications in Medicine (DICOM) standards. CytometryML has been, and will continue to be, extended and adapted to include MIFlowCyt, as well as to serve as a common standard for flow and image cytometry (digital microscopy). Methods: The MIFlowCyt data-types were created, like the rest of CytometryML, in the XML Schema Definition Language (XSD 1.1). Individual major elements of the MIFlowCyt schema were translated into XML and filled with reasonable data. A small section of the code was formatted with HTML formatting elements. Results: The difference in the amount of detail to be recorded for 1) users of standard techniques, including data analysts, and 2) others, such as method and device creators, laboratory and other managers, engineers, and regulatory specialists, required that separate data-types be created to describe the instrument configuration and components. A very substantial part of the MIFlowCyt element that describes the Experimental Overview, and substantial parts of several other major elements, have been developed. Conclusions: The future use of structured XML tags and web technology should facilitate the searching of experimental information, its presentation, and its inclusion in structured research, clinical, and regulatory documents, as well as demonstrating in publications adherence to the MIFlowCyt standard. The use of CytometryML together with XML technology should also allow the textual and numeric data to be published with web technology without any change in composition. Preliminary testing indicates that CytometryML XML pages can be directly formatted with a combination of HTML and CSS.

  12. [Triple-type theory of statistics and its application in the scientific research of biomedicine].

    PubMed

    Hu, Liang-ping; Liu, Hui-gang

    2005-07-20

    To point out why so many people fail to grasp statistics, and to put forward a "triple-type theory of statistics" to solve the problem in a creative way. Based on long experience in teaching and research in statistics, the triple-type theory was formulated and clarified. Examples are provided to demonstrate that the three types, i.e., the expressive type, the prototype, and the standardized type, are essential for people to apply statistics rationally both in theory and in practice; moreover, instances demonstrate that the three types are correlated with each other. The theory helps people to see the essence when interpreting and analyzing problems of experimental design and statistical analysis in medical research. Investigation reveals that for some questions the three types are mutually identical; for some, the prototype is also the standardized type; for others, the three types are distinct from each other. In some multifactor experimental studies, no standardized type corresponding to the prototype exists at all, because the researchers committed the mistake of "incomplete control" in setting up experimental groups; this is a problem that should be solved by the concept and method of "division". Once the triple-type for a given question is clarified, a proper experimental design and statistical method can be chosen easily. The triple-type theory of statistics can help people avoid statistical mistakes, or at least decrease the misuse rate dramatically, and improve the quality, level, and speed of biomedical research in which statistics is applied. It can also improve the quality of statistical textbooks and the effectiveness of statistics teaching, and it has demonstrated how to advance biomedical statistics.

  13. An agenda for assessing and improving conservation impacts of sustainability standards in tropical agriculture.

    PubMed

    Milder, Jeffrey C; Arbuthnot, Margaret; Blackman, Allen; Brooks, Sharon E; Giovannucci, Daniele; Gross, Lee; Kennedy, Elizabeth T; Komives, Kristin; Lambin, Eric F; Lee, Audrey; Meyer, Daniel; Newton, Peter; Phalan, Ben; Schroth, Götz; Semroc, Bambi; Van Rikxoort, Henk; Zrust, Michal

    2015-04-01

    Sustainability standards and certification serve to differentiate and provide market recognition to goods produced in accordance with social and environmental good practices, typically including practices to protect biodiversity. Such standards have seen rapid growth, including in tropical agricultural commodities such as cocoa, coffee, palm oil, soybeans, and tea. Given the role of sustainability standards in influencing land use in hotspots of biodiversity, deforestation, and agricultural intensification, much could be gained from efforts to evaluate and increase the conservation payoff of these schemes. To this end, we devised a systematic approach for monitoring and evaluating the conservation impacts of agricultural sustainability standards and for using the resulting evidence to improve the effectiveness of such standards over time. The approach is oriented around a set of hypotheses and corresponding research questions about how sustainability standards are predicted to deliver conservation benefits. These questions are addressed through data from multiple sources, including basic common information from certification audits; field monitoring of environmental outcomes at a sample of certified sites; and rigorous impact assessment research based on experimental or quasi-experimental methods. Integration of these sources can generate time-series data that are comparable across sites and regions and provide detailed portraits of the effects of sustainability standards. To implement this approach, we propose new collaborations between the conservation research community and the sustainability standards community to develop common indicators and monitoring protocols, foster data sharing and synthesis, and link research and practice more effectively. 
As the role of sustainability standards in tropical land-use governance continues to evolve, robust evidence on the factors contributing to effectiveness can help to ensure that such standards are designed and implemented to maximize benefits for biodiversity conservation. © 2014 Society for Conservation Biology.

  14. Establishment of a New Zealand rabbit model of spinal tuberculosis.

    PubMed

    Geng, Guangqi; Wang, Qian; Shi, Jiandang; Yan, Junfa; Niu, Ningkui; Wang, Zili

    2015-04-01

    This was an experimental study. To investigate and evaluate an experimental method of establishing a New Zealand rabbit model of spinal tuberculosis. Establishing animal models of tuberculosis is critical to the experimental and clinical study of tuberculosis, especially spinal tuberculosis. In earlier attempts, however, the rapid spread of Mycobacterium tuberculosis and the resulting high mortality thwarted model establishment, and no reliable animal model of spinal tuberculosis had been established since. Forty-two New Zealand rabbits were randomly divided into experimental (n=20), control (n=20), and blank (n=2) groups. Experimental animals were sensitized with complete Freund's adjuvant. A hole drilled under the upper endplate of the L4 vertebral body was filled with a gelfoam sponge infused with 0.1 mL of H37Rv standard M. tuberculosis suspension (in controls, culture medium and saline). Blank animals received no treatment. Survival 8 weeks after surgery was 89.5%, 94.7%, and 100% in the experimental, control, and blank groups, respectively. The model was successfully established in all surviving experimental rabbits. In experimental animals, vertebral body destruction at 4 weeks was seen in 50% by x-ray and in 83.3% by computed tomography reconstruction and magnetic resonance imaging; at 8 weeks, in 58.8% by x-ray and in 100% by computed tomography reconstruction and magnetic resonance imaging. At 8 weeks, experimental animals developed vertebral destruction, granulation, and necrosis, and 17.6% had psoas abscess. Histopathology revealed numerous lymphocytes and epithelioid cells, trabecular bone fracture, and coagulative necrosis in the vertebrae of experimental animals; bacterial culture was 52.9% positive. Control and blank animals showed no such changes. A New Zealand rabbit model of spinal tuberculosis can be successfully established by drilling a hole in the upper endplate of the vertebral body and filling it with a gelfoam sponge infused with H37Rv standard M. tuberculosis suspension after sensitization with complete Freund's adjuvant.

  15. In-Vivo Assessment of Femoral Bone Strength Using Finite Element Analysis (FEA) Based on Routine MDCT Imaging: A Preliminary Study on Patients with Vertebral Fractures

    PubMed Central

    Liebl, Hans; Garcia, Eduardo Grande; Holzner, Fabian; Noel, Peter B.; Burgkart, Rainer; Rummeny, Ernst J.; Baum, Thomas; Bauer, Jan S.

    2015-01-01

    Purpose To experimentally validate a non-linear finite element analysis (FEA) modeling approach for assessing in-vitro fracture risk at the proximal femur, and to transfer the method to standard in-vivo multi-detector computed tomography (MDCT) data of the hip, aiming to predict additional hip fracture risk in subjects with and without osteoporosis-associated vertebral fractures, using bone mineral density (BMD) measurements as the gold standard. Methods One fresh-frozen human femur specimen was mechanically tested and fractured, simulating stance and clinically relevant fall loading configurations of the hip. After experimental in-vitro validation, the FEA simulation protocol was transferred to standard contrast-enhanced in-vivo MDCT images to calculate individual hip fracture risk for 4 subjects each with and without a history of osteoporotic vertebral fractures, matched by age and gender. In addition, FEA-based risk factor calculations were compared with manual femoral BMD measurements for all subjects. Results In-vitro simulations showed good correlation with the experimentally measured strains both in the stance (R2 = 0.963) and the fall configuration (R2 = 0.976). The simulated maximum stress overestimated the experimental failure load (4743 N) by 14.7% (5440 N), while the simulated maximum strain overestimated it by 4.7% (4968 N). The simulated failed elements coincided precisely with the experimentally determined fracture locations. BMD measurements in subjects with a history of osteoporotic vertebral fractures did not differ significantly from those in subjects without fragility fractures (femoral head: p = 0.989; femoral neck: p = 0.366), but the former showed higher FEA-based risk factors for additional incident hip fractures (p = 0.028). Conclusion FEA simulations were successfully validated by elastic and destructive in-vitro experiments. In the subsequent in-vivo analyses, the MDCT-based FEA risk factor differences for additional hip fractures were not mirrored by the corresponding BMD measurements. Our data suggest that MDCT-derived FEA models may assess bone strength more accurately than BMD measurements alone, providing a valuable in-vivo fracture risk assessment tool. PMID:25723187
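
The in-vitro validation compares simulated with experimentally measured strains via the coefficient of determination; a minimal sketch with invented strain values (not the study's data):

```python
import numpy as np

# Illustrative only: made-up measured vs. simulated strain pairs, scored
# with R^2 as in the model-validation step described above.

def r_squared(measured, simulated):
    m = np.asarray(measured, dtype=float)
    s = np.asarray(simulated, dtype=float)
    ss_res = np.sum((m - s) ** 2)              # residual sum of squares
    ss_tot = np.sum((m - np.mean(m)) ** 2)     # total sum of squares
    return 1.0 - ss_res / ss_tot

measured = np.array([100.0, 220.0, 310.0, 400.0, 520.0])   # microstrain (invented)
simulated = np.array([110.0, 210.0, 300.0, 420.0, 510.0])
print(round(r_squared(measured, simulated), 3))  # → 0.992
```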

  16. Classifying BCI signals from novice users with extreme learning machine

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bermúdez, Germán; Bueno-Crespo, Andrés; José Martinez-Albaladejo, F.

    2017-07-01

    A brain computer interface (BCI) allows the user to control external devices using only the electrical activity of the brain. In order to improve such systems, several approaches have been proposed. However, it is usual to test algorithms on standard BCI signals from expert users or from repositories available on the Internet. In this work, an extreme learning machine (ELM) has been tested on signals from 5 novice users and compared with standard classification algorithms. Experimental results show that ELM is a suitable method for classifying electroencephalogram signals from novice users.
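
The core of an extreme learning machine is a fixed random hidden layer with output weights solved in a single least-squares step. A minimal sketch on toy data (the feature vectors and labels are invented; a real BCI application would start from EEG-derived features):

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    """Random hidden layer; only the output weights are learned, by least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # fixed random input weights
    b = rng.normal(size=n_hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # one-shot linear solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.sign(np.tanh(X @ W + b) @ beta)

# Toy two-class problem: label is the sign of the first feature
X = rng.normal(size=(200, 8))
y = np.sign(X[:, 0])
W, b, beta = elm_train(X, y)
acc = np.mean(elm_predict(X, W, b, beta) == y)
print(acc)
```

The appeal of ELM over iteratively trained networks is that training reduces to one linear solve, which suits the small, quickly collected data sets typical of novice BCI users.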

  17. Lattice field theory applications in high energy physics

    NASA Astrophysics Data System (ADS)

    Gottlieb, Steven

    2016-10-01

    Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.

  18. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
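
The weighting step described above amounts to a spectrum-weighted average of the mono-energetic Monte Carlo factors; a brief sketch in which the energy grid, correction factors, and air kerma spectrum are all invented for illustration:

```python
import numpy as np

# Hedged sketch: k_mono(E) stands in for Monte Carlo correction factors for
# mono-energetic photons, w_kerma(E) for the measured air kerma spectrum of
# one x-ray quality. None of the numbers are real NMi data.

energies = np.array([60.0, 80.0, 100.0, 150.0, 200.0])       # keV grid
k_mono = np.array([1.002, 1.004, 1.006, 1.010, 1.015])       # per-energy factors
w_kerma = np.array([0.10, 0.30, 0.35, 0.20, 0.05])           # spectral weights

# Quality-dependent correction factor: weighted mean over the spectrum
k_quality = np.sum(w_kerma * k_mono) / np.sum(w_kerma)
print(k_quality)
```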

  19. An extended set of yeast-based functional assays accurately identifies human disease mutations

    PubMed Central

    Sun, Song; Yang, Fan; Tan, Guihong; Costanzo, Michael; Oughtred, Rose; Hirschman, Jodi; Theesfeld, Chandra L.; Bansal, Pritpal; Sahni, Nidhi; Yi, Song; Yu, Analyn; Tyagi, Tanya; Tie, Cathy; Hill, David E.; Vidal, Marc; Andrews, Brenda J.; Boone, Charles; Dolinski, Kara; Roth, Frederick P.

    2016-01-01

    We can now routinely identify coding variants within individual human genomes. A pressing challenge is to determine which variants disrupt the function of disease-associated genes. Both experimental and computational methods exist to predict pathogenicity of human genetic variation. However, a systematic performance comparison between them has been lacking. Therefore, we developed and exploited a panel of 26 yeast-based functional complementation assays to measure the impact of 179 variants (101 disease- and 78 non-disease-associated variants) from 22 human disease genes. Using the resulting reference standard, we show that experimental functional assays in a 1-billion-year diverged model organism can identify pathogenic alleles with significantly higher precision and specificity than current computational methods. PMID:26975778

  20. Coupling of free space sub-terahertz waves into dielectric slabs using PC waveguides.

    PubMed

    Ghattan, Z; Hasek, T; Shahabadi, M; Koch, M

    2008-04-28

    The paper presents theoretical and experimental results on photonic crystal structures which work under the self-collimation condition to couple free space waves into dielectric slabs in the sub-terahertz range. Using a standard machining process, two-dimensional photonic crystal structures consisting of a square array of air holes in the dielectric medium are fabricated. One of the structures has two adjacent parallel line-defects that improve the coupling efficiency. This leads to a combination of self-collimation and directional emission of electromagnetic waves. The experimental results are in good agreement with those of the Finite-Element-Method calculations. Experimentally we achieve a coupling efficiency of 63%.

  1. Extended Glauert tip correction to include vortex rollup effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maniaci, David; Schmitz, Sven

    Wind turbine load predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.

  2. Extended Glauert tip correction to include vortex rollup effects

    DOE PAGES

    Maniaci, David; Schmitz, Sven

    2016-10-03

    Wind turbine load predictions by blade-element momentum theory using the standard tip-loss correction have been shown to over-predict loading near the blade tip in comparison to experimental data. This over-prediction is theorized to be due to the assumption of light rotor loading inherent in the standard tip-loss correction model of Glauert. A higher-order free-wake method, WindDVE, is used to compute the rollup process of the trailing vortex sheets downstream of wind turbine blades. The results obtained serve as an exact correction to the Glauert tip correction used in blade-element momentum methods. Lastly, it is found that accounting for the effects of tip vortex rollup within the Glauert tip correction indeed results in improved prediction of blade tip loads computed by blade-element momentum methods.
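
The baseline that the authors refine is Glauert's standard correction built on Prandtl's tip-loss factor, F = (2/π) arccos(exp(−f)) with f = (B/2)(R − r)/(r sin φ). A brief sketch of that baseline with illustrative rotor values (not the paper's turbine):

```python
import numpy as np

def prandtl_tip_loss(r, R, B, phi):
    """Prandtl tip-loss factor: F = (2/pi)*arccos(exp(-f)),
    f = (B/2)*(R - r)/(r*sin(phi)); F -> 0 at the tip, ~1 inboard."""
    f = (B / 2.0) * (R - r) / (r * np.sin(phi))
    return (2.0 / np.pi) * np.arccos(np.exp(-f))

R, B = 50.0, 3             # rotor radius [m] and blade count (illustrative)
phi = np.radians(8.0)      # local inflow angle (illustrative)
for r in (25.0, 45.0, 49.5):
    print(r, round(prandtl_tip_loss(r, R, B, phi), 3))
```

In a BEM iteration this factor multiplies the momentum-balance induction terms; the over-prediction discussed above arises because the formula assumes lightly loaded rotors, which is what the WindDVE free-wake rollup computation corrects.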

  3. Comparison of bursting pressure results of LPG tank using experimental and finite element method.

    PubMed

    Aksoley, M Egemen; Ozcelik, Babur; Bican, Ismail

    2008-03-01

    In this study, the resistance of liquefied-petroleum gas (LPG) tanks produced from carbon steel sheet metal of different thicknesses has been investigated under increasing internal pressure by bursting pressure experiments and the non-linear finite element method (FEM). The designs of the sheet-metal LPG tanks used in the study were realized through analytical calculations made in consideration of the related standards. Bursting pressure tests aimed at decreasing the sheet thickness of LPG tanks used in industry were performed. It has been shown that the LPG tanks can be produced in compliance with the standards when the sheet thickness is lowered from 3.0 to 2.8 mm. The FEM results agreed closely with the bursting results obtained from the experiments.

  4. A novel computer-aided method to fabricate a custom one-piece glass fiber dowel-and-core based on digitized impression and crown preparation data.

    PubMed

    Chen, Zhiyu; Li, Ya; Deng, Xuliang; Wang, Xinzhi

    2014-06-01

    Fiber-reinforced composite dowels have been widely used for their superior biomechanical properties; however, their preformed shape cannot fit irregularly shaped root canals. This study aimed to describe a novel computer-aided method to create a custom-made one-piece dowel-and-core based on the digitization of impressions and clinical standard crown preparations. A standard maxillary die stone model containing three prepared teeth requiring dowel restorations (a maxillary lateral incisor, canine, and premolar) was made. It was then mounted on an average-value articulator with the mandibular stone model to simulate natural occlusion. Impressions of each tooth were obtained using vinylpolysiloxane with a sectional dual-arch tray and digitized with an optical scanner. The dowel-and-core virtual model was created by combining the 3D dowel data from the impression digitization with core data selected from a standard crown preparation database of 107 records collected from clinics and digitized. The position of the chosen digital core was manually adjusted to coordinate with the adjacent teeth and fulfill the crown restorative requirements. Based on the virtual models, one-piece custom dowel-and-cores for the three experimental teeth were milled from a glass fiber block with computer-aided manufacturing techniques. Furthermore, two patients were treated to evaluate the practicality of the new method. The one-piece glass fiber dowel-and-cores made for the experimental teeth fulfilled the clinical requirements for dowel restorations, and the patient treatments validated the technique. This novel computer-aided method to create a custom one-piece glass fiber dowel-and-core proved to be practical and efficient. © 2013 by the American College of Prosthodontists.

  5. Analysis of statistical and standard algorithms for detecting muscle onset with surface electromyography

    PubMed Central

    Tweedell, Andrew J.; Haynes, Courtney A.

    2017-01-01

    The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
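
A much-simplified single-changepoint sketch in the spirit of the compared algorithms (not the paper's Bayesian formulation): simulated EMG as low-variance baseline followed by higher-variance activity, with the onset chosen to maximize a two-segment Gaussian log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def onset_changepoint(x):
    """Return the split index maximizing the two-segment Gaussian
    log-likelihood, i.e. the most likely variance changepoint."""
    n = len(x)
    best_t, best_ll = None, -np.inf
    for t in range(20, n - 20):            # keep both segments non-trivial
        v1 = np.var(x[:t]) + 1e-12
        v2 = np.var(x[t:]) + 1e-12
        ll = -0.5 * (t * np.log(v1) + (n - t) * np.log(v2))
        if ll > best_ll:
            best_ll, best_t = ll, t
    return best_t

# Simulated EMG: 500 samples of rest, then 500 samples of a burst
true_onset = 500
signal = np.concatenate([rng.normal(0, 0.05, true_onset),
                         rng.normal(0, 0.50, 500)])
est = onset_changepoint(signal)
print(est)
```

With a clear variance increase as here, the maximum-likelihood split lands within a few samples of the true onset; the paper's Bayesian changepoint methods additionally yield a posterior probability over candidate onsets.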

  6. A novel second-order standard addition analytical method based on data processing with multidimensional partial least-squares and residual bilinearization.

    PubMed

    Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C

    2009-10-05

    In the presence of analyte-background interactions and a significant background signal, both second-order multivariate calibration and standard addition are required for successful analyte quantitation achieving the second-order advantage. This report discusses a modified second-order standard addition method, in which the test data matrix is subtracted from the standard addition matrices, and quantitation proceeds via the classical external calibration procedure. It is shown that this novel data processing method allows one to apply not only parallel factor analysis (PARAFAC) and multivariate curve resolution-alternating least-squares (MCR-ALS), but also the recently introduced and more flexible partial least-squares (PLS) models coupled to residual bilinearization (RBL). In particular, the multidimensional variant N-PLS/RBL is shown to produce the best analytical results. The comparison is carried out with the aid of a set of simulated data, as well as two experimental data sets: one aimed at the determination of salicylate in human serum in the presence of naproxen as an additional interferent, and the second one devoted to the analysis of danofloxacin in human serum in the presence of salicylate.
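
The matrix-subtraction idea can be sketched on toy bilinear data. Hedges: the profiles and concentrations below are invented, and the background is deliberately chosen orthogonal to the analyte so that a plain projection can stand in for the PLS/RBL modeling that handles real spectral overlap.

```python
import numpy as np

# Analyte profiles in the two instrumental modes (normalized, invented)
x = np.array([0.0, 1.0, 2.0, 1.0, 0.0]); x /= np.linalg.norm(x)
y = np.array([1.0, 2.0, 1.0, 0.0, 0.0]); y /= np.linalg.norm(y)
# Background mode-1 profile, chosen orthogonal to x for this sketch
bx = np.array([1.0, 0.0, 0.0, 0.0, 1.0]); bx /= np.linalg.norm(bx)

c0 = 0.7                          # "unknown" analyte level in the test sample
adds = np.array([0.5, 1.0, 2.0])  # standard-addition concentrations

analyte = np.outer(x, y)                      # bilinear analyte signal
background = 3.0 * np.outer(bx, y[::-1])      # bilinear background signal
D_test = c0 * analyte + background
D_adds = [(c0 + c) * analyte + background for c in adds]

# Key step: subtracting the test matrix cancels the background exactly,
# leaving pure analyte contributions proportional to the added amounts.
deltas = [D - D_test for D in D_adds]
scores = np.array([np.sum(d * analyte) for d in deltas])
k = np.sum(scores * adds) / np.sum(adds**2)   # external calibration slope
c_est = np.sum(D_test * analyte) / k          # valid because background ⟂ analyte
print(round(c_est, 3))  # → 0.7
```

In real data the background is not orthogonal to the analyte, which is exactly why the subtracted matrices are processed with PARAFAC, MCR-ALS, or N-PLS/RBL rather than a simple projection.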

  7. Prediction of Phase Behavior of Spray-Dried Amorphous Solid Dispersions: Assessment of Thermodynamic Models, Standard Screening Methods and a Novel Atomization Screening Device with Regard to Prediction Accuracy

    PubMed Central

    Chavez, Pierre-François; Meeus, Joke; Robin, Florent; Schubert, Martin Alexander; Somville, Pascal

    2018-01-01

    The evaluation of drug–polymer miscibility in the early phase of drug development is essential to ensure successful amorphous solid dispersion (ASD) manufacturing. This work compares thermodynamic models, conventional experimental screening methods (solvent casting, quench cooling), and a novel atomization screening device based on their ability to predict drug–polymer miscibility, solid state properties (Tg value and width), and adequate polymer selection during the development of spray-dried amorphous solid dispersions (SDASDs). Binary ASDs of four drugs and seven polymers were produced at 20:80, 40:60, 60:40, and 80:20 (w/w). Samples were systematically analyzed using modulated differential scanning calorimetry (mDSC) and X-ray powder diffraction (XRPD). Principal component analysis (PCA) was used to qualitatively assess the predictive ability of the screening methods with regard to SDASD development. Poor correlation was found between theoretical models and experimentally obtained results. Additionally, the limited ability of the usual screening methods to predict the miscibility of SDASDs did not guarantee the appropriate selection of the lead excipient for the manufacturing of robust SDASDs. Contrary to the standard approaches, our novel screening device allowed the selection of the optimal polymer and drug loading and provided insight into the final properties and performance of SDASDs at an early stage, therefore enabling the optimization of scaled-up late-stage development. PMID:29518936

  8. [Standardization of production of process Notopterygii Rhizoma et Radix slices].

    PubMed

    Sun, Zhen-Yang; Wang, Ying-Zi; Nie, Rui-Jie; Zhang, Jing-Zhen; Wang, Si-Yu

    2017-12-01

    Notopterol, isoimperatorin, volatile oil, and extracts (water and ethanol) were used as the research objects in this study to investigate the effects of different softening methods, slice thicknesses, and drying methods on the quality of Notopterygii Rhizoma et Radix slices, and the experimental data were analyzed by the homogeneous distance evaluation method. The results showed that different softening, cutting, and drying processes could affect the content of the five components in Notopterygii Rhizoma et Radix slices. The best processing technology was as follows: non-medicinal parts were removed, along with mildewed, rotten, and moth-eaten parts; the herbs were washed in flowing drinking water and stacked in the drug pool; the moistening method was used for softening, with 1/8 volume of water sprayed per 1 kg of herbs every 2 h and the upper part of the herbs covered with clean, moist cotton; after 12 h of moistening to appropriate softness, the herbs were cut into thick slices (2-4 mm), blast-dried for 4 h at 50 ℃, and turned over twice during drying. The process is practical and provides an experimental basis for the standardization of the processing of Notopterygii Rhizoma et Radix, which is of great significance for improving the quality of Notopterygii Rhizoma et Radix slices. Copyright© by the Chinese Pharmaceutical Association.

  9. Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data

    NASA Astrophysics Data System (ADS)

    An, L.; Zhang, J.; Gong, L.

    2018-04-01

    Playing an important role in gathering information on social infrastructure damage, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR acquisitions are not always available, so developing a building damage detection method that uses post-seismic data only is of great importance. In this paper, the authors initiate an experimental investigation to establish an object-based feature-analysis classification method for building damage recognition.

  10. Experimental rill erosion research vs. model concepts - quantification of the hydraulic and erosional efficiency of rills

    NASA Astrophysics Data System (ADS)

    Wirtz, Stefan

    2014-05-01

    In soil erosion research, rills are believed to be among the most efficient erosion forms. They act as preferential flow paths for overland flow and hence become the most efficient sediment sources in a catchment. However, their fraction of the overall detachment in a given area, compared to other soil erosion processes, is contentious. A prerequisite for addressing this question is the standardization of the measurement methods used for rill erosion quantification: only with a standardized method do the results of different studies become comparable and can they be synthesized into one overall statement. In rill erosion research, such a standardized field method has been missing until now. Hence, the first aim of this study is to present an experimental setup that enables us to obtain comparable data about process dynamics in eroding rills under standardized conditions in the field. Using this rill experiment, the runoff efficiency of rills (second aim) and the fraction of rill erosion in total soil loss (third aim) in a catchment are quantified. The erosion rate [g m-2] in the rills is between twenty and sixty times higher than on the interrill areas, and the specific discharge [L s-1 m-2] in the rills is about 2000 times higher. The identification and quantification of different rill erosion processes are the fourth aim of this project. Gravitative processes like side-wall failure and headcut and knickpoint retreat provide up to 94 % of the detached sediment quantity. In soil erosion models, only the incision into the rill's bottom is considered; hence the modelled results are unsatisfactory. Given the low quality of soil erosion model results, the fifth aim of the study is to review two basic physical assumptions using the rill experiments. In contrast to the model assumptions, there is no clear linear correlation between any hydraulic parameter and the detachment rate, and the transport rate is capable of exceeding the transport capacity.
In conclusion, the results clearly show the need for experimental field data obtained under conditions as close to reality as possible. This is the only way to improve the fundamental knowledge about the function and impact of the different processes in rill erosion. A better understanding of the process combinations is a fundamental prerequisite for developing a truly functioning soil erosion model. In such a model, spatial and temporal variability, as well as the combination of different sub-processes, must be considered. Given the experimental results of this study, the simulation of natural processes using simple, static mathematical equations does not seem to be possible.

  11. Ultrasound biomicroscopy (UBM) and scanning acoustic microscopy (SAM) for the assessment of hernia mesh integration: a comparison to standard histology in an experimental model.

    PubMed

    Petter-Puchner, A; Gruber-Blum, S; Walder, N; Fortelny, R H; Redl, H; Raum, K

    2014-08-01

    Mesh integration is a key parameter for reliable and safe hernia repair. So far, its assessment has been based on histology obtained from rare second-look operations or experimental research; therefore, non-invasive high-resolution imaging techniques would be of great value. Ultrasound biomicroscopy (UBM) and scanning acoustic microscopy (SAM) have shown potential in the imaging of hard and soft tissues. This experimental study compared the detection of mesh integration, foreign body reaction, and scar formation by UBM/SAM with standard histology. Ten titanized polypropylene meshes were implanted in rats in a model of onlay repair. Seventeen days postoperatively, the animals were killed and samples were paraffin-embedded for histology (H&E, cresyl violet) or processed for postmortem UBM/SAM. The observation period was uneventful and the meshes appeared well integrated. Relocation of neighboring cross-sectional levels could easily be achieved with the 40-MHz UBM, and granulation tissue could be distinguished from adjacent muscle tissue layers. The spatial resolution of approximately 8 μm of the 200-MHz system images was comparable to standard histology (2.5-5× magnification) and allowed a clear identification of mesh fibers and different tissue types, e.g., scar, fat, granulation, and muscle tissues, as well as vessels, abscedations, and foreign body giant cell clusters. This pilot study demonstrates the potential of high-frequency ultrasound to assess hernia mesh integration non-invasively. Although the methods lack cell-specific information, tissue integration could reliably be assessed. The possibility of conducting UBM in vivo advocates this method as a guidance tool for the indication of second-look operations and subsequent elaborate histological analyses.

  12. Exciting (the) Vacuum: Possible Manifestations of the Higgs particle at the LHC

    ScienceCinema

    David Kaplan

    2017-12-09

    The Higgs boson is the particle most anticipated at the LHC. However, there is currently no leading theory of electroweak symmetry breaking (and the 'Higgs mechanism'). The many possibilities suggest many ways the Higgs could appear in the detectors, some of which require non-standard search methods. I will review the current state of beyond-the-standard-model physics and its implications for Higgs physics. I will then discuss some non-standard Higgs decays and suggest (perhaps naive) new experimental strategies for detecting the Higgs in such cases. In some models, while part of the new physics at the weak scale would be visible, the Higgs would be nearly impossible to detect.

  13. Dot enzyme-linked immunosorbent assay (ELISA) for the detection of Toxocara infection using a rat model.

    PubMed

    Paller, Vachel Gay V; Besana, Cyrelle M; Valdez, Isabel Kristine M

    2017-12-01

    Toxocariasis is a zoonotic disease usually caused by the dog and cat roundworms Toxocara canis and T. cati. Detection and diagnosis are difficult in paratenic and accidental hosts, including humans, as the parasites cannot be detected through conventional methods such as fecal examination. Diagnosis therefore relies on immunological and molecular methods such as enzyme-linked immunosorbent assay (ELISA) and Western blot, which are both time-consuming and require sophisticated equipment. In the Philippines, only a few studies are available on Toxocara seroprevalence; there is therefore a need to adapt methods for serodiagnosis of Toxocara infection in humans to the Philippine setting. A dot enzyme-linked immunosorbent assay (dot-ELISA) was standardized using T. canis excretory-secretory antigens. Test sera were collected from laboratory rats (Sprague-Dawley strain) experimentally infected with embryonated eggs of T. canis and Ascaris suum, as well as from rice field rats naturally infected with Taenia taeniaeformis and Nippostrongylus sp. The optimum conditions were a 20 µg/ml antigen concentration and a 1:10 serum dilution. The sensitivity, specificity, and positive and negative predictive values were 90% (95% CI 55.5-99.7%), 100% (95% CI 69.2-100.0%), 100% (95% CI 66.4-100%), and 90.9% (95% CI 58.7-99.8%), respectively. Dot-ELISA has the potential to be developed into a cheaper, simpler, and more practical method for the detection of anti-Toxocara antibodies in accidental hosts. This is a preliminary study conducted on experimental animals before optimization and standardization for human serum samples.
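The reported predictive values follow from a standard 2×2 contingency table. As a quick sketch, the abstract's point estimates are reproduced by hypothetical counts of 9 true positives, 1 false negative, 10 true negatives, and 0 false positives; these counts are inferred for illustration, not stated in the record:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 contingency-table metrics for a diagnostic assay."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive value
    npv = tn / (tn + fn)  # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts consistent with the reported point estimates.
sens, spec, ppv, npv = diagnostic_metrics(tp=9, fp=0, tn=10, fn=1)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")
# → sensitivity=90.0%, specificity=100.0%, PPV=100.0%, NPV=90.9%
```

The wide confidence intervals reflect the small sample sizes behind these counts.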

  14. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency, high-resolution frequency measurement method based on the quantized phase-step law is presented in this paper. Utilizing the variation law of phase differences, direct different-frequency phase processing, and the phase group synchronization phenomenon, and combining an A/D converter with the adaptive phase-shifting principle, a counter gate is established at the phase coincidences occurring at one-group intervals, which eliminates the ±1 counting error of the traditional frequency measurement method. More importantly, direct phase comparison, measurement, and control between arbitrary periodic signals have been realized without frequency normalization. Experimental results show that sub-picosecond resolution can easily be obtained in frequency measurement, frequency standard comparison, and phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation and positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.

  15. Thermal Property Measurement of Semiconductor Melt using Modified Laser Flash Method

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Zhu, Shen; Ban, Heng; Li, Chao; Scripa, Rosalla N.; Su, Ching-Hua; Lehoczky, Sandor L.

    2003-01-01

    This study further developed the standard laser flash method to measure multiple thermal properties of semiconductor melts. The modified method can determine the thermal diffusivity, thermal conductivity, and specific heat capacity of the melt simultaneously. The transient heat transfer process in the melt and its quartz container was studied numerically in detail. A fitting procedure, based on numerical simulation results and a least root-mean-square-error fit to the experimental data, was used to extract the values of specific heat capacity, thermal conductivity, and thermal diffusivity. This modified method is a step forward from the standard laser flash method, which is usually used to measure the thermal diffusivity of solids. The results for tellurium (Te) at 873 K (specific heat capacity 300.2 J/(kg·K), thermal conductivity 3.50 W/(m·K), thermal diffusivity 2.04 × 10⁻⁶ m²/s) are within the range reported in the literature. The uncertainty analysis showed the quantitative effect of the sample geometry, the transient temperature measured, and the energy of the laser pulse.
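The three reported properties are linked by the standard relation α = k/(ρ·cp), which offers a quick consistency check. A minimal sketch follows; the melt density is not reported in the abstract and is back-calculated here purely for illustration:

```python
# Reported values for Te at 873 K (from the abstract).
cp = 300.2       # specific heat capacity, J/(kg K)
k = 3.50         # thermal conductivity, W/(m K)
alpha = 2.04e-6  # thermal diffusivity, m^2/s

# alpha = k / (rho * cp), so the implied melt density is:
rho = k / (alpha * cp)
print(f"implied density: {rho:.0f} kg/m^3")
```

The implied density of roughly 5.7 × 10³ kg/m³ is a plausible value for molten tellurium, suggesting the three fitted properties are mutually consistent.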

  16. Comparison of Hi-C results using in-solution versus in-nucleus ligation.

    PubMed

    Nagano, Takashi; Várnai, Csilla; Schoenfelder, Stefan; Javierre, Biola-Maria; Wingett, Steven W; Fraser, Peter

    2015-08-26

    Chromosome conformation capture and various derivative methods such as 4C, 5C and Hi-C have emerged as standard tools to analyze the three-dimensional organization of the genome in the nucleus. These methods employ ligation of diluted cross-linked chromatin complexes, intended to favor proximity-dependent, intra-complex ligation. During development of single-cell Hi-C, we devised an alternative Hi-C protocol with ligation in preserved nuclei rather than in solution. Here we directly compare Hi-C methods employing in-nucleus ligation with the standard in-solution ligation. We show in-nucleus ligation results in consistently lower levels of inter-chromosomal contacts. Through chromatin mixing experiments we show that a significantly large fraction of inter-chromosomal contacts are the result of spurious ligation events formed during in-solution ligation. In-nucleus ligation significantly reduces this source of experimental noise, and results in improved reproducibility between replicates. We also find that in-nucleus ligation eliminates restriction fragment length bias found with in-solution ligation. These improvements result in greater reproducibility of long-range intra-chromosomal and inter-chromosomal contacts, as well as enhanced detection of structural features such as topologically associated domain boundaries. We conclude that in-nucleus ligation captures chromatin interactions more consistently over a wider range of distances, and significantly reduces both experimental noise and bias. In-nucleus ligation creates higher quality Hi-C libraries while simplifying the experimental procedure. We suggest that the entire range of 3C applications are likely to show similar benefits from in-nucleus ligation.

  17. Assessing the Sensitivity of Treatment Effect Estimates to Differential Follow-Up Rates: Implications for Translational Research

    PubMed Central

    McCaffrey, Daniel; Ramchand, Rajeev; Hunter, Sarah B.; Suttorp, Marika

    2012-01-01

    We develop a new tool for assessing the sensitivity of findings on treatment effectiveness to differential follow-up rates in the two treatment conditions being compared. The method censors the group with the higher response rate to create a synthetic respondent group that is then compared with the observed cases in the other condition to estimate a treatment effect. Censoring is done under various assumptions about the strength of the relationship between follow-up and outcomes, to determine whether informative differential dropout could alter inferences relative to estimates from models that assume the data are missing at random. The method provides an intuitive measure of the strength of the association between outcomes and dropout that would be required to alter inferences about treatment effects. Our approach is motivated by translational research in which treatments found to be effective under experimental conditions are tested in standard treatment conditions. In such applications, follow-up rates in the experimental setting are likely to be substantially higher than in the standard setting, especially when observational data are used in the evaluation. We test the method on a case study evaluating the effectiveness of an evidence-supported adolescent substance abuse treatment program (Motivational Enhancement Therapy/Cognitive Behavioral Therapy-5 [MET/CBT-5]) delivered by community-based treatment providers relative to its performance in a controlled research trial. In this case study, follow-up rates in the community-based settings were extremely low (54%) compared to the experimental setting (95%), giving rise to concerns about non-ignorable dropout. PMID:22956890

  18. Evaluation of experimental methods for assessing safety for ultrasound radiation force elastography.

    PubMed

    Skurczynski, M J; Duck, F A; Shipley, J A; Bamber, J C; Melodelima, D

    2009-08-01

    Standard test tools have been evaluated for the assessment of safety associated with a prototype transducer intended for a novel radiation force elastographic imaging system. In particular, safety has been evaluated by direct measurement of temperature rise, using a standard thermal test object, and detection of inertial cavitation from acoustic emission. These direct measurements have been compared with values of the thermal index and mechanical index, calculated from acoustic measurements in water using standard formulae. It is concluded that measurements using a thermal test object can be an effective alternative to the calculation of thermal index for evaluating thermal hazard. Measurement of the threshold for cavitation was subject to considerable variability, and it is concluded that the mechanical index still remains the preferred standard means for assessing cavitation hazard.
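The mechanical index compared against above is computed from the derated peak rarefactional pressure and the centre frequency, per the ultrasound output display standard. A minimal sketch; the example pressure and frequency are illustrative values, not measurements from the paper:

```python
import math

# Mechanical index: MI = derated peak rarefactional pressure (MPa)
# divided by the square root of the centre frequency (MHz).
def mechanical_index(p_r_mpa, f_c_mhz):
    return p_r_mpa / math.sqrt(f_c_mhz)

# Illustrative example: 1.5 MPa rarefactional pressure at 4 MHz.
print(f"MI = {mechanical_index(1.5, 4.0):.2f}")  # MI = 0.75
```

Values below the regulatory ceiling of 1.9 are generally considered low cavitation risk for diagnostic imaging.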

  19. Survival of Salmonella Copenhagen in food bowls following contamination with experimentally inoculated raw meat: Effects of time, cleaning, and disinfection

    PubMed Central

    Weese, J Scott; Rousseau, J.

    2006-01-01

    There are concerns regarding the safety of feeding raw meat to household pets. This study demonstrated that Salmonella persists in food bowls that are inoculated with Salmonella-containing raw meat. Standard methods of cleaning and disinfection were minimally effective at eliminating Salmonella contamination. PMID:17017654

  20. Mutation-Based Learning to Improve Student Autonomy and Scientific Inquiry Skills in a Large Genetics Laboratory Course

    ERIC Educational Resources Information Center

    Wu, Jinlu

    2013-01-01

    Laboratory education can play a vital role in developing a learner's autonomy and scientific inquiry skills. In an innovative, mutation-based learning (MBL) approach, students were instructed to redesign a teacher-designed standard experimental protocol by a "mutation" method in a molecular genetics laboratory course. Students could…

  1. The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups

    ERIC Educational Resources Information Center

    Pero-Cebollero, Maribel; Guardia-Olmos, Joan

    2013-01-01

    In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…

  2. Development, optimization, validation and application of faster gas chromatography - flame ionization detector method for the analysis of total petroleum hydrocarbons in contaminated soils.

    PubMed

    Zubair, Abdulrazaq; Pappoe, Michael; James, Lesley A; Hawboldt, Kelly

    2015-12-18

    This paper presents an important new approach to improving the timeliness of total petroleum hydrocarbon (TPH) analysis in soil by gas chromatography - flame ionization detection (GC-FID) using the CCME Canada-Wide Standard reference method. The Canada-Wide Standard (CWS) method is used for the analysis of petroleum hydrocarbon compounds across Canada. However, inter-laboratory application of this method for the analysis of TPH in soil has often shown considerable variability in the results. This could be due, in part, to the different gas chromatography (GC) conditions and the other steps involved in the method, as well as the soil properties. In addition, there are differences in the interpretation of the GC results, which affects the determination of the effectiveness of remediation at hydrocarbon-contaminated sites. In this work, a multivariate experimental design approach was used to develop and validate an analytical method for faster quantitative analysis of TPH in (contaminated) soil. A fractional factorial design (fFD) was used to screen six factors and identify those with the most significant impact on the analysis: injection volume (μL), injection temperature (°C), oven program (°C/min), detector temperature (°C), carrier gas flow rate (mL/min), and solvent ratio (v/v hexane/dichloromethane). The most important factors (carrier gas flow rate and oven program) were then optimized using a central composite response surface design. Robustness testing and validation of the model compared favourably with the experimental results, with a difference of 2.78% in analysis time. This research successfully reduced the method's standard analytical time from 20 to 8 min, with all the carbon fractions eluting. The method was successfully applied to fast TPH analysis of Bunker C oil-contaminated soil.
A reduced analytical time offers many benefits, including improved laboratory reporting times and overall improved clean-up efficiency. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
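The screening step described above can be illustrated with a hypothetical 2^(6-3) resolution-III fractional factorial, which covers six two-level factors in eight runs instead of 64; the generators D=AB, E=AC, F=BC are a textbook choice, not necessarily the design used in the paper:

```python
from itertools import product

# Six screening factors named in the abstract, at coded levels -1/+1.
factors = ["inj_volume", "inj_temp", "oven_ramp",
           "det_temp", "carrier_flow", "solvent_ratio"]

# Full 2^3 factorial in A, B, C; the remaining columns are generated
# as D = AB, E = AC, F = BC (defining relations of a 2^(6-3) design).
runs = [(a, b, c, a * b, a * c, b * c)
        for a, b, c in product((-1, 1), repeat=3)]

for run in runs:
    print(dict(zip(factors, run)))
print(f"{len(runs)} runs instead of {2 ** 6} for a full factorial")
```

Each column is balanced (equal numbers of -1 and +1), so main effects can be estimated from only eight chromatographic runs, at the cost of confounding them with two-factor interactions.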

  3. Indices of polarimetric purity for biological tissues inspection

    NASA Astrophysics Data System (ADS)

    Van Eeckhout, Albert; Lizana, Angel; Garcia-Caurel, Enric; Gil, José J.; Sansa, Adrià; Rodríguez, Carla; Estévez, Irene; González, Emilio; Escalera, Juan C.; Moreno, Ignacio; Campos, Juan

    2018-02-01

    We highlight the interest of using the indices of polarimetric purity (IPPs) for biological tissue inspection. These are three polarimetric metrics focused on the study of the depolarizing behaviour of a sample. The IPPs have been recently proposed in the literature and provide different, more synthesized information than the commonly used depolarization metrics, such as the depolarization index (PΔ) or the depolarizing power (Δ). Compared with standard polarimetric images of biological samples, IPPs enhance the contrast between different tissues of the sample and show differences between similar tissues that are not observed using the other standard techniques. Moreover, they provide further physical information related to the depolarization mechanisms inherent to different tissues. In addition, the algorithm does not require advanced calculations (as in the case of polar decompositions), making the IPPs fast and easy to implement. We also propose a pseudo-colored image method which encodes the sample information as a function of the weights of the different indices. These images allow us to customize the visualization of samples and to highlight certain of their constitutive structures. The interest and potential of the IPP approach are experimentally illustrated throughout the manuscript by comparing polarimetric images of different ex-vivo samples obtained with standard polarimetric methods with those obtained from the IPP analysis. Enhanced contrast and retrieval of new information are experimentally obtained from the different IPP-based images.

  4. Toward a standard in structural genome annotation for prokaryotes

    DOE PAGES

    Tripp, H. James; Sutton, Granger; White, Owen; ...

    2015-07-25

    In an effort to identify the best practice for finding genes in prokaryotic genomes and propose it as a standard for automated annotation pipelines, we collected 1,004,576 peptides from various publicly available resources, and these were used as a basis to evaluate various gene-calling methods. The peptides came from 45 bacterial replicons with average GC content ranging from 31 % to 74 %, biased toward higher-GC-content genomes. Automated, manual, and semi-manual methods were used to tally errors in three widely used gene-calling methods, as evidenced by peptides mapped outside the boundaries of called genes. We found that the consensus set of identical genes predicted by the three methods constitutes only about 70 % of the genes predicted by each individual method (with start and stop required to coincide). Peptide data were useful for evaluating some of the differences between gene callers, but not reliable enough to make the results conclusive, due to limitations inherent in any proteogenomic study. A single, unambiguous, unanimous best practice did not emerge from this analysis, since the available proteomics data were not adequate to provide an objective measurement of the differences in accuracy between these methods. However, as a result of this study, software, reference data, and procedures have been better matched among participants, representing a step toward a much-needed standard. In the absence of a sufficient amount of experimental data to achieve a universal standard, our recommendation is that any of these methods can be used by the community, as long as a single method is employed across all datasets to be compared.
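The "consensus set" amounts to an exact-coordinate intersection of the callers' predictions. A toy sketch with hypothetical gene coordinates (start, stop, strand); none of these values come from the study:

```python
# Genes predicted identically (same start, stop, and strand) by all
# three callers form the consensus. Coordinates are placeholders.
caller_a = {(100, 400, "+"), (500, 900, "-"), (1000, 1600, "+")}
caller_b = {(100, 400, "+"), (520, 900, "-"), (1000, 1600, "+")}
caller_c = {(100, 400, "+"), (500, 900, "-"), (1000, 1600, "+"), (1700, 2000, "+")}

consensus = caller_a & caller_b & caller_c
fraction = len(consensus) / len(caller_a)
print(sorted(consensus))
print(f"fraction of caller A's calls in consensus: {fraction:.0%}")
```

Relaxing the criterion to agreement on stop and strand only (a common alternative, since start-codon choice is the hardest part of gene calling) would intersect on (stop, strand) keys instead and typically yields a larger consensus.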

  5. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural dynamic models.
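The KL step described above can be sketched numerically: discretize an assumed covariance kernel, eigendecompose it, and truncate to the dominant modes. The exponential kernel and every parameter value below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal sketch of a truncated Karhunen-Loeve (KL) expansion of a 1-D
# random field (e.g. bending rigidity along a beam of unit length).
n, sigma, corr_len, n_modes = 200, 0.1, 0.3, 10
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Discrete KL: eigendecomposition of the covariance matrix,
# keeping the n_modes largest eigenpairs.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1][:n_modes]
lam, phi = eigvals[order], eigvecs[:, order]

# One realization: assumed mean value 1.0 plus a weighted sum of modes,
# with independent standard-normal KL coefficients.
rng = np.random.default_rng(0)
xi = rng.standard_normal(n_modes)
field = 1.0 + phi @ (np.sqrt(lam) * xi)
print(field.shape)  # (200,)
```

In the model updating setting, the KL coefficients (here sampled randomly) become the discretized parameters estimated from the measured frequency response functions.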

  6. Accurate thermoelastic tensor and acoustic velocities of NaCl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marcondes, Michel L., E-mail: michel@if.usp.br; Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455; Shukla, Gaurav, E-mail: shukla@physics.umn.edu

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile the available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids of any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  7. FIB preparation of a NiO Wedge-Lamella and STEM X-ray microanalysis for the determination of the experimental k(O-Ni) Cliff-Lorimer coefficient.

    PubMed

    Armigliato, Aldo; Frabboni, Stefano; Gazzadi, Gian Carlo; Rosa, Rodolfo

    2013-02-01

    A method for the fabrication of a wedge-shaped thin NiO lamella by focused ion beam is reported. The starting sample is an oxidized bulk single-crystalline, <100>-oriented, commercial Ni standard. The lamella is employed for the determination, by analytical electron microscopy at 200 kV, of the experimental k(O-Ni) Cliff-Lorimer (G. Cliff & G.W. Lorimer, J Microsc 103, 203-207, 1975) coefficient, according to the extrapolation method of Van Cappellen (E. Van Cappellen, Microsc Microstruct Microanal 1, 1-22, 1990). The result thus obtained is compared to the theoretical k(O-Ni) values either implemented in the commercial software for X-ray microanalysis quantification of the scanning transmission electron microscopy/energy-dispersive spectrometry equipment or calculated by the Monte Carlo method. Significant differences among the three values are found. This confirms that, for a reliable quantification of binary alloys containing light elements, the choice of the Cliff-Lorimer coefficients is crucial and experimental values are recommended.

  8. A survey of methods for the evaluation of tissue engineering scaffold permeability.

    PubMed

    Pennella, F; Cerino, G; Massai, D; Gallo, D; Falvo D'Urso Labate, G; Schiavi, A; Deriu, M A; Audenino, A; Morbiducci, Umberto

    2013-10-01

    The performance of porous scaffolds for tissue engineering (TE) applications is evaluated, in general, in terms of porosity, pore size and distribution, and pore tortuosity. These descriptors are often confounding when applied to characterize transport phenomena within porous scaffolds. Permeability, on the contrary, is a more effective parameter for (1) estimating mass and species transport through the scaffold and (2) describing its topological features, thus allowing a better evaluation of overall scaffold performance. However, the evaluation of TE scaffold permeability suffers from a lack of uniformity and standards in measurement and testing procedures, which makes the comparison of results obtained in different laboratories unfeasible. In this review paper we summarize the most important features influencing TE scaffold permeability, linking them to the theoretical background. An overview of the methods applied for TE scaffold permeability evaluation is given, presenting experimental test benches and computational methods applied (1) to integrate experimental measurements and (2) to support the TE scaffold design process. Both experimental and computational limitations in the permeability evaluation process are also discussed.

  9. Practical approach to subject-specific estimation of knee joint contact force.

    PubMed

    Knarr, Brian A; Higginson, Jill S

    2015-08-20

    Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data; however, this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models' predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Practical approach to subject-specific estimation of knee joint contact force

    PubMed Central

    Knarr, Brian A.; Higginson, Jill S.

    2015-01-01

    Compressive forces experienced at the knee can significantly contribute to cartilage degeneration. Musculoskeletal models enable predictions of the internal forces experienced at the knee, but validation is often not possible, as experimental data detailing loading at the knee joint is limited. Recently available data reporting compressive knee force through direct measurement using instrumented total knee replacements offer a unique opportunity to evaluate the accuracy of models. Previous studies have highlighted the importance of subject-specificity in increasing the accuracy of model predictions; however, these techniques may be unrealistic outside of a research setting. Therefore, the goal of our work was to identify a practical approach for accurate prediction of tibiofemoral knee contact force (KCF). Four methods for prediction of knee contact force were compared: (1) standard static optimization, (2) uniform muscle coordination weighting, (3) subject-specific muscle coordination weighting and (4) subject-specific strength adjustments. Walking trials for three subjects with instrumented knee replacements were used to evaluate the accuracy of model predictions. Predictions utilizing subject-specific muscle coordination weighting yielded the best agreement with experimental data, however this method required in vivo data for weighting factor calibration. Including subject-specific strength adjustments improved models’ predictions compared to standard static optimization, with errors in peak KCF less than 0.5 body weight for all subjects. Overall, combining clinical assessments of muscle strength with standard tools available in the OpenSim software package, such as inverse kinematics and static optimization, appears to be a practical method for predicting joint contact force that can be implemented for many applications. PMID:25952546

  11. Isoelectric points and points of zero charge of metal (hydr)oxides: 50 years after Parks' review.

    PubMed

    Kosmulski, Marek

    2016-12-01

    The pH-dependent surface charging of metal (hydr)oxides is reviewed on the occasion of the 50th anniversary of the publication by G.A. Parks: "Isoelectric points of solid oxides, solid hydroxides, and aqueous hydroxo complex systems" in Chemical Reviews. The point of zero charge (PZC) and isoelectric point (IEP) became standard parameters to characterize metal oxides in aqueous dispersions, and they define adsorption (surface excess) of ions, stability against coagulation, rheological properties of dispersions, etc. They are commonly used in many branches of science including mineral processing, soil science, materials science, geochemistry, environmental engineering, and corrosion science. Parks established standard procedures and experimental conditions which are required to obtain reliable and reproducible values of PZC and IEP. The field remains very active, with more than 300 related papers a year, and the standards established by Parks are still valid. Relevant experimental techniques have improved over the years; in particular, measurements of electrophoretic mobility became easier and more reliable, and the numerical values of PZC and IEP compiled by Parks were confirmed by contemporary publications, with a few exceptions. The present paper is an up-to-date compilation of the values of PZC and IEP of metal oxides. Unlike in former reviews by the same author, which were more comprehensive, only a limited number of selected results are presented and discussed here. On top of the results obtained by means of classical methods (titration and electrokinetic methods), new methods and correlations found over the past 50 years are presented. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Achieving 100% Efficient Postcolumn Hydride Generation for As Speciation Analysis by Atomic Fluorescence Spectrometry.

    PubMed

    Marschner, Karel; Musil, Stanislav; Dědina, Jiří

    2016-04-05

    An experimental setup consisting of a flow injection hydride generator coupled to an atomic fluorescence spectrometer was optimized in order to generate arsanes from tri- and pentavalent inorganic arsenic species (iAs(III), iAs(V)), monomethylarsonic acid (MAs(V)), and dimethylarsinic acid (DMAs(V)) with 100% efficiency with the use of only HCl and NaBH4 as the reagents. The optimal concentration of HCl was 2 mol L(-1); the optimal concentration of NaBH4 was 2.5% (m/v), and the volume of the reaction coil was 8.9 mL. To prevent excessive signal noise due to fluctuations of hydride supply to an atomizer, a new design of a gas-liquid separator was implemented. The optimized experimental setup was subsequently interfaced to HPLC and employed for speciation analysis of arsenic. Two chromatography columns were tested: (i) ion-pair chromatography and (ii) ion exchange chromatography. The latter offered much better results for human urine samples without a need for sample dilution. Due to the equal hydride generation efficiency (and thus the sensitivities) of all As species, a single species standardization by DMAs(V) standard was feasible. The limits of detection for iAs(III), iAs(V), MAs(V), and DMAs(V) were 40, 97, 57, and 55 pg mL(-1), respectively. Accuracy of the method was tested by the analysis of the standard reference material (human urine NIST 2669), and the method was also verified by the comparative analyses of human urine samples collected from five individuals with an independent reference method.

  13. An experimental topographic amplification study at Los Alamos National Laboratory using ambient vibrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stolte, Andrew C.; Cox, Brady R.; Lee, Richard C.

    An experimental study aimed at investigating potential topographic amplification of seismic waves was conducted on a 50-m-tall and 185-m-wide soft-rock ridge located at Los Alamos National Laboratory near Los Alamos, New Mexico. Ten portable broadband seismograph stations were placed in arrays across the ridge and left to record ambient vibration data for ~9 hours. Clear evidence of topographic amplification was observed by comparing spectral ratios calculated from ambient noise recordings at the toe, slope, and crest of the instrumented ridge. The inferred resonance frequency of the ridge obtained from the experimental recordings was found to agree well with several simple estimates of the theoretical resonance frequency based on its geometry and stiffness. Results support the feasibility of quantifying the frequency range of topographic amplification solely using ambient vibrations, rather than strong or weak ground motions. Additionally, comparisons have been made between a number of widely used experimental methods for quantifying topographic effects, such as the standard spectral ratio, median reference method, and horizontal-to-vertical spectral ratio. As a result, differences in the amplification and frequency range of topographic effects indicated by these methods highlight the importance of choosing a reference condition that is appropriate for the site-specific conditions and goals associated with an experimental topographic amplification study.

  14. An experimental topographic amplification study at Los Alamos National Laboratory using ambient vibrations

    DOE PAGES

    Stolte, Andrew C.; Cox, Brady R.; Lee, Richard C.

    2017-03-14

    An experimental study aimed at investigating potential topographic amplification of seismic waves was conducted on a 50-m-tall and 185-m-wide soft-rock ridge located at Los Alamos National Laboratory near Los Alamos, New Mexico. Ten portable broadband seismograph stations were placed in arrays across the ridge and left to record ambient vibration data for ~9 hours. Clear evidence of topographic amplification was observed by comparing spectral ratios calculated from ambient noise recordings at the toe, slope, and crest of the instrumented ridge. The inferred resonance frequency of the ridge obtained from the experimental recordings was found to agree well with several simple estimates of the theoretical resonance frequency based on its geometry and stiffness. Results support the feasibility of quantifying the frequency range of topographic amplification solely using ambient vibrations, rather than strong or weak ground motions. Additionally, comparisons have been made between a number of widely used experimental methods for quantifying topographic effects, such as the standard spectral ratio, median reference method, and horizontal-to-vertical spectral ratio. As a result, differences in the amplification and frequency range of topographic effects indicated by these methods highlight the importance of choosing a reference condition that is appropriate for the site-specific conditions and goals associated with an experimental topographic amplification study.
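    The standard spectral ratio mentioned in both records divides the Fourier amplitude spectrum at a site of interest (e.g. the crest) by that of a reference site (e.g. the toe). A minimal sketch on synthetic records, where the 5 Hz resonance and all amplitudes are invented for illustration:

```python
import numpy as np

fs = 100.0                        # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)      # 60 s of synthetic ambient noise
rng = np.random.default_rng(0)

toe = rng.normal(size=t.size)                     # reference-site (toe) record
crest = toe + 3.0 * np.sin(2 * np.pi * 5.0 * t)   # crest record amplified near 5 Hz (illustrative)

def amplitude_spectrum(x):
    # one-sided Fourier amplitude spectrum
    return np.abs(np.fft.rfft(x))

freqs = np.fft.rfftfreq(t.size, d=1 / fs)
ssr = amplitude_spectrum(crest) / amplitude_spectrum(toe)  # standard spectral ratio

peak_freq = freqs[np.argmax(ssr)]  # inferred resonance frequency of the "ridge"
```

    In practice the spectra are smoothed and averaged over many windows; the sketch keeps only the core ratio.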

  15. Evaluation of Normalization Methods to Pave the Way Towards Large-Scale LC-MS-Based Metabolomics Profiling Experiments

    PubMed Central

    Valkenborg, Dirk; Baggerman, Geert; Vanaerschot, Manu; Witters, Erwin; Dujardin, Jean-Claude; Burzykowski, Tomasz; Berg, Maya

    2013-01-01

    Combining liquid chromatography-mass spectrometry (LC-MS)-based metabolomics experiments that were collected over a long period of time remains problematic due to systematic variability between LC-MS measurements. Until now, most normalization methods for LC-MS data are model-driven, based on internal standards or intermediate quality control runs, where an external model is extrapolated to the dataset of interest. In the first part of this article, we evaluate several existing data-driven normalization approaches on LC-MS metabolomics experiments, which do not require the use of internal standards. According to variability measures, each normalization method performs relatively well, showing that the use of any normalization method will greatly improve data analysis originating from multiple experimental runs. In the second part, we apply cyclic-Loess normalization to a Leishmania sample. This normalization method allows the removal of systematic variability between two measurement blocks over time and maintains the differential metabolites. In conclusion, normalization allows for pooling datasets from different measurement blocks over time and increases the statistical power of the analysis, hence paving the way to increase the scale of LC-MS metabolomics experiments. From our investigation, we recommend data-driven normalization methods over model-driven normalization methods, if only a few internal standards were used. Moreover, data-driven normalization methods are the best option to normalize datasets from untargeted LC-MS experiments. PMID:23808607
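    As a concrete illustration of a data-driven normalization that requires no internal standards, the sketch below applies simple per-run median scaling to a toy intensity matrix; this is a deliberately simpler stand-in for the methods evaluated in the article (such as cyclic-Loess), and all values are invented:

```python
import numpy as np

# Toy intensity matrix: rows = metabolite features, columns = LC-MS runs.
# Runs 2 and 3 carry simulated systematic intensity shifts (2x and 0.7x).
rng = np.random.default_rng(1)
base = rng.lognormal(mean=5.0, sigma=1.0, size=(200, 1))
intensities = np.hstack([base * f for f in (1.0, 2.0, 0.7)])

def median_scale(x):
    """Data-driven normalization: divide each run (column) by its own median,
    then rescale by the overall median so the units stay comparable."""
    col_medians = np.median(x, axis=0)
    return x / col_medians * np.median(x)

normalized = median_scale(intensities)
# after scaling, every run shares the same median intensity
```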

  16. Developing the Precision Magnetic Field for the E989 Muon g-2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Matthias W.

    The experimental value of (g-2)_μ historically has been and remains an important probe into the Standard Model and proposed extensions. Previous measurements of (g-2)_μ exhibit a persistent statistical tension with calculations using the Standard Model, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g-2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The (g-2)_μ experimental implementation measures two spin precession frequencies defined by the magnetic field: proton precession and muon precession. The value of (g-2)_μ is derived from a relationship between the two frequencies. The precision of magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are then two undeniably important aspects of the experiment in minimizing uncertainty. This thesis details the methods employed to achieve the magnetic field goals and the results of that effort.

  17. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
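    The propagation of individual measurement uncertainties through a defining functional expression, mentioned above, is conventionally done with the first-order formula u_f² = Σ_i (∂f/∂x_i)² u_i² for uncorrelated inputs. A minimal numerical sketch using central-difference partials; the example function (dynamic pressure) and the uncertainty values are illustrative, not from the paper:

```python
import numpy as np

def propagate(f, x, u, h=1e-6):
    """First-order uncertainty propagation for uncorrelated inputs:
    u_f^2 = sum_i (df/dx_i)^2 * u_i^2, with central-difference partials."""
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    grads = np.empty_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = h * max(1.0, abs(x[i]))
        grads[i] = (f(x + step) - f(x - step)) / (2 * step[i])
    return float(np.sqrt(np.sum((grads * u) ** 2)))

# Illustrative example: dynamic pressure q = 0.5 * rho * v^2
q = lambda p: 0.5 * p[0] * p[1] ** 2
u_q = propagate(q, x=[1.2, 30.0], u=[0.01, 0.5])  # u(rho), u(v) are made up
```

    Correlated inputs, as treated in the paper, would add cross terms with the covariance matrix; the sketch covers only the uncorrelated case.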

  18. Data Mining of Macromolecular Structures.

    PubMed

    van Beusekom, Bart; Perrakis, Anastassis; Joosten, Robbie P

    2016-01-01

    The use of macromolecular structures is widespread for a variety of applications, from teaching protein structure principles all the way to ligand optimization in drug development. Applying data mining techniques on these experimentally determined structures requires a highly uniform, standardized structural data source. The Protein Data Bank (PDB) has evolved over the years toward becoming the standard resource for macromolecular structures. However, the process selecting the data most suitable for specific applications is still very much based on personal preferences and understanding of the experimental techniques used to obtain these models. In this chapter, we will first explain the challenges with data standardization, annotation, and uniformity in the PDB entries determined by X-ray crystallography. We then discuss the specific effect that crystallographic data quality and model optimization methods have on structural models and how validation tools can be used to make informed choices. We also discuss specific advantages of using the PDB_REDO databank as a resource for structural data. Finally, we will provide guidelines on how to select the most suitable protein structure models for detailed analysis and how to select a set of structure models suitable for data mining.

  19. Dirt feedlot residue experiments. Quarterly progress report, December 1977--March 1978

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turk, M.

    1978-04-01

    Performance of the mobile fermentation system is reported. It made use of aged pen residue at a nominal loading rate of 0.25 lb volatile solids/ft³/day with a 10-day retention time and a fermentation temperature of 57°C. Results of an experimental cattle feeding trial utilizing the protein in the fermentor liquid effluent as a replacement for standard protein supplements were encouraging. The evaluation of the capture efficiency of the system centrifuge, both with and without a chemical flocculant, was completed. An experimental cattle feeding trial utilizing the protein fermentation product (PFP) harvested by the centrifuge as a replacement for the standard protein supplement was initiated. The characterization of the cattle residues found in various cattle pens, feedlots, and locations was continued. An investigation was initiated into methods of separating the organic content of the feedlot residue from the sand and grit content. (JGB)

  20. Boundary Electron and Beta Dosimetry-Quantification of the Effects of Dissimilar Media on Absorbed Dose

    NASA Astrophysics Data System (ADS)

    Nunes, Josane C.

    1991-02-01

    This work quantifies the changes effected in electron absorbed dose to a soft-tissue equivalent medium when part of this medium is replaced by a material that is not soft-tissue equivalent. That is, heterogeneous dosimetry is addressed. Radionuclides which emit beta particles are the electron sources of primary interest. They are used in brachytherapy and in nuclear medicine: for example, beta-ray applicators made with strontium-90 are employed in certain ophthalmic treatments and iodine-131 is used to test thyroid function. More recent medical procedures under development which involve beta radionuclides include radioimmunotherapy and radiation synovectomy; the first is a cancer modality and the second deals with the treatment of rheumatoid arthritis. In addition, the possibility of skin surface contamination exists whenever there is handling of radioactive material. Determination of absorbed doses in the examples of the preceding paragraph requires considering boundaries of interfaces. Whilst the Monte Carlo method can be applied to boundary calculations, for routine work such as in clinical situations, or in other circumstances where doses need to be determined quickly, analytical dosimetry would be invaluable. Unfortunately, few analytical methods for boundary beta dosimetry exist. Furthermore, the accuracy of results from both Monte Carlo and analytical methods has to be assessed. Although restricted to one radionuclide, phosphorus-32, the experimental data obtained in this work serve several purposes, one of which is to provide standards against which calculated results can be tested. The experimental data also contribute to the relatively sparse set of published boundary dosimetry data. At the same time, they may be useful in developing analytical boundary dosimetry methodology. The first application of the experimental data is demonstrated.
Results from two Monte Carlo codes and two analytical methods, which were developed elsewhere, are compared with experimental data. Monte Carlo results compare satisfactorily with experimental results for the boundaries considered. The agreement with experimental results for air interfaces is of particular interest because of discrepancies reported previously by another investigator who used data obtained from a different experimental technique. Results from one of the analytical methods differ significantly from the experimental data obtained here. The second analytical method provided data which approximate experimental results to within 30%. This is encouraging but it remains to be determined whether this method performs equally well for other source energies.

  1. Measurement of two-photon-absorption spectra through nonlinear fluorescence produced by a line-shaped excitation beam.

    PubMed

    Hasani, E; Parravicini, J; Tartara, L; Tomaselli, A; Tomassini, D

    2018-05-01

    We propose an innovative experimental approach to estimate the two-photon absorption (TPA) spectrum of a fluorescent material. Our method extends the standard indirect fluorescence-based method for TPA measurement by employing a line-shaped excitation beam, generating a line-shaped fluorescence emission. Such a configuration, which requires a relatively high amount of optical power, yields a greatly increased fluorescence signal, thus avoiding the photon-counting detection devices usually used in these measurements and allowing the use of detectors such as charge-coupled device (CCD) cameras. The method is finally tested on a fluorescein isothiocyanate sample, whose TPA spectrum, measured with the proposed technique, is compared with the TPA spectra reported in the literature, confirming the validity of our experimental approach. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.

  2. Incompressible Navier-Stokes Computations with Heat Transfer

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan; Rogers, Stuart; Kutler, Paul (Technical Monitor)

    1994-01-01

    The existing pseudocompressibility method for the system of incompressible Navier-Stokes equations is extended to heat transfer problems by including the energy equation. The solution method is based on the pseudocompressibility approach and uses an implicit-upwind differencing scheme together with the Gauss-Seidel line relaxation method. Current computations use the one-equation Baldwin-Barth turbulence model, which is derived from a simplified form of the standard k-epsilon model equations. Both forced and natural convection problems are examined. Numerical results from turbulent reattaching flow behind a backward-facing step will be compared against experimental measurements for the forced convection case. The validity of the Boussinesq approximation to simplify the buoyancy force term will be investigated. The natural convective flow structure generated by heat transfer in a vertical rectangular cavity will be studied. The numerical results will be compared with experimental measurements by Morrison and Tran.

  3. Comparison between amperometric and true potentiometric end-point detection in the determination of water by the Karl Fischer method.

    PubMed

    Cedergren, A

    1974-06-01

    A rapid and sensitive method using true potentiometric end-point detection has been developed and compared with the conventional amperometric method for Karl Fischer determination of water. The effect of the sulphur dioxide concentration on the shape of the titration curve is shown. By using kinetic data it was possible to calculate the course of titrations and make comparisons with those found experimentally. The results prove that the main reaction is the slow step, both in the amperometric and the potentiometric method. Results obtained in the standardization of the Karl Fischer reagent showed that the potentiometric method, including titration to a preselected potential, gave a standard deviation of 0.001(1) mg of water per ml, the amperometric method using extrapolation 0.002(4) mg of water per ml and the amperometric titration to a pre-selected diffusion current 0.004(7) mg of water per ml. Theories and results dealing with dilution effects are presented. The time of analysis was 1-1.5 min for the potentiometric and 4-5 min for the amperometric method using extrapolation.

  4. Determination of Porosity in Shale by Double Headspace Extraction GC Analysis.

    PubMed

    Zhang, Chun-Yun; Li, Teng-Fei; Chai, Xin-Sheng; Xiao, Xian-Ming; Barnes, Donald

    2015-11-03

    This paper reports on a novel method for the rapid determination of the shale porosity by double headspace extraction gas chromatography (DHE-GC). Ground core samples of shale were placed into headspace vials and DHE-GC measurements of released methane gas were performed at a given time interval. A linear correlation between shale porosity and the ratio of consecutive GC signals was established both theoretically and experimentally by comparing with the results from the standard helium pycnometry method. The results showed that (a) the porosity of ground core samples of shale can be measured within 30 min; (b) the new method is not significantly affected by particle size of the sample; (c) the uncertainties of measured porosities of nine shale samples by the present method range from 0.31 to 0.46 p.u.; and (d) the results obtained by the DHE-GC method are in a good agreement with those from the standard helium pycnometry method. In short, the new DHE-GC method is simple, rapid, and accurate, making it a valuable tool for shale gas-related research and applications.

  5. The Same or Not the Same: Equivalence as an Issue in Educational Research

    NASA Astrophysics Data System (ADS)

    Lewis, Scott E.; Lewis, Jennifer E.

    2005-09-01

    In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
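    The two one-sided t tests (TOST) procedure described above declares two groups equivalent when the mean difference is shown to lie inside a pre-chosen interval [−Δ, +Δ], i.e. when both one-sided tests reject. A minimal sketch with simulated scores; the margin and data are hypothetical, not the article's SAT/ACT values:

```python
import numpy as np
from scipy import stats

# Simulated score data (illustrative): "treatment" differs from
# "control" by a negligible 1-point shift.
rng = np.random.default_rng(2)
control = rng.normal(loc=500.0, scale=50.0, size=200)
treatment = control + 1.0
delta = 25.0  # equivalence margin, chosen a priori (hypothetical value)

def tost(a, b, delta, alpha=0.05):
    """Two one-sided t tests: groups are declared equivalent when the mean
    difference is significantly greater than -delta AND significantly less
    than +delta (both one-sided p-values below alpha)."""
    diff = np.mean(a) - np.mean(b)
    se = np.sqrt(np.var(a, ddof=1) / len(a) + np.var(b, ddof=1) / len(b))
    df = len(a) + len(b) - 2  # simple df choice, for illustration
    p_lower = stats.t.sf((diff + delta) / se, df)   # H0: diff <= -delta
    p_upper = stats.t.cdf((diff - delta) / se, df)  # H0: diff >= +delta
    return max(p_lower, p_upper) < alpha

equivalent = tost(control, treatment, delta)  # True: means differ by only 1 point
```

    Note the reversal of roles relative to a standard t test: here rejecting both null hypotheses is the positive claim of equivalence, which is why low power cannot masquerade as "no difference".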

  6. Density-cluster NMA: A new protein decomposition technique for coarse-grained normal mode analysis.

    PubMed

    Demerdash, Omar N A; Mitchell, Julie C

    2012-07-01

    Normal mode analysis has emerged as a useful technique for investigating protein motions on long time scales. This is largely due to the advent of coarse-graining techniques, particularly Hooke's Law-based potentials and the rotational-translational blocking (RTB) method for reducing the size of the force-constant matrix, the Hessian. Here we present a new method for domain decomposition for use in RTB that is based on hierarchical clustering of atomic density gradients, which we call Density-Cluster RTB (DCRTB). The method reduces the number of degrees of freedom by 85-90% compared with the standard blocking approaches. We compared the normal modes from DCRTB against standard RTB using 1-4 residues in sequence in a single block, with good agreement between the two methods. We also show that Density-Cluster RTB and standard RTB perform well in capturing the experimentally determined direction of conformational change. Significantly, we report superior correlation of DCRTB with B-factors compared with 1-4 residue per block RTB. Finally, we show significant reduction in computational cost for Density-Cluster RTB that is nearly 100-fold for many examples. Copyright © 2012 Wiley Periodicals, Inc.
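    For a flavor of the decomposition step, the sketch below forms rigid blocks by hierarchical clustering; note that it clusters plain atomic coordinates, a deliberate simplification of the paper's density-gradient criterion, and all coordinates are invented:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy coordinates for a 12-atom chain with two well-separated spatial lobes
# (invented data; DCRTB clusters atomic density gradients, not raw coordinates).
rng = np.random.default_rng(3)
lobe_a = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(6, 3))
lobe_b = rng.normal(loc=[15.0, 0.0, 0.0], scale=1.0, size=(6, 3))
coords = np.vstack([lobe_a, lobe_b])

Z = linkage(coords, method="average")            # agglomerative clustering
blocks = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 rigid blocks
# in an RTB-style reduction, each block then contributes only 6 rigid-body
# degrees of freedom (3 translations + 3 rotations) to the reduced Hessian
```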

  7. Heme Iron Content in Lamb Meat Is Differentially Altered upon Boiling, Grilling, or Frying as Assessed by Four Distinct Analytical Methods

    PubMed Central

    Pourkhalili, Azin; Rahimi, Ebrahim

    2013-01-01

    Lamb meat is regarded as an important source of highly bioavailable iron (heme iron) in the Iranian diet. The main objective of this study is to evaluate the effect of traditional cooking methods on the iron changes in lamb meat. Four published experimental methods for the determination of heme iron were assessed analytically and statistically. Samples were selected from lambs' loin. Standard methods (AOAC) were used for proximate analysis. For measuring heme iron, the results of the four experimental methods were compared regarding their compliance with the Ferrozine method, which was used for the determination of nonheme iron. Among the three cooking methods, the lowest total iron and heme iron were found with the boiling method. The heme iron proportions of the total iron in raw, boiled, and grilled lamb meat were 65.70%, 67.75%, and 76.01%, respectively. In comparing the methods for measuring heme iron, the method in which the heme extraction solution was composed of 90% acetone, 18% water, and 2% hydrochloric acid was more appropriate and more correlated with the heme iron content calculated as the difference between total iron and nonheme iron. PMID:23737716

  8. Heme iron content in lamb meat is differentially altered upon boiling, grilling, or frying as assessed by four distinct analytical methods.

    PubMed

    Pourkhalili, Azin; Mirlohi, Maryam; Rahimi, Ebrahim

    2013-01-01

    Lamb meat is regarded as an important source of highly bioavailable iron (heme iron) in the Iranian diet. The main objective of this study is to evaluate the effect of traditional cooking methods on the iron changes in lamb meat. Four published experimental methods for the determination of heme iron were assessed analytically and statistically. Samples were selected from lambs' loin. Standard methods (AOAC) were used for proximate analysis. For measuring heme iron, the results of the four experimental methods were compared regarding their compliance with the Ferrozine method, which was used for the determination of nonheme iron. Among the three cooking methods, the lowest total iron and heme iron were found with the boiling method. The heme iron proportions of the total iron in raw, boiled, and grilled lamb meat were 65.70%, 67.75%, and 76.01%, respectively. In comparing the methods for measuring heme iron, the method in which the heme extraction solution was composed of 90% acetone, 18% water, and 2% hydrochloric acid was more appropriate and more correlated with the heme iron content calculated as the difference between total iron and nonheme iron.
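    The reference calculation used in both records obtains heme iron as the difference between total iron and the Ferrozine (nonheme) result, then expresses it as a share of total iron. A minimal sketch with invented concentrations:

```python
# Illustrative values in mg iron per 100 g meat (not the study's measurements)
total_iron = 2.0     # from total-iron analysis
nonheme_iron = 0.6   # from the Ferrozine method

heme_iron = total_iron - nonheme_iron       # heme iron by difference
heme_share = 100 * heme_iron / total_iron   # percent of total iron that is heme

# for comparison, the study reports heme shares of 65.70% (raw),
# 67.75% (boiled), and 76.01% (grilled)
```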

  9. BiofOmics: a Web platform for the systematic and standardized collection of high-throughput biofilm data.

    PubMed

    Lourenço, Anália; Ferreira, Andreia; Veiga, Nuno; Machado, Idalina; Pereira, Maria Olivia; Azevedo, Nuno F

    2012-01-01

    Consortia of microorganisms, commonly known as biofilms, are attracting much attention from the scientific community due to their impact on human activity. As biofilm research grows into a data-intensive discipline, suitable bioinformatics approaches become essential to manage and validate individual experiments and to execute large-scale inter-laboratory comparisons. However, biofilm data are spread across ad hoc, non-standardized individual files, so data interchange among researchers, or any attempt at cross-laboratory experimentation or analysis, is rarely possible or even attempted. This paper presents BiofOmics, the first publicly accessible Web platform specialized in the management and analysis of data derived from biofilm high-throughput studies. The aim is to promote data interchange across laboratories, support collaborative experiments, and enable the development of bioinformatics tools for processing and analyzing the increasing volumes of experimental biofilm data being generated. BiofOmics' data deposition facility enforces data structuring and standardization, supported by controlled vocabulary. Researchers are responsible for describing their experiments, results and conclusions; BiofOmics' curators interact with submitters only to enforce data structuring and the use of controlled vocabulary. BiofOmics' search facility then makes the profile and data associated with a submitted study publicly available, so that any researcher can profit from these standardization efforts to compare similar studies, generate new hypotheses to be tested, or extend the conditions tested in the study. BiofOmics' novelty lies in its support for standardized data deposition, the availability of computerizable data files and the free-of-charge dissemination of biofilm studies across the community. Hopefully, this will open promising research possibilities, namely the comparison of results between different laboratories, the assessment of the reproducibility of methods within and between laboratories, and the development of guidelines and standardized protocols for biofilm formation operating procedures and analytical methods.

  10. Effects of estrogen on functional and neurological recovery after spinal cord injury: An experimental study with rats

    PubMed Central

    Letaif, Olavo Biraghi; Cristante, Alexandre Fogaça; de Barros Filho, Tarcísio Eloy Pessoa; Ferreira, Ricardo; dos Santos, Gustavo Bispo; da Rocha, Ivan Dias; Marcon, Raphael Martus

    2015-01-01

    OBJECTIVES: To evaluate the functional and histological effects of estrogen as a neuroprotective agent after a standard experimentally induced spinal cord lesion. METHODS: In this experimental study, 20 male Wistar rats were divided into two groups: one group with rats undergoing spinal cord injury (SCI) at T10 and receiving estrogen therapy with 17-beta estradiol (4 mg/kg) immediately following the injury and after the placement of skin sutures, and a control group with rats only subjected to SCI. A moderate standard experimentally induced SCI was produced using a computerized device that dropped a weight on the rat's spine from a height of 12.5 mm. Functional recovery was assessed with the Basso, Beattie and Bresnahan scale on the 2nd, 7th, 14th, 21st, 28th, 35th and 42nd days after injury and by quantifying the motor-evoked potential on the 42nd day after injury. Histopathological evaluation of the SCI area was performed after euthanasia on the 42nd day. RESULTS: The experimental group showed significantly greater functional improvement from the 28th to the 42nd day of observation compared to the control group, as well as statistically significant improvements in the motor-evoked potential. Histomorphometric evaluation showed better neurological recovery in the experimental group with respect to the proportion and diameter of the quantified nerve fibers. CONCLUSIONS: Estrogen administration provided benefits in neurological and functional motor recovery in rats with SCI beginning on the 28th day after injury. PMID:26598084

  11. Preparation and application of a tyre-based activated carbon solid phase extraction of heavy metals in wastewater samples

    NASA Astrophysics Data System (ADS)

    Dimpe, K. Mogolodi; Ngila, J. C.; Nomngongo, Philiswa N.

    2018-06-01

    In this paper, a tyre-based activated carbon solid phase extraction (SPE) method was successfully developed for the simultaneous preconcentration of metal ions in model and real water samples before their determination by flame atomic absorption spectrometry (FAAS). The carbon was chemically activated, and the tyre-based activated carbon was used as the SPE sorbent. The prepared activated carbon was characterized by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) surface area analysis, and Fourier transform infrared (FTIR) spectroscopy. Moreover, optimization of the proposed method was performed with a two-level full factorial design (FFD). The FFD was chosen in order to fully investigate the effects of the experimental variables (pH, eluent concentration and sample flow rate) that significantly influence the preconcentration procedure. In this model, individual factors are considered along with their interactions. In addition, modelling of the experiments allowed simultaneous variation of all experimental factors investigated and reduced the required time and number of experimental runs, which consequently reduced the overall costs. Under optimized conditions, the limits of detection and quantification (LOD and LOQ) ranged from 0.66 to 2.12 μg L-1 and from 1.78 to 5.34 μg L-1, respectively, and an enrichment factor of 25 was obtained. The developed SPE/FAAS method was validated using CWW-TM-A and CWW-TM-B wastewater standard reference materials (SRMs). The procedure proved accurate, with satisfactory recoveries ranging from 92 to 99%. The precision (repeatability) was lower than 4% in terms of relative standard deviation (%RSD). The developed method proved suitable for routine analysis of heavy metals in domestic and industrial wastewater samples, and can be used as a final step in the wastewater treatment process (before discharge to rivers) to keep water bodies free from toxic metals.
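    The two-level full factorial design mentioned above can be illustrated with a minimal sketch; the coded factor levels below are hypothetical, not the study's actual settings:

```python
from itertools import product

# Hypothetical coded levels (-1 = low, +1 = high) for the three factors
# named in the abstract; the real values used in the study are not given.
factors = {
    "pH": (-1, +1),
    "eluent_conc": (-1, +1),
    "flow_rate": (-1, +1),
}

# A two-level full factorial design enumerates every combination of
# levels, so k factors give 2**k experimental runs.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for run in design:
    print(run)
print(f"{len(design)} runs for {len(factors)} factors")
```

    With k factors at two levels the design always has 2^k runs (here 2^3 = 8), and all main effects and interactions can be estimated from the same set of runs.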

  12. Geographic Gossip: Efficient Averaging for Sensor Networks

    NASA Astrophysics Data System (ADS)

    Dimakis, Alexandros D. G.; Sarwate, Anand D.; Wainwright, Martin J.

    Gossip algorithms for distributed computation are attractive due to their simplicity, distributed nature, and robustness in noisy and uncertain environments. However, using standard gossip algorithms can lead to a significant waste in energy by repeatedly recirculating redundant information. For realistic sensor network model topologies like grids and random geometric graphs, the inefficiency of gossip schemes is related to the slow mixing times of random walks on the communication graph. We propose and analyze an alternative gossiping scheme that exploits geographic information. By utilizing geographic routing combined with a simple resampling method, we demonstrate substantial gains over previously proposed gossip protocols. For regular graphs such as the ring or grid, our algorithm improves standard gossip by factors of $n$ and $\sqrt{n}$ respectively. For the more challenging case of random geometric graphs, our algorithm computes the true average to accuracy $\epsilon$ using $O(\frac{n^{1.5}}{\sqrt{\log n}} \log \epsilon^{-1})$ radio transmissions, which yields a $\sqrt{\frac{n}{\log n}}$ factor improvement over standard gossip algorithms. We illustrate these theoretical results with experimental comparisons between our algorithm and standard methods as applied to various classes of random fields.
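    The pairwise-averaging primitive that standard gossip schemes (and the geographic variant above) build on can be sketched as follows; the ring topology, node count and round count are illustrative only:

```python
import random

def gossip_average(values, rounds=20000, seed=0):
    """Standard pairwise gossip on a ring: in each round, a random node
    averages its value with a random ring neighbour. Pairwise averaging
    conserves the sum, so all values drift toward the global average."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = (i + rng.choice((-1, 1))) % n   # pick a ring neighbour
        x[i] = x[j] = (x[i] + x[j]) / 2     # pairwise averaging step
    return x

vals = [float(v) for v in range(8)]
out = gossip_average(vals)
true_avg = sum(vals) / len(vals)
spread = max(abs(v - true_avg) for v in out)
print(f"true average {true_avg}, max deviation after gossip {spread:.2e}")
```

    The slow mixing the abstract refers to shows up here as the large number of rounds needed even for a small ring; geographic gossip cuts that cost by routing values over longer distances instead of only between neighbours.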

  13. Comparative analysis of international standards for the fatigue testing of posterior spinal fixation systems: the importance of preload in ISO 12189.

    PubMed

    La Barbera, Luigi; Ottardi, Claudia; Villa, Tomaso

    2015-10-01

    Preclinical evaluation of the mechanical reliability of fixation devices is mandatory before their introduction to market. There are two standardized protocols for preclinical testing of spinal implants. The American Society for Testing and Materials (ASTM) recommends the F1717 standard, which describes a vertebrectomy condition that is relatively simple to implement, whereas the International Organization for Standardization (ISO) suggests the 12189 standard, which describes a more complex, physiological anterior-support-based setup. Moreover, ASTM F1717 is nowadays well established, whereas ISO 12189 has received little attention: a few studies have tried to describe the ISO experimental procedure accurately through numerical models, but these studies entirely neglect the recommended precompression step. This study aimed to build a reliable, validated numerical model capable of describing the stress on the rods of a spinal fixator assembled according to the ISO 12189 standard procedure; such a model would more adequately represent the in vitro testing condition. The study used finite element (FE) simulations and experimental validation testing. An FE model of the ISO setup was built to calculate the stress on the rods, and the simulation was validated by comparison with experimental strain gauge measurements. The same fixator had previously been virtually mounted in an L2-L4 FE model of the lumbar spine, and stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The FE predictions and experimental measurements are in good agreement, confirming the suitability of the FE method for evaluating the stresses in the device. The initial precompression induces a significant extension of the assembled construct. As the applied load increases, the initial extension is gradually compensated, so that at peak load the rods are bent in flexion: the final predicted stress is thus reduced to about 50% of that of the previous model in which precompression was not considered. Neglecting the initial preload due to the assembly of the overall construct according to the ISO 12189 standard could therefore lead to an overestimation of the stress on the rods of up to 50%. To correctly describe the state of stress on a posterior spinal fixator tested according to the ISO procedure, it is important to take into account the initial preload due to the assembly of the overall construct. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Evaluation of selective control information detection scheme in orthogonal frequency division multiplexing-based radio-over-fiber and visible light communication links

    NASA Astrophysics Data System (ADS)

    Dalarmelina, Carlos A.; Adegbite, Saheed A.; Pereira, Esequiel da V.; Nunes, Reginaldo B.; Rocha, Helder R. O.; Segatto, Marcelo E. V.; Silva, Jair A. L.

    2017-05-01

    Block-level detection is required to decode what may be classified as selective control information (SCI), such as the control format indicator in 4G long-term evolution systems. Using optical orthogonal frequency division multiplexing over radio-over-fiber (RoF) links, we report an experimental evaluation of an SCI detection scheme based on a time-domain correlation (TDC) technique, compared with the conventional maximum likelihood (ML) approach. It is shown that the TDC method improves detection performance over the ML method on both 20 and 40 km standard single mode fiber (SSMF) links. We also report a performance analysis of the TDC scheme in noisy visible light communication channel models after propagation through 40 km of SSMF. Experimental and simulation results confirm that the TDC method is attractive for practical orthogonal frequency division multiplexing-based RoF and fiber-wireless systems. Another key benefit of the TDC method is that, unlike the ML method, it requires no channel estimation.
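    A minimal sketch of block-level detection by correlation against a known codebook; the codebook, block length and channel model below are hypothetical and far simpler than an actual LTE control format indicator:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical codebook: 4 candidate control-information sequences
# (stand-ins for, e.g., CFI codewords), as +/-1 chips of length 32.
codebook = np.sign(rng.standard_normal((4, 32)))

# Transmit codeword 2 through an additive-noise channel.
tx = 2
rx = codebook[tx] + 0.5 * rng.standard_normal(32)

# Time-domain correlation detection: correlate the received block with
# every candidate codeword and pick the largest correlation.
scores = codebook @ rx
detected = int(np.argmax(scores))
print(f"detected codeword: {detected}")
```

    Note that this detector needs only the codebook, not an estimate of the channel, which is the practical advantage the abstract attributes to the TDC scheme.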

  15. The SPR detection of Salmonella enteritidis in food using aptamers as recognition elements

    NASA Astrophysics Data System (ADS)

    Di, W. T.; Du, X. W.; Pan, M. F.; Wang, J. P.

    2017-09-01

    In this experiment, a fast, accurate, non-destructive, label-free and simple detection method for Salmonella enteritidis in food was established using the BI-3000 surface plasmon resonance (SPR) biosensor. This article establishes, for the first time, an SPR method using a nucleic acid aptamer as the recognition element to detect Salmonella enteritidis in food. The experimental conditions were screened and the experimental scheme was validated and applied. The best flow rate was 5 μL/min, the best aptamer concentration was 180 mM, and the best regenerating solution was 20 mM NaOH. The method showed almost no cross-reactivity. Besides, we established a standard curve relating Salmonella enteritidis concentration to the SPR signal, with a detection limit of 2 cfu/mL. Finally, we tested samples of chicken, pork, shrimp and fish purchased from supermarkets. The method has the advantages of short analysis time, low detection limit and easy operation, and can be used for large numbers of food samples.

  16. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods

    PubMed Central

    Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.

    2012-01-01

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394

  17. Recovering the dynamics of root growth and development using novel image acquisition and analysis methods.

    PubMed

    Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein I; Bennett, Malcolm J; Pridmore, Tony P

    2012-06-05

    Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.

  18. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
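    The dilution-series evaluation described above can be sketched as follows; the target fractions, replicate counts and noise level are simulated for an imaginary counting method, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dilution series: four target fractions, four replicate
# counts each, from an imaginary counting method (assumed 3% noise).
fractions = np.repeat([1.0, 0.75, 0.5, 0.25], 4)
true_stock = 1.0e6                                   # cells/mL (assumed)
counts = rng.normal(true_stock * fractions, 0.03 * true_stock * fractions)

# Proportionality: counts should scale linearly through the origin with
# dilution fraction. Fit y = b*x and report R^2 for that fit.
b = counts @ fractions / (fractions @ fractions)
resid = counts - b * fractions
r2 = 1 - resid @ resid / ((counts - counts.mean()) @ (counts - counts.mean()))

# Precision: coefficient of variation (CV) within each dilution level.
cvs = [counts[fractions == f].std(ddof=1) / counts[fractions == f].mean()
       for f in np.unique(fractions)]

print(f"slope={b:.3e}  R^2={r2:.4f}  max CV={max(cvs):.3f}")
```

    Two counting methods can be compared by running the same design on both and comparing their proportionality (R^2, slope) and repeatability (CV) indicators, with no reference material required.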

  19. An assessment of the liquid-gas partitioning behavior of major wastewater odorants using two comparative experimental approaches: liquid sample-based vaporization vs. impinger-based dynamic headspace extraction into sorbent tubes.

    PubMed

    Iqbal, Mohammad Asif; Kim, Ki-Hyun; Szulejko, Jan E; Cho, Jinwoo

    2014-01-01

    The gas-liquid partitioning behavior of major odorants (acetic acid, propionic acid, isobutyric acid, n-butyric acid, i-valeric acid, n-valeric acid, hexanoic acid, phenol, p-cresol, indole, skatole, and toluene (as a reference)) commonly found in microbially digested wastewaters was investigated by two experimental approaches. Firstly, a simple vaporization method was applied to measure the target odorants dissolved in liquid samples with the aid of sorbent tube/thermal desorption/gas chromatography/mass spectrometry. As an alternative method, an impinger-based dynamic headspace sampling method was also explored to measure the partitioning of target odorants between the gas and liquid phases with the same detection system. The relative extraction efficiency (in percent) of the odorants by dynamic headspace sampling was estimated against the calibration results derived by the vaporization method. Finally, the concentrations of the major odorants in real digested wastewater samples were also analyzed using both analytical approaches. Through parallel application of the two experimental methods, we aimed to develop an approach capable of assessing the liquid-to-gas phase partitioning behavior of major odorants in a complex wastewater system. The relative sensitivity of the two methods, expressed as the ratio of liquid-standard calibration response factors (RFvap/RFimp) between the vaporization and impinger-based calibrations, varied widely from 981 (skatole) to 6,022 (acetic acid). This relative sensitivity highlights the rather low extraction efficiency of the highly soluble, more acidic odorants from wastewater samples in dynamic headspace sampling.

  20. Spectral data compression using weighted principal component analysis with consideration of human visual system and light sources

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Wan, Xiaoxia; Li, Junfeng; Liu, Qiang; Liang, Jingxing; Li, Chan

    2016-10-01

    This paper proposes two weight functions based on principal component analysis (PCA) to preserve more colorimetric information in the spectral data compression process. One weight function consists of the CIE XYZ color-matching functions, representing the characteristics of the human visual system, while the other combines the CIE XYZ color-matching functions with the relative spectral power distribution of the CIE standard illuminant D65. The two proposed methods were tested by compressing and reconstructing the reflectance spectra of 1600 glossy Munsell color chips and 1950 Natural Color System color chips as well as six multispectral images. Performance was evaluated by the mean color differences under the CIE 1931 standard colorimetric observer and the CIE standard illuminants D65 and A. The mean root mean square errors between the original and reconstructed spectra were also calculated. The experimental results show that the two proposed methods significantly outperform standard PCA and two other weighted PCA methods in colorimetric reconstruction accuracy, with only very slight degradation in spectral reconstruction accuracy. In addition, the weight function including the CIE standard illuminant D65 improves colorimetric reconstruction accuracy compared to the weight function without it.
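    A minimal sketch of the weighted-PCA idea: scale each wavelength band by a visual-system-motivated weight before PCA, so that reconstruction error is pushed into bands that matter less colorimetrically. The random spectra and bell-shaped weight below are synthetic stand-ins for real reflectances and the CIE colour-matching functions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reflectance spectra: 50 samples over 31 wavelength bands.
spectra = rng.random((50, 31))
# Bell-shaped weight standing in for the combined colour-matching
# functions (optionally times an illuminant SPD); illustrative only.
w = np.exp(-0.5 * ((np.arange(31) - 15) / 6.0) ** 2)

def weighted_pca(X, w, k=3):
    """PCA on weighted spectra: scale each band by w, keep the top-k
    principal components, reconstruct, then undo the weighting."""
    Xw = X * w
    mean = Xw.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xw - mean, full_matrices=False)
    recon_w = mean + (U[:, :k] * S[:k]) @ Vt[:k]
    return recon_w / w                      # back to unweighted spectra

recon = weighted_pca(spectra, w)
rmse = np.sqrt(((spectra - recon) ** 2).mean())
print(f"RMSE with 3 components: {rmse:.4f}")
```

    Because the SVD minimizes error in the weighted space, heavily weighted (visually important) bands are reconstructed more faithfully at the cost of lightly weighted ones, which is the trade-off the abstract reports.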

  1. Simple and efficient machine learning frameworks for identifying protein-protein interaction relevant articles and experimental methods used to study the interactions.

    PubMed

    Agarwal, Shashank; Liu, Feifan; Yu, Hong

    2011-10-03

    Protein-protein interaction (PPI) is an important biomedical phenomenon. Automatically detecting PPI-relevant articles and identifying methods that are used to study PPI are important text mining tasks. In this study, we have explored domain independent features to develop two open source machine learning frameworks. One performs binary classification to determine whether the given article is PPI relevant or not, named "Simple Classifier", and the other one maps the PPI relevant articles with corresponding interaction method nodes in a standardized PSI-MI (Proteomics Standards Initiative-Molecular Interactions) ontology, named "OntoNorm". We evaluated our system in the context of BioCreative challenge competition using the standardized data set. Our systems are amongst the top systems reported by the organizers, attaining 60.8% F1-score for identifying relevant documents, and 52.3% F1-score for mapping articles to interaction method ontology. Our results show that domain-independent machine learning frameworks can perform competitively well at the tasks of detecting PPI relevant articles and identifying the methods that were used to study the interaction in such articles. Simple Classifier is available at http://sourceforge.net/p/simpleclassify/home/ and OntoNorm at http://sourceforge.net/p/ontonorm/home/.

  2. A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.

    PubMed

    Barlow, Euan; Mulholland, Anthony J

    2011-11-01

    The fractional Fourier transform is proposed here as a model based, signal processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of using time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal to noise ratios.
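    A much-simplified sketch of the model-matching idea above: compare a noisy measured response against a bank of theoretical responses and keep the best match. The damped-sinusoid "bubble response" is a toy stand-in (Minnaert-like frequency scaling), and the comparison is a plain matched-filter inner product rather than the paper's fractional Fourier transform plus two-dimensional cross-correlation:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10e-3, 4000)          # 10 ms observation window

def bubble_response(radius_mm):
    """Toy theoretical response: a damped sinusoid whose frequency falls
    with bubble radius (f*R ~ 3.26 m/s, Minnaert-like). Illustrative only."""
    f = 3.26 / (radius_mm * 1e-3)
    return np.exp(-300.0 * t) * np.sin(2 * np.pi * f * t)

# "Experimental" signal: the 2 mm response buried in noise.
signal = bubble_response(2.0) + 0.3 * rng.standard_normal(t.size)

# Score each candidate radius by its inner product with the signal
# (a matched filter at known alignment); the largest score wins.
candidates = [1.0, 1.5, 2.0, 2.5, 3.0]
scores = [float(signal @ bubble_response(r)) for r in candidates]
best = candidates[int(np.argmax(scores))]
print(f"estimated bubble radius: {best} mm")
```

    The paper's contribution is to make this matching step more robust at low signal-to-noise ratios by comparing fractional-Fourier-domain representations instead of raw time series.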

  3. Animal experiments, vital forces and courtrooms: Mateu Orfila, François Magendie and the study of poisons in nineteenth-century France.

    PubMed

    Bertomeu-Sánchez, José Ramón

    2012-01-01

    The paper follows the lives of Mateu Orfila and François Magendie in early nineteenth-century Paris, focusing on their common interest in poisons. The first part deals with the striking similarities of their early careers: their medical training, their popular private lectures, and their first publications. The next section explores their experimental work on poisons by analyzing their views on physical and vital forces in living organisms and their ideas about the significance of animal experiments in medicine. The last part describes their contrasting research on the absorption of poisons and the divergences in their approaches, methods, aims, standards of proof, and intended audiences. The analysis highlights the connections between nineteenth-century courtrooms and experimental laboratories, and shows how forensic practice not only prompted animal experimentation but also provided a substantial body of information and new research methods for dealing with major theoretical issues like the absorption of poisons.

  4. Design Guidelines for Quiet Fans and Pumps for Space Vehicles

    NASA Technical Reports Server (NTRS)

    Lovell, John S.; Magliozzi, Bernard

    2008-01-01

    This document presents guidelines for the design of quiet fans and pumps of the class used on space vehicles. A simple procedure is presented for the prediction of fan noise over the meaningful frequency spectrum. A section also presents general design criteria for axial flow fans, squirrel cage fans, centrifugal fans, and centrifugal pumps. The basis for this report is an experimental program conducted by Hamilton Standard under NASA Contract NAS 9-12457. The derivations of the noise predicting methods used in this document are explained in Hamilton Standard Report SVHSER 6183, "Fan and Pump Noise Control," dated May 1973 (6).

  5. Low-energy elastic electron scattering from furan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khakoo, M. A.; Muse, J.; Ralphs, K.

    We report normalized experimental and theoretical differential cross sections for elastic electron scattering by C4H4O (furan) molecules from a collaborative project between several Brazilian theoretical groups and an experimental group at California State Fullerton, USA. The measurements are obtained by using the relative flow method with helium as the standard gas and a thin aperture target gas collimating source. The relative flow method is applied without the restriction imposed by the relative flow pressure condition on helium and the unknown gas. The experimental data were taken at incident electron energies of 1, 1.5, 1.73, 2, 2.7, 3, 5, 7, 10, 20, 30, and 50 eV and covered the angular range between 10° and 130°. The measurements verify observed π* shape resonances at 1.65±0.05 eV and 3.10±0.05 eV scattering energies, in good agreement with the transmission electron data of Modelli and Burrow [J. Phys. Chem. A 108, 5721 (2004)]. Furthermore, the present results also indicated both resonances dominantly in the d-wave channel. The differential cross sections are integrated in the standard way to obtain integral elastic cross sections and momentum transfer cross sections. The calculations employed the Schwinger multichannel method with pseudopotentials and were performed in the static-exchange and in the static-exchange plus polarization approximations. The calculated integral and momentum transfer cross sections clearly revealed the presence of two shape resonances located at 1.95 and 3.56 eV and ascribed to the B1 and A2 symmetries of the C2v point group, respectively, in very good agreement with the experimental findings. Overall agreement between theory and experiment regarding the differential, momentum transfer, and integral cross sections is very good, especially for energies below 10 eV.

  6. Low-energy elastic electron scattering from furan

    NASA Astrophysics Data System (ADS)

    Khakoo, M. A.; Muse, J.; Ralphs, K.; da Costa, R. F.; Bettega, M. H. F.; Lima, M. A. P.

    2010-06-01

    We report normalized experimental and theoretical differential cross sections for elastic electron scattering by C4H4O (furan) molecules from a collaborative project between several Brazilian theoretical groups and an experimental group at California State Fullerton, USA. The measurements are obtained by using the relative flow method with helium as the standard gas and a thin aperture target gas collimating source. The relative flow method is applied without the restriction imposed by the relative flow pressure condition on helium and the unknown gas. The experimental data were taken at incident electron energies of 1, 1.5, 1.73, 2, 2.7, 3, 5, 7, 10, 20, 30, and 50 eV and covered the angular range between 10° and 130°. The measurements verify observed π* shape resonances at 1.65±0.05 eV and 3.10±0.05 eV scattering energies, in good agreement with the transmission electron data of Modelli and Burrow [J. Phys. Chem. A 108, 5721 (2004)]. Furthermore, the present results also indicated both resonances dominantly in the d-wave channel. The differential cross sections are integrated in the standard way to obtain integral elastic cross sections and momentum transfer cross sections. The calculations employed the Schwinger multichannel method with pseudopotentials and were performed in the static-exchange and in the static-exchange plus polarization approximations. The calculated integral and momentum transfer cross sections clearly revealed the presence of two shape resonances located at 1.95 and 3.56 eV and ascribed to the B1 and A2 symmetries of the C2v point group, respectively, in very good agreement with the experimental findings. Overall agreement between theory and experiment regarding the differential, momentum transfer, and integral cross sections is very good, especially for energies below 10 eV.

  7. Hip Hop HEALS: Pilot Study of a Culturally Targeted Calorie Label Intervention to Improve Food Purchases of Children

    ERIC Educational Resources Information Center

    Williams, Olajide; DeSorbo, Alexandra; Sawyer, Vanessa; Apakama, Donald; Shaffer, Michele; Gerin, William; Noble, James

    2016-01-01

    Objectives: We explored the effect of a culturally targeted calorie label intervention on food purchasing behavior of elementary school students. Method: We used a quasi-experimental design with two intervention schools and one control school to assess food purchases of third through fifth graders at standardized school food sales before and after…

  8. Grass and forest potential evapotranspiration comparison using five methods in the Atlantic Coastal Plain

    Treesearch

    Devendra Amatya; Andy Harrison

    2016-01-01

    Studies examining potential evapotranspiration (PET) for a mature forest reference compared with standard grass are limited in the current literature. Data from three long-term weather stations located within 10 km of each other in the USDA Forest Service Santee Experimental Forest (SEF) in coastal South Carolina were used to (1) evaluate monthly and annual PET...

  9. Speech Abilities in Preschool Children with Speech Sound Disorder with and without Co-Occurring Language Impairment

    ERIC Educational Resources Information Center

    Macrae, Toby; Tyler, Ann A.

    2014-01-01

    Purpose: The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. Method: In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different…

  10. Measuring Student Growth within a Merit-Pay Evaluation System: Perceived Effects on Music Teacher Motivation Career Commitment

    ERIC Educational Resources Information Center

    Munroe, Angela

    2017-01-01

    In this experimental study, music teachers from a large school district were randomly assigned to one of two hypothetical conditions reflecting different methods for measuring student growth under a merit pay compensation system. In Scenario A, half of a teacher's effectiveness rating was based on student standardized test scores in reading,…

  11. A loop-gap resonator for chirality-sensitive nuclear magneto-electric resonance (NMER)

    NASA Astrophysics Data System (ADS)

    Garbacz, Piotr; Fischer, Peer; Krämer, Steffen

    2016-09-01

    Direct detection of molecular chirality is practically impossible with standard nuclear magnetic resonance (NMR) methods, which are based on interactions involving magnetic-dipole and magnetic-field operators. However, theoretical studies suggest a possible direct probe of chirality that exploits an enantiomer-selective additional coupling involving magnetic-dipole, magnetic-field, and electric-field operators. This offers a way to detect chirality directly by nuclear magneto-electric resonance (NMER), a method that uses both resonant magnetic and electric radiofrequency (RF) fields. The weakness of the chiral interaction, however, requires a large electric RF field and a small transverse RF magnetic field over the sample volume, which is a non-trivial constraint. In this study, we present a detailed study of the NMER concept and a possible experimental realization based on a loop-gap resonator. For this original device, the basic principle and numerical studies as well as fabrication and measurements of the frequency dependence of the scattering parameter are reported. By simulating the NMER spin dynamics for our device and taking the 19F NMER signal of enantiomer-pure 1,1,1-trifluoropropan-2-ol, we predict a chirality-induced NMER signal that accounts for 1%-5% of the standard achiral NMR signal.

  12. Dispersive solid-phase extraction for the determination of trace organochlorine pesticides in apple juices using reduced graphene oxide coated with ZnO nanocomposites as sorbent.

    PubMed

    Sun, Ting; Sun, Hefeng; Zhao, Feng

    2017-09-01

    In this work, reduced graphene oxide coated with ZnO nanocomposites was used as an efficient sorbent for dispersive solid-phase extraction and successfully applied to the extraction of organochlorine pesticides from apple juice, followed by gas chromatography with mass spectrometry. Several experimental parameters affecting the extraction efficiency, including the amount of adsorbent, the extraction time, and the pH of the sample solution, as well as the type and volume of the elution solvent, were investigated and optimized. Under the optimal experimental conditions, good linearity was obtained in the range of 1.0-200.0 ng/mL for all the analytes, with correlation coefficients (R²) ranging from 0.9964 to 0.9994. The limits of detection of the method for the compounds were 0.011-0.053 ng/mL. Good reproducibility was achieved, with relative standard deviations below 8.7% for both intraday and interday precision. The recoveries of the method were in the range of 78.1-105.8%, with relative standard deviations of 3.3-6.9%. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
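    The linearity and detection-limit figures quoted above follow a conventional external-calibration workflow. As a sketch only (the concentration/response pairs and blank noise below are made up, not the paper's data), the slope, R², and an ICH-style 3.3·σ/slope limit of detection can be computed as:

```python
# Least-squares calibration line and limit of detection (LOD).
# Illustrative data only; concentrations in ng/mL, arbitrary peak areas.

def linear_fit(x, y):
    """Return slope, intercept, and R^2 of an ordinary least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2

conc = [1.0, 5.0, 10.0, 50.0, 100.0, 200.0]
area = [2.1, 10.3, 20.9, 101.8, 203.5, 405.2]

slope, intercept, r2 = linear_fit(conc, area)

# ICH-style LOD estimate: 3.3 * (std. dev. of a blank) / slope,
# here with an assumed blank standard deviation.
sigma_blank = 0.02
lod = 3.3 * sigma_blank / slope
```

An R² close to 1 over the working range, together with an LOD well below the lowest calibration level, is the kind of evidence behind the figures reported in the abstract.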

  13. Inferring Molecular Processes Heterogeneity from Transcriptional Data.

    PubMed

    Gogolewski, Krzysztof; Wronowska, Weronika; Lech, Agnieszka; Lesyng, Bogdan; Gambin, Anna

    2017-01-01

    RNA microarrays and RNA-seq are nowadays standard technologies for studying the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviours in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability under specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs.
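    The central quantity here, the proportion between cellular subpopulations in a bulk sample, can be illustrated with a toy two-subpopulation mixture. This is a hedged sketch, not the authors' algorithm: the bulk profile is modeled as p·A + (1−p)·B for hypothetical subpopulation expression profiles A and B, and p has a closed-form least-squares estimate:

```python
def mixture_proportion(bulk, prof_a, prof_b):
    """Least-squares estimate of p in bulk ≈ p*prof_a + (1-p)*prof_b,
    clipped to the valid range [0, 1]."""
    d = [a - b for a, b in zip(prof_a, prof_b)]          # A - B per gene
    e = [x - b for x, b in zip(bulk, prof_b)]            # bulk - B per gene
    p = sum(di * ei for di, ei in zip(d, e)) / sum(di * di for di in d)
    return max(0.0, min(1.0, p))

# Hypothetical expression profiles for two functional subpopulations.
prof_a = [10.0, 2.0, 8.0, 1.0]
prof_b = [2.0, 9.0, 1.0, 6.0]

# Synthetic bulk sample: 30% subpopulation A, 70% B.
bulk = [0.3 * a + 0.7 * b for a, b in zip(prof_a, prof_b)]

p_hat = mixture_proportion(bulk, prof_a, prof_b)  # ≈ 0.3
```

Real deconvolution methods handle many genes, noise, and more than two subpopulations, but the same mixing-model idea underlies them.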

  14. Inferring Molecular Processes Heterogeneity from Transcriptional Data

    PubMed Central

    Wronowska, Weronika; Lesyng, Bogdan; Gambin, Anna

    2017-01-01

    RNA microarrays and RNA-seq are nowadays standard technologies for studying the transcriptional activity of cells. Most studies focus on tracking transcriptional changes caused by specific experimental conditions. Information on gene up- and downregulation is evaluated by analyzing the behaviour of a relatively large population of cells and averaging its properties. However, even assuming perfect sample homogeneity, different subpopulations of cells can exhibit diverse transcriptomic profiles, as they may follow different regulatory/signaling pathways. The purpose of this study is to provide a novel methodological scheme to account for possible internal, functional heterogeneity in homogeneous cell lines, including cancer ones. We propose a novel computational method to infer the proportion between subpopulations of cells that manifest various functional behaviours in a given sample. Our method was validated using two datasets from RNA microarray experiments. Both experiments aimed to examine cell viability under specific experimental conditions. The presented methodology can be easily extended to RNA-seq data as well as other molecular processes. Moreover, it complements standard tools to indicate the most important networks from transcriptomic data and in particular could be useful in the analysis of cancer cell lines affected by biologically active compounds or drugs. PMID:29362714

  15. Validation of GC and HPLC systems for residue studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.

    1995-12-01

    For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and the associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.

  16. Towards an In-Beam Measurement of the Neutron Lifetime to 1 Second

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan

    2014-03-01

    A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different from those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The dependence of the primordial mass fraction on the neutron lifetime, technical improvements of the in-beam technique, and the path toward improving the precision of the new measurement will be discussed.

  17. A strategy for evaluating pathway analysis methods.

    PubMed

    Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques

    2017-10-13

    Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by applying the same method to a sub-dataset of the original. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by a particular method applied to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy.
Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
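    Assuming each pathway-analysis run returns a set of significant pathway identifiers, the two metrics can be sketched as simple set overlaps (the exact statistics in the paper may differ; the pathway names below are hypothetical):

```python
def recall(full_run, sub_run):
    """Fraction of pathways found on the full dataset that are recovered
    when the same method is applied to a sub-dataset."""
    return len(full_run & sub_run) / len(full_run) if full_run else 0.0

def discrimination(run_x, run_y):
    """Degree to which results from two unrelated experiments differ:
    1 minus the Jaccard overlap of their pathway sets."""
    union = run_x | run_y
    return 1.0 - len(run_x & run_y) / len(union) if union else 0.0

full = {"p53", "apoptosis", "wnt", "mapk"}   # full dataset, method M
sub = {"p53", "apoptosis", "wnt"}            # sub-dataset, same experiment
other = {"ribosome", "p53"}                  # a different experiment

r = recall(full, sub)            # 3/4 = 0.75
d = discrimination(full, other)  # 1 - 1/5 = 0.8
```

A reliable method should score high on both: stable under subsampling (high recall) while still distinguishing unrelated conditions (high discrimination).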

  18. The impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students.

    PubMed

    Sarikoc, Gamze; Ozcan, Celale Tangul; Elcin, Melih

    2017-04-01

    The use of standardized patients is not very common in psychiatric nursing education and there has been no study conducted in Turkey. This study evaluated the impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students. This manuscript addressed the quantitative aspect of a doctoral thesis study in which both quantitative and qualitative methods were used. A pre-test and post-test were employed in the quantitative analysis in a randomized and controlled study design. The motivation scores, and interim and post-test scores for perceived learning were higher in the experimental group compared to pre-test scores and the scores of the control group. The students in the experimental group reported that they felt more competent about practical training in clinical psychiatry, as well as in performing interviews with patients having mental problems, and reported less anxiety about performing an interview when compared to students in the control group. It is considered that the inclusion of standardized patient methodology in the nursing education curriculum in order to improve the knowledge level and skills of students would be beneficial in the training of mental health nurses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. A prestorage method to measure neutron transmission of ultracold neutron guides

    NASA Astrophysics Data System (ADS)

    Blau, B.; Daum, M.; Fertl, M.; Geltenbort, P.; Göltl, L.; Henneck, R.; Kirch, K.; Knecht, A.; Lauss, B.; Schmidt-Wellenburg, P.; Zsigmond, G.

    2016-01-01

    There are worldwide efforts to search for physics beyond the Standard Model of particle physics. Precision experiments using ultracold neutrons (UCN) require very high UCN intensities, so efficient transport of UCN from the production volume to the experiment is of great importance. We have developed a method using prestored UCN to quantify UCN transmission in tubular guides. The method simulates the final installation at the Paul Scherrer Institute's UCN source, where neutrons are stored in an intermediate storage vessel serving three experimental ports. It allowed us to qualify UCN guides for their intended use and to compare their properties.

  20. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    Previous studies have shown that at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which the sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, either CSI-based modified JPEG or standard JPEG, under a given target bit rate using the so-called ρ-domain analysis. The experimental results indicate that compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.
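    The adaptive selection step can be sketched as a rate-dependent switch. The fixed crossover threshold below is a hypothetical stand-in: the paper estimates the crossover per image via ρ-domain rate analysis rather than using a constant:

```python
def choose_coder(target_bpp, crossover_bpp=0.35):
    """Pick the coding mode for a target bit rate (bits per pixel).

    Below the image-dependent crossover rate, downsample-then-code
    (the CSI-based modified JPEG) wins; above it, standard JPEG does.
    The default crossover value here is illustrative only.
    """
    return "csi_modified_jpeg" if target_bpp < crossover_bpp else "standard_jpeg"

low_rate_mode = choose_coder(0.2)   # low rate: downsampling helps
high_rate_mode = choose_coder(1.0)  # high rate: avoid resampling loss
```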

  1. Numerical method for high accuracy index of refraction estimation for spectro-angular surface plasmon resonance systems.

    PubMed

    Alleyne, Colin J; Kirk, Andrew G; Chien, Wei-Yin; Charette, Paul G

    2008-11-24

    An eigenvector analysis based algorithm is presented for estimating refractive index changes from 2-D reflectance/dispersion images obtained with spectro-angular surface plasmon resonance systems. High resolution over a large dynamic range can be achieved simultaneously. The method performs well in simulations with noisy data, maintaining an error of less than 10^-8 refractive index units with up to six bits of noise on 16-bit quantized image data. Experimental measurements show that the method yields a much higher signal-to-noise ratio than the standard 1-D weighted-centroid dip-finding algorithm.

  2. An Illumination-Adaptive Colorimetric Measurement Using Color Image Sensor

    NASA Astrophysics Data System (ADS)

    Lee, Sung-Hak; Lee, Jong-Hyub; Sohng, Kyu-Ik

    An image sensor for use as a colorimeter is characterized based on the CIE standard colorimetric observer. We use the method of least squares to derive a colorimetric characterization matrix between RGB output signals and CIE XYZ tristimulus values. This paper proposes an adaptive measuring method to obtain the chromaticity of colored scenes and illumination through a 3×3 camera transfer matrix under a given illuminant. Camera RGB outputs, sensor status values, and the photoelectric characteristic are used to obtain the chromaticity. Experimental results show that the proposed method is valid in terms of measuring performance.
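    The least-squares derivation of the 3×3 characterization matrix can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions (it ignores the sensor status values and photoelectric nonlinearity the paper also uses): each row of M minimizing ||XYZ − M·RGB||² solves the normal equations built from N training colors.

```python
def solve3(a, b):
    """Solve a 3x3 linear system a*x = b by Gaussian elimination."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(m[r][i]))  # partial pivoting
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, 3):
            f = m[r][i] / m[i][i]
            m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (m[i][3] - sum(m[i][j] * x[j] for j in range(i + 1, 3))) / m[i][i]
    return x

def characterization_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M with XYZ ≈ M @ RGB, from N training colors.

    Per output channel k, solve the normal equations (R^T R) m_k = R^T t_k.
    """
    rtr = [[sum(r[i] * r[j] for r in rgb) for j in range(3)] for i in range(3)]
    return [solve3(rtr, [sum(r[i] * t[k] for r, t in zip(rgb, xyz)) for i in range(3)])
            for k in range(3)]

# Synthetic check: generate XYZ from a known matrix and recover it.
true_m = [[0.4, 0.3, 0.2], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]]
rgb = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [0.5, 0.2, 0.8]]
xyz = [[sum(true_m[k][j] * r[j] for j in range(3)) for k in range(3)] for r in rgb]
m_est = characterization_matrix(rgb, xyz)
```

In practice the training colors come from a chart measured under the target illuminant, and the fit is only as good as the sensor's linearity over that set.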

  3. [Comparison of two algorithms for development of design space-overlapping method and probability-based method].

    PubMed

    Shao, Jing-Yuan; Qu, Hai-Bin; Gong, Xing-Chu

    2018-05-01

    In this work, two algorithms for design space calculation (the overlapping method and the probability-based method) were compared using data collected from the extraction process of Codonopsis Radix as an example. In the probability-based method, experimental error was simulated to calculate the probability of reaching the standard. The effects of several parameters on the calculated design space were studied, including the number of simulations, the step length, and the acceptable probability threshold. For the extraction process of Codonopsis Radix, 10 000 simulations and a calculation step length of 0.02 led to a satisfactory design space. In general, the overlapping method is easy to understand and can be realized by several kinds of commercial software without writing programs, but it does not indicate the reliability of the process evaluation indexes when operating within the design space. The probability-based method is computationally more complex, but it quantifies the reliability with which the process indexes reach the standard within the acceptable probability threshold. In addition, with the probability-based method there is no abrupt change in probability at the edge of the design space. Therefore, the probability-based method is recommended for design space calculation. Copyright© by the Chinese Pharmaceutical Association.
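    The probability-based algorithm can be sketched as a Monte Carlo loop over candidate operating points. The quadratic response model, noise level, and specification below are hypothetical stand-ins for a fitted process model, not the Codonopsis Radix data:

```python
import random

def pass_probability(x, n_sim=10000, noise_sd=2.0, spec=80.0, seed=0):
    """Monte Carlo probability that a quality index meets its specification
    at operating point x, given simulated experimental error.

    The toy response surface and noise level stand in for a fitted model.
    """
    rng = random.Random(seed)
    predicted = 100.0 - 0.5 * (x - 50.0) ** 2 / 10.0   # toy response surface
    hits = sum(1 for _ in range(n_sim)
               if predicted + rng.gauss(0.0, noise_sd) >= spec)
    return hits / n_sim

# Build the design space on a grid: keep points whose pass probability
# reaches the acceptable threshold (e.g. 0.90).
design_space = [x for x in range(30, 71, 2) if pass_probability(x) >= 0.90]
```

Points where the model barely meets the specification get probabilities near 0.5 and are excluded, which is why the probability-based design space has no abrupt edge behaviour.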

  4. [Current macro-diagnostic trends of forensic medicine in the Czech Republic].

    PubMed

    Frišhons, Jan; Kučerová, Štěpánka; Jurda, Mikoláš; Sokol, Miloš; Vojtíšek, Tomáš; Hejna, Petr

    2017-01-01

    Over the last few years, advanced diagnostic methods have penetrated the realm of forensic medicine, complementing standard autopsy techniques supported by traditional X-ray examination and macro-diagnostic laboratory tests. Despite the progress of imaging methods, the conventional autopsy has remained the basic and essential diagnostic tool in forensic medicine. Postmortem computed tomography and magnetic resonance imaging are by far the most progressive modern radiodiagnostic methods, setting the current trend of virtual autopsies all over the world. Up to now, only two institutes of forensic medicine in the Czech Republic have postmortem computed tomography available for routine diagnostic purposes. Postmortem magnetic resonance is currently unattainable for routine diagnostic use and has been employed only for experimental purposes. Photogrammetry is a digital method focused primarily on body-surface imaging. Recently, the most fruitful results have come from interdisciplinary cooperation between forensic medicine and forensic anthropology, with the implementation of body-scanning techniques and 3D printing. Non-invasive and mini-invasive investigative methods such as postmortem sonography and postmortem endoscopy have been unsystematically tested for diagnostic performance, with good outcomes despite the limitations of these methods in postmortem application. Other futuristic methods, such as the use of a drone to inspect the crime scene, are still experimental tools. The authors present a basic overview of both routinely and experimentally used investigative methods and current macro-diagnostic trends in forensic medicine in the Czech Republic.

  5. Gear and seasonal bias associated with abundance and size structure estimates for lentic freshwater fishes

    USGS Publications Warehouse

    Fischer, Jesse R.; Quist, Michael C.

    2014-01-01

    All freshwater fish sampling methods are biased toward particular species, sizes, and sexes and are further influenced by season, habitat, and fish behavior changes over time. However, little is known about gear-specific biases for many common fish species because few multiple-gear comparison studies exist that have incorporated seasonal dynamics. We sampled six lakes and impoundments representing a diversity of trophic and physical conditions in Iowa, USA, using multiple gear types (i.e., standard modified fyke net, mini-modified fyke net, sinking experimental gill net, bag seine, benthic trawl, boat-mounted electrofisher used diurnally and nocturnally) to determine the influence of sampling methodology and season on fisheries assessments. Specifically, we describe the influence of season on catch per unit effort, proportional size distribution, and the number of samples required to obtain 125 stock-length individuals for 12 species of recreational and ecological importance. Mean catch per unit effort generally peaked in the spring and fall as a result of increased sampling effectiveness in shallow areas and seasonal changes in habitat use (e.g., movement offshore during summer). Mean proportional size distribution decreased from spring to fall for white bass Morone chrysops, largemouth bass Micropterus salmoides, bluegill Lepomis macrochirus, and black crappie Pomoxis nigromaculatus, suggesting selectivity for large and presumably sexually mature individuals in the spring and summer. Overall, the mean number of samples required to sample 125 stock-length individuals was minimized in the fall with sinking experimental gill nets, a boat-mounted electrofisher used at night, and standard modified fyke nets for 11 of the 12 species evaluated.
Our results provide fisheries scientists with relative comparisons between several recommended standard sampling methods and illustrate the effects of seasonal variation on estimates of population indices that will be critical to the future development of standardized sampling methods for freshwater fish in lentic ecosystems.
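    The proportional size distribution reported above is the standard fisheries index: the percentage of stock-length fish that also reach quality length. A minimal sketch with hypothetical length data (the bluegill cutoffs shown, 80 mm stock and 150 mm quality, are the commonly used values; check species-specific standards before applying):

```python
def proportional_size_distribution(lengths_mm, stock_mm, quality_mm):
    """PSD = 100 * (fish >= quality length) / (fish >= stock length)."""
    stock = sum(1 for length in lengths_mm if length >= stock_mm)
    quality = sum(1 for length in lengths_mm if length >= quality_mm)
    return 100.0 * quality / stock if stock else 0.0

# Hypothetical bluegill length sample (mm); not the Iowa survey data.
sample = [70, 95, 110, 120, 155, 160, 182, 200]
psd = proportional_size_distribution(sample, stock_mm=80, quality_mm=150)
# 4 of 7 stock-length fish are quality length, so psd ≈ 57.1
```

A seasonal drop in this index, as the abstract describes, means proportionally fewer large fish in the catch as the year progresses.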

  6. Characterization of pore structure in cement-based materials using pressurization-depressurization cycling mercury intrusion porosimetry (PDC-MIP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou Jian, E-mail: Jian.Zhou@tudelft.n; Ye Guang, E-mail: g.ye@tudelft.n; Magnel Laboratory for Concrete Research, Department of Structural Engineering, Ghent University, Technologiepark-Zwijnaarde 904 B-9052, Ghent

    2010-07-15

    Numerous mercury intrusion porosimetry (MIP) studies have been carried out to investigate the pore structure of cement-based materials. However, standard MIP often results in an underestimation of large pores and an overestimation of small pores because of its intrinsic limitations. In this paper, an innovative MIP method is developed in order to provide a more accurate estimation of pore size distribution. The new MIP measurements are conducted following a unique mercury intrusion procedure, in which the applied pressure is increased from the minimum to the maximum by repeating pressurization-depressurization cycles instead of a continuous pressurization followed by a continuous depressurization. Accordingly, this method is called pressurization-depressurization cycling MIP (PDC-MIP). By following the PDC-MIP testing sequence, the volumes of the throat pores and the corresponding ink-bottle pores can be determined at every pore size. These values are used to calculate the pore size distribution with the newly developed analysis method. This paper presents an application of PDC-MIP to the investigation of the pore size distribution in cement-based materials. The experimental results of PDC-MIP are compared with those measured by standard MIP. The PDC-MIP is further validated against other experimental methods and a numerical tool, including nitrogen sorption, backscattered electron (BSE) image analysis, Wood's metal intrusion porosimetry (WMIP), and numerical simulation by the cement hydration model HYMOSTRUC3D.
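    Both standard MIP and PDC-MIP convert each intrusion pressure to a pore-entry diameter through the Washburn equation. A sketch with typical mercury parameters (surface tension ≈ 0.485 N/m, contact angle ≈ 130°; the exact values depend on the material system and are assumptions here):

```python
import math

def washburn_diameter(pressure_pa, gamma=0.485, theta_deg=130.0):
    """Pore-entry diameter (m) for a given mercury intrusion pressure (Pa).

    Washburn equation: d = -4 * gamma * cos(theta) / P, with typical
    values for mercury surface tension and contact angle assumed.
    """
    return -4.0 * gamma * math.cos(math.radians(theta_deg)) / pressure_pa

# Example: 1 MPa of applied pressure intrudes pores down to roughly 1.25 µm.
d = washburn_diameter(1.0e6)
```

The ink-bottle effect the paper targets arises precisely because this equation reports the throat (entry) diameter, not the size of the larger cavity behind it.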

  7. A new methodology for hydro-abrasive erosion tests simulating penstock erosive flow

    NASA Astrophysics Data System (ADS)

    Aumelas, V.; Maj, G.; Le Calvé, P.; Smith, M.; Gambiez, B.; Mourrat, X.

    2016-11-01

    Hydro-abrasive resistance is an important property requirement for the hydroelectric power plant penstock coating systems used by EDF. The selection of durable coating systems requires an experimental characterization of coating performance, which can be achieved by performing accelerated and representative laboratory tests. In the case of severe erosion induced by a penstock flow, there is no suitable method or standard representative of real erosive flow conditions. The presented study aims at developing a new methodology and an associated laboratory experimental device. The objective of the laboratory apparatus is to subject coated test specimens to wear conditions similar to those generated at the penstock lower generatrix in actual flow conditions. Thirteen preselected coating solutions were first tested in a 45-hour erosion test, and a ranking of the thirteen coating solutions was determined after characterisation. To complete this first evaluation and to determine the wear kinetics of the four best coating solutions, additional erosion tests were conducted with a longer duration of 216 hours. A comparison of this new method with standardized tests and with real service operating flow conditions is also discussed. To complete the final ranking based on hydro-abrasive erosion tests, some trial tests were carried out on penstock samples to check the application method of the selected coating systems. The paper gives some perspectives related to erosion test methodologies for materials and coating solutions for hydraulic applications. The developed test method can also be applied in other fields.

  8. A standard methodology for the analysis, recording, and control of verbal behavior

    PubMed Central

    Drash, Philip W.; Tudor, Roger M.

    1991-01-01

    Lack of a standard methodology has been one of the major obstacles preventing advancement of behavior analytic research in verbal behavior. This article presents a standard method for the analysis, recording, and control of verbal behavior that overcomes several major methodological problems that have hindered operant research in verbal behavior. The system divides all verbal behavior into four functional response classes (correct, error, no response, and inappropriate behavior), from which all vocal responses of a subject may be classified and consequated. The effects of contingencies of reinforcement on verbal operants within each category are made immediately visible to the researcher as changes in frequency of response. Incorporating frequency of response within each category as the unit of response allows both rate and probability of verbal response to be utilized as basic dependent variables. This method makes it possible to record and consequate verbal behavior in essentially the same way as any other operant response. It may also facilitate an experimental investigation of Skinner's verbal response categories. PMID:22477629

  9. Comparison of two methods to determine fan performance curves using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Onma, Patinya; Chantrasmi, Tonkid

    2018-01-01

    This work investigates a systematic numerical approach that employs Computational Fluid Dynamics (CFD) to obtain performance curves of a backward-curved centrifugal fan. Generating the performance curves requires a number of three-dimensional simulations with varying system loads at a fixed rotational speed. Two methods were used and their results compared to experimental data. The first method incrementally changes the mass flow rate through the inlet boundary condition, while the second method utilizes a series of meshes representing the physical damper blade at various angles. The generated performance curves from both methods are compared with an experimental setup in accordance with the AMCA fan performance testing standard.

  10. Persons Camp Using Interpolation Method

    NASA Astrophysics Data System (ADS)

    Tawfiq, Luma Naji Mohammed; Najm Abood, Israa

    2018-05-01

    The aim of this paper is to estimate the rate of contaminated soils by using a suitable interpolation method as an alternative, accurate tool to evaluate the concentration of heavy metals in soil, which is then compared with standard universal values to determine the rate of contamination in the soil. Interpolation methods are extensively applied in models of different phenomena where experimental data must be represented in computational studies. In this paper, the extended divided-difference method in two dimensions is used to solve the suggested problem. The modified method is then applied to estimate the rate of contaminated soils of a displaced-persons camp in Diyala Governorate, Iraq.
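    The divided-difference machinery can be sketched in one dimension (the paper's extension applies the same table construction along each of the two axes). The sampled values below are hypothetical, not the Diyala Governorate data:

```python
def divided_difference_coeffs(xs, ys):
    """Newton divided-difference coefficients for 1-D interpolation,
    computed in place from the sample points (xs, ys)."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton-form interpolating polynomial at x (Horner form)."""
    result = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[i]) + coeffs[i]
    return result

# Hypothetical sampled concentrations of a heavy metal along a transect.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 2.0, 5.0, 10.0]   # exactly x^2 + 1 at the nodes
estimate = newton_eval(xs, divided_difference_coeffs(xs, ys), 1.5)  # → 3.25
```

The interpolated concentration at an unsampled location is what gets compared against the standard limit value to classify that point as contaminated or not.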

  11. Detection technology research on the one-way clutch of automatic brake adjuster

    NASA Astrophysics Data System (ADS)

    Jiang, Wensong; Luo, Zai; Lu, Yi

    2013-10-01

    In this article, we provide a new testing method to evaluate the acceptable quality of the one-way clutch of an automatic brake adjuster. To analyze the suitable adjusting brake moment that keeps the automatic brake adjuster from failing, we build a mechanical model of the one-way clutch according to its structure and working principle. The ranges of the adjusting brake moment, both clockwise and anti-clockwise, can be calculated from this mechanical model, and its critical moments are taken as the ideal values of the adjusting brake moment for evaluating the acceptable quality of the one-way clutch. Before the adjusting-brake-moment test begins, we calculate the ideal critical moments for the different one-way clutch structures based on the mechanical model. In addition, an experimental apparatus with a measurement uncertainty of ±0.1 N·m was specially designed to test the adjusting brake moment both clockwise and anti-clockwise. We can then judge the acceptable quality of the one-way clutch by comparing the test results with the ideal values instead of with the EXP. In fact, the evaluation standard for the adjusting brake moment currently applied in Chinese projects still uses the EXP provided by the manufacturer, but this becomes unavailable when the material of the one-way clutch changes. Five kinds of automatic brake adjusters were used in a verification experiment to check the accuracy of the test method. The experimental results show that the measured adjusting brake moments, both clockwise and anti-clockwise, fall within the ranges of the theoretical results. The testing method provided in this article thus meets the requirements of the manufacturer's standard.

  12. Development of a primary standard for absorbed dose from unsealed radionuclide solutions

    NASA Astrophysics Data System (ADS)

    Billas, I.; Shipley, D.; Galer, S.; Bass, G.; Sander, T.; Fenwick, A.; Smyth, V.

    2016-12-01

    Currently, the determination of the internal absorbed dose to tissue from an administered radionuclide solution relies on Monte Carlo (MC) calculations based on published nuclear decay data, such as emission probabilities and energies. In order to validate these methods with measurements, it is necessary to achieve traceability of the internal absorbed dose measurements of a radionuclide solution to a primary standard of absorbed dose. The purpose of this work was to develop a suitable primary standard. A comparison between measurements and calculations of absorbed dose allows the validation of internal radiation dose assessment methods. The absorbed dose from an yttrium-90 chloride (90YCl) solution was measured with an extrapolation chamber. A phantom was developed at the National Physical Laboratory (NPL), the UK's National Measurement Institute, to position the extrapolation chamber as closely as possible to the surface of the solution. The performance of the extrapolation chamber was characterised and a full uncertainty budget for the absorbed dose determination was obtained. Absorbed dose to air in the collecting volume of the chamber was converted to absorbed dose at the centre of the radionuclide solution by applying an MC-calculated correction factor. This allowed a direct comparison of the analytically calculated and experimentally determined absorbed dose of a 90YCl solution. The relative standard uncertainty in the measurement of absorbed dose at the centre of a 90YCl solution with the extrapolation chamber was found to be 1.6% (k = 1). The calculated 90Y absorbed doses from published medical internal radiation dose (MIRD) and radiation dose assessment resource (RADAR) data agreed with measurements to within 1.5% and 1.4%, respectively. This study has shown that it is feasible to use an extrapolation chamber for performing primary standard absorbed dose measurements of an unsealed radionuclide solution.
    Internal radiation dose assessment methods based on MIRD and RADAR data for 90Y have been validated with experimental absorbed dose determination, and they agree within the stated expanded uncertainty (k = 2).

  13. [Cancer nursing care education programs: the effectiveness of different teaching methods].

    PubMed

    Cheng, Yun-Ju; Kao, Yu-Hsiu

    2012-10-01

    In-service education directly affects the quality of cancer care. Using classroom teaching to deliver in-service education is often ineffective due to participants' large workloads and shift requirements. This study evaluated the learning effectiveness of different teaching methods in the dimensions of knowledge, attitude, and learning satisfaction. This study used a quasi-experimental design. Participants were cancer ward nurses working at one medical center in northern Taiwan, divided into an experimental group and a control group. The experimental group took an e-learning course and the control group took a standard classroom course using the same basic course material. Researchers evaluated the learning efficacy of each group using a questionnaire based on the quality of cancer nursing care learning effectiveness scale. All participants answered the questionnaire once before and once after completing the course. (1) Post-test "knowledge" scores for both groups were significantly higher than their pre-test scores. Post-test "attitude" scores were significantly higher for the control group, while the experimental group reported no significant change. (2) After a covariance analysis of the pre-test scores for both groups, the post-test score for the experimental group was significantly lower than that of the control group in the knowledge dimension. Post-test scores did not differ significantly from pre-test scores for either group in the attitude dimension. (3) Post-test satisfaction scores between the two groups did not differ significantly with regard to teaching methods. The e-learning method, however, was demonstrated to be more flexible than classroom teaching. Study results demonstrate the importance of employing a variety of teaching methods to instruct clinical nursing staff.
We suggest that both classroom teaching and e-learning instruction methods be used to enhance the quality of cancer nursing care education programs. We also encourage that interactivity between student and instructor be incorporated into e-learning course designs to enhance effectiveness.

  14. Calculation of the Intensity of Physical Time Fluctuations Using the Standard Solar Model and its Comparison with the Results of Experimental Measurements

    NASA Astrophysics Data System (ADS)

    Morozov, A. N.

    2017-11-01

    The article reviews the possibility of describing physical time as a random Poisson process. An equation is proposed that allows the intensity of physical time fluctuations to be calculated from the entropy production density of irreversible natural processes. Based on the standard solar model, the work calculates the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance to the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, depending on the entropy production density during the sunlight-to-thermal-radiation conversion, has been theoretically predicted. A method for evaluating the Kullback measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of heat transfer from the Earth's surface to the upper atmosphere, the effective temperature of the Earth's thermal radiation has been determined. A comparison between the theoretical values of the Kullback measure derived from the fluctuating physical time model and the experimentally measured values of this measure for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical model and the experimental data.

  15. Absolute Standard Hydrogen Electrode Potential Measured by Reduction of Aqueous Nanodrops in the Gas Phase

    PubMed Central

    Donald, William A.; Leib, Ryan D.; O'Brien, Jeremy T.; Bush, Matthew F.; Williams, Evan R.

    2008-01-01

    In solution, half-cell potentials are measured relative to those of other half cells, thereby establishing a ladder of thermochemical values that are referenced to the standard hydrogen electrode (SHE), which is arbitrarily assigned a value of exactly 0 V. Although there has been considerable interest in, and effort toward, establishing an absolute electrochemical half-cell potential in solution, there is no general consensus regarding the best approach to obtain this value. Here, ion-electron recombination energies resulting from electron capture by gas-phase nanodrops containing individual [M(NH3)6]3+ (M = Ru, Co, Os, Cr, and Ir) and Cu2+ ions are obtained from the number of water molecules that are lost from the reduced precursors. These experimental data, combined with nanodrop solvation energies estimated from Born theory and solution-phase entropies estimated from limited experimental data, provide absolute reduction energies for these redox couples in bulk aqueous solution. A key advantage of this approach is that solvent effects well past two solvent shells, which are difficult to model accurately, are included in these experimental measurements. By evaluating these data relative to known solution-phase reduction potentials, an absolute value for the SHE of 4.2 ± 0.4 V versus a free electron is obtained. Although not achieved here, the uncertainty of this method could potentially be reduced to below 0.1 V, making this an attractive method for establishing an absolute electrochemical scale that bridges solution and gas-phase redox chemistry. PMID:18288835
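
    The abstract above estimates nanodrop solvation energies from Born theory. As a minimal illustration (not the authors' actual computation, which includes cluster-specific corrections), the simple Born continuum expression for the solvation free energy of a charge of radius r can be sketched as:

```python
import math

def born_solvation_energy(z, radius_m, eps_r=78.4):
    """Born continuum estimate of ion solvation free energy (J/mol).

    z: ion charge number; radius_m: ion/drop radius in metres;
    eps_r: relative permittivity of the solvent (water ~78.4).
    """
    e = 1.602176634e-19        # elementary charge, C
    eps0 = 8.8541878128e-12    # vacuum permittivity, F/m
    NA = 6.02214076e23         # Avogadro constant, 1/mol
    return -(NA * (z * e) ** 2) / (8 * math.pi * eps0 * radius_m) * (1 - 1 / eps_r)
```

    For a monovalent charge with a 1 nm radius in water this gives roughly -69 kJ/mol; the full nanodrop treatment scales such terms with drop size.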

  16. Determination of B-complex vitamins in pharmaceutical formulations by surface-enhanced Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Junior, Benedito Roberto Alvarenga; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Durango, Luis Guillermo Cuadrado; Forim, Moacir Rossi; Carneiro, Renato Lajarim

    2018-01-01

    The aim of this work was to quantify B-complex vitamins in pharmaceutical samples by the surface-enhanced Raman spectroscopy (SERS) technique using a gold colloid substrate. Gold nanoparticles were synthesized according to an adapted Turkevich method. Initial assays suggested the orientation of the molecules on the gold nanoparticle surface. A central composite design was performed to obtain the highest SERS signal for nicotinamide and riboflavin. The parameters evaluated in the experimental design were the volume of AuNPs, the concentration of vitamins, and the sodium chloride concentration. The best condition for nicotinamide was NaCl 2.3 × 10⁻³ mol L⁻¹ and 700 μL of AuNP colloid, and this same condition proved adequate to quantify thiamine. The experimental design for riboflavin showed the best condition at NaCl 1.15 × 10⁻² mol L⁻¹ and 2.8 mL of AuNP colloid. It was possible to quantify thiamine and nicotinamide in the presence of other vitamins and excipients in two solid multivitamin formulations using the standard addition procedure. The standard addition curves presented R² higher than 0.96 for both nicotinamide and thiamine, at orders of magnitude 10⁻⁷ and 10⁻⁸ mol L⁻¹, respectively. The nicotinamide content of a cosmetic gel sample was also quantified by direct analysis, presenting R² of 0.98. Student's t-test showed no significant difference relative to the HPLC method. Despite the experimental design performed for riboflavin, its quantification in the commercial samples was not possible.
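
    This record, like record 1 in this listing, relies on the standard addition procedure. A generic sketch of the underlying extrapolation (hypothetical numbers, not the paper's data) fits signal versus spiked concentration and reads the unknown concentration off the x-intercept:

```python
def standard_addition(added, signal):
    """Ordinary least-squares line through (added concentration, signal);
    the unknown concentration is the magnitude of the x-intercept,
    i.e. intercept / slope."""
    n = len(added)
    sx, sy = sum(added), sum(signal)
    sxx = sum(x * x for x in added)
    sxy = sum(x * y for x, y in zip(added, signal))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return intercept / slope
```

    With a perfectly linear response, e.g. signals [10, 14, 18, 22] for additions [0, 2, 4, 6], the extrapolated unknown is 5 concentration units.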

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salvador Palau, A.; Eder, S. D., E-mail: sabrina.eder@uib.no; Kaltenbacher, T.

    Time-of-flight (TOF) is a standard experimental technique for determining, among other quantities, the speed ratio S (velocity spread) of a molecular beam. The speed ratio is a measure of the monochromaticity of the beam, and an accurate determination of S is crucial for various applications, for example, for characterising chromatic aberrations in focussing experiments related to helium microscopy or for precise measurements of surface phonons and surface structures in molecular beam scattering experiments. For both of these applications, it is desirable to have as high a speed ratio as possible. Molecular beam TOF measurements are typically performed by chopping the beam using a rotating chopper with one or more slit openings. The TOF spectra are evaluated using a standard deconvolution method. However, for higher speed ratios, this method is very sensitive to errors related to the determination of the slit width and the beam diameter. The exact sensitivity depends on the beam diameter, the number of slits, the chopper radius, and the chopper rotation frequency. We present a modified method suitable for the evaluation of TOF measurements of high speed ratio beams. The modified method is based on a systematic variation of the chopper convolution parameters so that a set of independent measurements that can be fitted with an appropriate function is obtained. We show that with this modified method, it is possible to reduce the error by typically one order of magnitude compared to the standard method.
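
    For orientation, in one common convention of the molecular-beam literature (conventions do vary between groups) the speed ratio S is tied to the mean velocity and the FWHM of the velocity distribution. A small helper assuming that convention:

```python
import math

def velocity_from_tof(flight_path_m, flight_time_s):
    """Mean beam velocity from flight path and measured flight time."""
    return flight_path_m / flight_time_s

def speed_ratio(v_mean, dv_fwhm):
    """Speed ratio S = 2*sqrt(ln 2) * v / dv_FWHM (one common convention;
    higher S means a more monochromatic beam)."""
    return 2.0 * math.sqrt(math.log(2.0)) * v_mean / dv_fwhm
```

    A beam travelling 1 m in 1 ms (1000 m/s) with a 10 m/s FWHM spread has S of about 167 under this definition.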

  18. Calculation of Five Thermodynamic Molecular Descriptors by Means of a General Computer Algorithm Based on the Group-Additivity Method: Standard Enthalpies of Vaporization, Sublimation and Solvation, and Entropy of Fusion of Ordinary Organic Molecules and Total Phase-Change Entropy of Liquid Crystals.

    PubMed

    Naef, Rudolf; Acree, William E

    2017-06-25

    The calculation of the standard enthalpies of vaporization, sublimation and solvation of organic molecules is presented using a common computer algorithm on the basis of a group-additivity method. The same algorithm is also shown to enable the calculation of their entropy of fusion as well as the total phase-change entropy of liquid crystals. The present method is based on the complete breakdown of the molecules into their constituent atoms and their immediate neighbourhood; the respective calculations of the contributions of the atomic groups by means of the Gauss-Seidel fitting method are based on experimental data collected from the literature. The feasibility of the calculations for each of the mentioned descriptors was verified by means of a 10-fold cross-validation procedure, proving the good to high quality of the predicted values for the three mentioned enthalpies and for the entropy of fusion, whereas the predictive quality for the total phase-change entropy of liquid crystals was poor. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the five descriptors were as follows: 0.9641 and 4.56 kJ/mol (N = 3386 test molecules) for the enthalpy of vaporization, 0.8657 and 11.39 kJ/mol (N = 1791) for the enthalpy of sublimation, 0.9546 and 4.34 kJ/mol (N = 373) for the enthalpy of solvation, 0.8727 and 17.93 J/mol/K (N = 2637) for the entropy of fusion and 0.5804 and 32.79 J/mol/K (N = 2643) for the total phase-change entropy of liquid crystals. The large discrepancy between the results of the two closely related entropies is discussed in detail. For molecules for which both the standard enthalpies of vaporization and sublimation were calculable, the standard enthalpy of fusion could be estimated by simple subtraction of the former from the latter. For 990 of them, experimental enthalpy-of-fusion values are also known, allowing comparison with the predictions and yielding a correlation coefficient R² of 0.6066.
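
    The group-additivity idea and the fusion-by-subtraction step described above can be sketched as follows; the group contribution values here are invented placeholders, not the fitted parameters of the paper:

```python
# Hypothetical group contributions to the enthalpy of vaporization (kJ/mol);
# illustrative numbers only, not the paper's fitted values.
GROUP_DHVAP = {"CH3": 2.0, "CH2": 4.0, "OH": 25.0}

def dhvap_group_additivity(group_counts):
    """Group-additivity estimate: sum the contribution of each atomic group,
    weighted by how often it occurs in the molecule."""
    return sum(GROUP_DHVAP[g] * n for g, n in group_counts.items())

def enthalpy_of_fusion(dh_sublimation, dh_vaporization):
    """As in the abstract: dHfus estimated by subtracting the enthalpy of
    vaporization from the enthalpy of sublimation (Hess's law)."""
    return dh_sublimation - dh_vaporization
```

    For example, a molecule estimated at 60 kJ/mol sublimation and 40 kJ/mol vaporization would have an estimated enthalpy of fusion of 20 kJ/mol.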

  19. Automated annotation of functional imaging experiments via multi-label classification

    PubMed Central

    Turner, Matthew D.; Chakrabarti, Chayan; Jones, Thomas B.; Xu, Jiawei F.; Fox, Peter T.; Luger, George F.; Laird, Angela R.; Turner, Jessica A.

    2013-01-01

    Identifying the experimental methods in human neuroimaging papers is important for grouping meaningfully similar experiments for meta-analyses. Currently, this can only be done by human readers. We present the performance of common machine learning (text mining) methods applied to the problem of automatically classifying or labeling this literature. Labeling terms are from the Cognitive Paradigm Ontology (CogPO), the text corpora are abstracts of published functional neuroimaging papers, and the methods use the performance of a human expert as training data. We aim to replicate the expert's annotation of multiple labels per abstract identifying the experimental stimuli, cognitive paradigms, response types, and other relevant dimensions of the experiments. We use several standard machine learning methods: naive Bayes (NB), k-nearest neighbor, and support vector machines (specifically SMO or sequential minimal optimization). Exact match performance ranged from only 15% in the worst cases to 78% in the best cases. NB methods combined with binary relevance transformations performed strongly and were robust to overfitting. This collection of results demonstrates what can be achieved with off-the-shelf software components and little to no pre-processing of raw text. PMID:24409112
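
    The binary relevance transformation paired with naive Bayes, which the abstract reports as a strong and robust combination, simply trains one independent binary classifier per label. A self-contained toy sketch (the features and labels below are invented for illustration, not CogPO terms from the corpus):

```python
import math

class BernoulliNB:
    """Minimal Bernoulli naive Bayes over binary (word-presence) features."""
    def fit(self, X, y, alpha=1.0):
        self.classes = sorted(set(y))
        n_feats = len(X[0])
        self.logprior, self.cond = {}, {}
        for c in self.classes:
            rows = [x for x, label in zip(X, y) if label == c]
            self.logprior[c] = math.log(len(rows) / len(X))
            # Laplace-smoothed P(feature j present | class c)
            self.cond[c] = [(sum(r[j] for r in rows) + alpha) / (len(rows) + 2 * alpha)
                            for j in range(n_feats)]
        return self

    def predict(self, x):
        scores = {}
        for c in self.classes:
            lp = self.logprior[c]
            for xj, pj in zip(x, self.cond[c]):
                lp += math.log(pj if xj else 1.0 - pj)
            scores[c] = lp
        return max(scores, key=scores.get)

def binary_relevance_fit(X, Y, labels):
    """Binary relevance: one independent binary classifier per label."""
    return {lab: BernoulliNB().fit(X, [int(lab in ys) for ys in Y]) for lab in labels}

def binary_relevance_predict(models, x):
    """Union of the labels whose per-label classifier fires."""
    return {lab for lab, m in models.items() if m.predict(x) == 1}
```

    On a toy corpus where abstracts mentioning visual stimuli carry the "visual" label and so on, the per-label classifiers recover multiple labels for a single abstract independently of each other.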

  20. A novel quantitation approach for maximizing detectable targets for offensive/volatile odorants with diverse functional groups by thermal desorption-gas chromatography-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Kim, Yong-Hyun; Kim, Ki-Hyun

    2016-07-01

    A multitude of analytical systems is needed to analyze diverse odorants with various functionalities. In this study, an experimental method was developed to assess the maximum covering range of odorants using a single experimental setup consisting of a thermal desorber-gas chromatography-mass spectrometry system. To this end, a total of 20 offensive odorants (aldehydes, ketones, esters, alcohols, aromatics, sulfides, amines, and carboxyls) were selected and tested on a single system. The analytical results of standards and environmental samples were evaluated in a number of respects. In the analysis of the standards, all targets were quantified via Carbopack (C + B + X) tube sampling while operating the thermal desorber at -25 °C. The method detection limits of 18 targets (with the exception of 2 of the 20 targets: acetaldehyde and methanethiol) were excellent (mean 0.04 ± 0.03 ppb) relative to their odor threshold values (74.7 ± 140 ~ 624 ± 1,729 ppb). The analysis of organic fertilizer plant samples at a pig farm (slurry treatment facility, compost facility, and ambient air) confirmed the presence of 18 odorants, ranging from 0.03 ppb (dimethyl disulfide, ambient sample) to 522 ppb (methyl ethyl ketone, slurry treatment facility). As such, our method allowed simultaneous quantitation of most key odorants with sufficient reliability and sensitivity.

  1. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by the classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by the Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders, demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG); therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error separation of the HPSAG and the autocollimator, detailed investigations of error sources were carried out. Apart from determining the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method makes it possible to characterize other error sources, such as errors due to temperature drift in long-term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during measurements.

  2. Normalized Quantitative Western Blotting Based on Standardized Fluorescent Labeling.

    PubMed

    Faden, Frederik; Eschen-Lippold, Lennart; Dissmeyer, Nico

    2016-01-01

    Western blot (WB) analysis is the most widely used method to monitor expression of proteins of interest in protein extracts of high complexity derived from diverse experimental setups. WB allows the rapid and specific detection of a target protein, such as non-tagged endogenous proteins as well as protein-epitope tag fusions, depending on the availability of specific antibodies. To generate quantitative data from independent samples within one experiment and to allow accurate inter-experimental quantification, a reliable and reproducible method to standardize and normalize WB data is indispensable. To date, it is standard procedure to normalize individual bands of immunodetected proteins of interest from a WB lane to other individual bands of so-called housekeeping proteins of the same sample lane. These are usually detected by an independent antibody or colorimetric detection and do not reflect the real total protein of a sample. Housekeeping proteins, assumed to be constitutively expressed largely independent of developmental and environmental states, can in fact differ greatly in their expression under these various conditions. Therefore, they do not represent a reliable reference for normalizing the target protein's abundance to the total amount of protein contained in each lane of a blot. Here, we demonstrate the Smart Protein Layers (SPL) technology, a combination of fluorescent standards and stain-free fluorescence-based visualization of total protein in gels and after transfer via WB. SPL allows rapid and highly sensitive protein visualization and quantification, with a sensitivity comparable to conventional silver staining but with a 1000-fold higher dynamic range. For normalization, standardization and quantification of protein gels and WBs, a sample-dependent bi-fluorescent standard reagent is applied, and, for accurate quantification of data derived from different experiments, a second calibration standard is used. Together, these facilitate precise quantification of protein expression by lane-to-lane, gel-to-gel, and blot-to-blot comparisons, especially in experiments in the area of proteostasis that deal with highly variable protein levels and involve protein degradation mutants and treatments modulating protein abundance.

  3. Determination of a tissue-level failure evaluation standard for rat femoral cortical bone utilizing a hybrid computational-experimental method.

    PubMed

    Fan, Ruoxun; Liu, Jie; Jia, Zhengbin; Deng, Ying; Liu, Jun

    2018-01-01

    Macro-level failure in bone structure can be diagnosed from pain or by physical examination. However, diagnosing tissue-level failure in a timely manner is challenging due to the difficulty of observing the interior mechanical environment of bone tissue. Because most fractures begin with tissue-level failure caused by continually applied loading, efforts have been made to monitor the tissue-level failure of bone and to provide corresponding measures to prevent fracture. Many tissue-level mechanical parameters of bone can be predicted or measured; however, the value of a parameter may vary among specimens of the same kind of bone structure, even at the same age and anatomical site. These variations make it difficult to represent tissue-level bone failure. Therefore, determining an appropriate tissue-level failure evaluation standard is necessary. In this study, the yield and failure processes of rat femoral cortical bones were first simulated through a hybrid computational-experimental method. Subsequently, the tissue-level strains and the ratio between tissue-level failure and yield strains in cortical bones were predicted. The results indicated that certain differences existed in tissue-level strains; however, only slight variation in the ratio was observed among different cortical bones. Therefore, the ratio between tissue-level failure and yield strains for a given kind of bone structure could be determined. This ratio may then be regarded as an appropriate tissue-level failure evaluation standard to represent the mechanical status of bone tissue.

  4. A Cost-Effective Transparency-Based Digital Imaging for Efficient and Accurate Wound Area Measurement

    PubMed Central

    Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng

    2012-01-01

    Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach was established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (Days 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image was saved in JPEG format. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the traced areas in the JPEG image were measured directly, using the “Magnetic lasso tool” in the Adobe Photoshop program. The pixel values (PVs) of individual outlined regions were obtained and recorded at an average speed of 27 seconds per region. All PV data were saved in an Excel file and their corresponding areas were calculated simultaneously by the formula Y (PV of the outlined region)/X (PV of the standard area unit) × Z (area of the standard unit). It took a researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the same work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons or with standard area units of different sizes and shapes. Given its accurate, reproducible and efficient properties, this transparency-based digital imaging approach should be of significant value in basic wound healing research and clinical practice. PMID:22666449
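
    The record's area formula, Y/X × Z, converts the pixel count of an outlined region into physical area via the pixel count of a standard area unit scanned on the same sheet. A direct sketch (the pixel counts below are illustrative):

```python
def region_area(pv_region, pv_standard, standard_area_mm2):
    """Area of a traced region from the record's formula:
    Y (pixels in the outlined region) / X (pixels in the standard area unit)
    * Z (known area of the standard unit, mm^2)."""
    return pv_region / pv_standard * standard_area_mm2
```

    For example, a region of 5400 pixels, when the 1 mm2 standard unit occupies 1000 pixels in the same scan, measures 5.4 mm2.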

  5. Determining octanol-water partition coefficients for extremely hydrophobic chemicals by combining "slow stirring" and solid-phase microextraction.

    PubMed

    Jonker, Michiel T O

    2016-06-01

    Octanol-water partition coefficients (KOW) are widely used in fate and effects modeling of chemicals. Still, high-quality experimental KOW data are scarce, in particular for very hydrophobic chemicals. This hampers reliable assessment of several fate and effect parameters and the development and validation of new models. One reason for the limited availability of experimental values may relate to the challenging nature of KOW measurements. In the present study, KOW values for 13 polycyclic aromatic hydrocarbons were determined with the gold standard "slow-stirring" method (log KOW 4.6-7.2). These values were then used as reference data for the development of an alternative method for measuring KOW. This approach combined slow stirring and equilibrium sampling of the extremely low aqueous concentrations with polydimethylsiloxane-coated solid-phase microextraction fibers, applying experimentally determined fiber-water partition coefficients. It resulted in KOW values matching the slow-stirring data very well. The method was therefore subsequently applied to a series of 17 moderately to extremely hydrophobic petrochemical compounds. The obtained KOW values spanned almost 6 orders of magnitude, with the highest value measuring 10^10.6. The present study demonstrates that the hydrophobicity domain within which experimental KOW measurements are possible can be extended with the help of solid-phase microextraction and that experimentally determined KOW values can exceed the proposed upper limit of 10^9. Environ Toxicol Chem 2016;35:1371-1377. © 2015 SETAC.
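
    The back-calculation underlying the SPME step, as described above, infers the extremely low aqueous concentration from the measured fiber-coating concentration and a known fiber-water partition coefficient, then forms KOW against the octanol phase. A sketch with hypothetical numbers (not the study's data):

```python
import math

def aqueous_conc_from_fiber(c_fiber, log_k_fw):
    """Aqueous concentration back-calculated from the fiber-coating
    concentration and the fiber-water partition coefficient K_fw."""
    return c_fiber / 10 ** log_k_fw

def log_kow(c_octanol, c_water):
    """log10 of the octanol-water partition coefficient at equilibrium."""
    return math.log10(c_octanol / c_water)
```

    For instance, a fiber concentration of 1e-3 with log K_fw = 5 implies an aqueous concentration of 1e-8; against an octanol concentration of 1.0 that corresponds to log KOW = 8.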

  6. How to design a single-cell RNA-sequencing experiment: pitfalls, challenges and perspectives.

    PubMed

    Dal Molin, Alessandra; Di Camillo, Barbara

    2018-01-31

    The sequencing of the transcriptome of single cells, or single-cell RNA-sequencing, has now become the dominant technology for the identification of novel cell types in heterogeneous cell populations and for the study of stochastic gene expression. In recent years, various experimental methods and computational tools for analysing single-cell RNA-sequencing data have been proposed. However, most of them are tailored to different experimental designs or biological questions, and in many cases their performance has not yet been benchmarked, making it difficult for a researcher to choose the optimal single-cell transcriptome sequencing (scRNA-seq) experiment and analysis workflow. In this review, we aim to provide an overview of the currently available experimental and computational methods developed to handle single-cell RNA-sequencing data and, based on their peculiarities, we suggest possible analysis frameworks depending on specific experimental designs. We also evaluate challenges, open questions, and future perspectives in the field. In particular, we go through the different steps of scRNA-seq experimental protocols such as cell isolation, messenger RNA capture, reverse transcription, amplification and the use of quantitative standards such as spike-ins and Unique Molecular Identifiers (UMIs). We then analyse the current methodological challenges related to preprocessing, alignment, quantification, normalization, batch effect correction and methods to control for confounding effects. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. [Effect of manual cleaning and machine cleaning for dental handpiece].

    PubMed

    Zhou, Xiaoli; Huang, Hao; He, Xiaoyan; Chen, Hui; Zhou, Xiaoying

    2013-08-01

    To compare the cleaning effect on dental handpieces of manual cleaning versus machine cleaning. Eighty identically contaminated dental handpieces were randomly divided into an experimental group and a control group of 40 pieces each. The experimental group was treated in a fully automatic washing machine, and the control group was cleaned manually. Cleaning was conducted according to the standard operating process, and ATP bioluminescence was then used to test the cleaning results. Average relative light units (RLU) by ATP bioluminescence detection were as follows: experimental group, 9; control group, 41. Both groups were below the recommended RLU value provided by the instrument manufacturer (RLU < or = 45). There was a significant difference between the two groups (P < 0.05): the cleaning quality of the experimental group was better than that of the control group. It is recommended that the central sterile supply department clean dental handpieces by machine to ensure the cleaning effect and maintain quality.

  8. Comparison between Different Methods for Biomechanical Assessment of Ex Vivo Fracture Callus Stiffness in Small Animal Bone Healing Studies

    PubMed Central

    Steiner, Malte; Volkheimer, David; Meyers, Nicholaus; Wehner, Tim; Wilke, Hans-Joachim; Claes, Lutz; Ignatius, Anita

    2015-01-01

    For ex vivo measurements of fracture callus stiffness in small animals, different test methods, such as torsion or bending tests, are established. Each method provides advantages and disadvantages, and it is still debated which of those is most sensitive to experimental conditions (i.e. specimen alignment, directional dependency, asymmetric behavior). The aim of this study was to experimentally compare six different testing methods regarding their robustness against experimental errors. Therefore, standardized specimens were created by selective laser sintering (SLS), mimicking size, directional behavior, and embedding variations of respective rat long bone specimens. For the latter, five different geometries were created which show shifted or tilted specimen alignments. The mechanical tests included three-point bending, four-point bending, cantilever bending, axial compression, constrained torsion, and unconstrained torsion. All three different bending tests showed the same principal behavior. They were highly dependent on the rotational direction of the maximum fracture callus expansion relative to the loading direction (creating experimental errors of more than 60%), however small angular deviations (<15°) were negligible. Differences in the experimental results between the bending tests originate in their respective location of maximal bending moment induction. Compared to four-point bending, three-point bending is easier to apply on small rat and mouse bones under realistic testing conditions and yields robust measurements, provided low variation of the callus shape among the tested specimens. Axial compressive testing was highly sensitive to embedding variations, and therefore cannot be recommended. Although it is experimentally difficult to realize, unconstrained torsion testing was found to be the most robust method, since it was independent of both rotational alignment and embedding uncertainties. 
Constrained torsional testing showed small errors (up to 16.8%, compared to corresponding alignment under unconstrained torsion) due to a parallel offset between the specimens’ axis of gravity and the torsional axis of rotation. PMID:25781027

  9. Structural health monitoring in composite materials using frequency response methods

    NASA Astrophysics Data System (ADS)

    Kessler, Seth S.; Spearing, S. Mark; Atalla, Mauro J.; Cesnik, Carlos E. S.; Soutis, Constantinos

    2001-08-01

    Cost-effective and reliable damage detection is critical for the utilization of composite materials in structural applications. Non-destructive evaluation techniques (e.g. ultrasound, radiography, infra-red imaging) are available for use during standard repair and maintenance cycles; however, compared to the techniques used for metals, these are relatively expensive and time consuming. This paper presents part of an experimental and analytical survey of candidate methods for the detection of damage in composite materials. Experimental results are presented for the application of modal analysis techniques to rectangular laminated graphite/epoxy specimens containing representative damage modes, including delamination, transverse ply cracks and through-holes. Changes in natural frequencies and modes were found using a scanning laser vibrometer, and 2-D finite element models were created for comparison with the experimental results. The models accurately predicted the response of the specimens at low frequencies, but the local excitation and coalescence of higher frequency modes make mode-dependent damage detection difficult and most likely impractical for structural applications. The frequency response method was found to be reliable for detecting even small amounts of damage in a simple composite structure; however, potentially important information about damage type, size, location and orientation was lost using this method, since several combinations of these variables can yield identical response signatures.

  10. Experimental physical methods and theories--then and now.

    PubMed

    Schulte, Jurgen

    2015-10-01

    A first evaluation of fundamental research into the physics and physiology of ultra-high dilutions (UHDs) was conducted by the author in 1994(1). In this paper we revisit methods and theories from that time and follow their paths through their evolution and contribution to new knowledge in UHD research since then. Physical methods and theories discussed in our anthology on UHD in 1994(1) form the basis for tracing ideas and findings along their path of further development and impact on new knowledge in UHD. Experimental approaches to probing physical changes in homeopathic preparations have become more sophisticated over the past two decades, as has the desire to report results to a scientific standard on par with that of the specialist literature. The same cannot be said about the underlying supporting theoretical models and simulations. Grand challenges in science often take a more targeted and more concerted approach of formulating a research question and then looking for answers. A concerted effort to focus on one hypothesized physical aspect of a well-defined homeopathic preparation may help align experimental methods with theoretical models and, in doing so, help to gain a deeper understanding of the whole body of insights and data produced. Copyright © 2015 The Faculty of Homeopathy. Published by Elsevier Ltd. All rights reserved.

  11. Pulse retrieval algorithm for interferometric frequency-resolved optical gating based on differential evolution.

    PubMed

    Hyyti, Janne; Escoto, Esmerando; Steinmeyer, Günter

    2017-10-01

    A novel algorithm for the ultrashort laser pulse characterization method of interferometric frequency-resolved optical gating (iFROG) is presented. Based on a genetic method, namely differential evolution, the algorithm can exploit all available information of an iFROG measurement to retrieve the complex electric field of a pulse. The retrieval is subjected to a series of numerical tests to prove the robustness of the algorithm against experimental artifacts and noise. These tests show that the integrated error-correction mechanisms of the iFROG method can be successfully used to remove the effects of timing errors and spectrally varying detection efficiency. Moreover, the accuracy and noise resilience of the new algorithm are shown to outperform retrieval based on the generalized projections algorithm, which is widely used as the standard method in FROG retrieval. The differential evolution algorithm is further validated with experimental data, measured with unamplified three-cycle pulses from a mode-locked Ti:sapphire laser. When group delay dispersion is additionally introduced in the beam path, the retrieval results show excellent agreement with independent measurements from a commercial pulse measurement device based on spectral phase interferometry for direct electric-field retrieval. Further experimental tests with strongly attenuated pulses indicate resilience of differential-evolution-based retrieval against massive measurement noise.
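The mutation/crossover/selection loop at the heart of differential evolution can be sketched as follows. This is a minimal, generic DE/rand/1/bin implementation applied to a toy curve-fitting problem, not the authors' iFROG retrieval code; the Gaussian intensity trace and its two parameters are hypothetical stand-ins for a real measured pulse.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin minimizer (the class of genetic method used for iFROG)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    costs = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # Pick three distinct members other than i for the mutation step.
            idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # ensure at least one mutated gene
            trial = np.where(cross, mutant, pop[i])
            tc = cost(trial)
            if tc <= costs[i]:                  # greedy one-to-one selection
                pop[i], costs[i] = trial, tc
    best = np.argmin(costs)
    return pop[best], costs[best]

# Toy stand-in for pulse retrieval: fit amplitude and duration of a Gaussian
# intensity trace to a "measured" one (hypothetical, far simpler than real iFROG).
t = np.linspace(-5.0, 5.0, 64)
def trace(p):
    amp, width = p
    return (amp * np.exp(-t**2 / (2.0 * width**2)))**2

target = trace([1.0, 1.3])
cost = lambda p: float(np.sum((trace(p) - target)**2))
best, err = differential_evolution(cost, [(0.1, 2.0), (0.5, 2.0)])
```

The same greedy-selection structure scales to the many-parameter spectral-phase retrieval described in the abstract; only the cost function (the iFROG trace error) changes.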

  12. Experimental Study of Water Transport through Hydrophilic Nanochannels

    NASA Astrophysics Data System (ADS)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2015-11-01

    In this paper, we investigate one of the fundamental aspects of nanofluidics: the experimental study of water transport through nanoscale hydrophilic conduits. A new method based on spontaneous filling and a novel hybrid nanochannel design is developed to measure the pure mass flow resistance of single nanofluidic channels/tubes. This method requires no pressure or flow sensors and does not rely on any theoretical estimates, and thus has the potential to become a standard for nanofluidic flow characterization. We have used this method to measure the pure mass flow resistance of single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our experimental results quantify the increased mass flow resistance as a function of nanochannel height, showing a 45% increase for a 7 nm channel compared with classical hydrodynamics, and suggest that the increased resistance is possibly due to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. It has been further shown that this method can reliably measure a wide range of pure mass flow resistances of nanoscale conduits, and thus is promising for advancing studies of liquid transport in hydrophobic graphene nanochannels, CNTs, as well as nanoporous media. The work is supported by the American Chemical Society Petroleum Research Fund (ACS PRF # 54118-DNI7) and the Faculty Startup Fund (Boston University, USA).
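The classical-hydrodynamics reference against which the measured resistance is compared can be sketched with the thin-channel (w >> h) no-slip lubrication formula R = 12*mu*L/(w*h^3). The length and width below are hypothetical example values; only the 7 nm height and the roughly 45% enhancement come from the abstract.

```python
# Classical (no-slip, bulk-viscosity) flow resistance of a shallow rectangular
# channel in the w >> h limit, in units of pressure per volume flow rate.
mu = 1.0e-3          # Pa s, water at room temperature
L  = 100e-6          # m, channel length (hypothetical example)
w  = 5e-6            # m, channel width (hypothetical example)
h  = 7e-9            # m, channel height, as in the abstract

R_classical = 12 * mu * L / (w * h**3)    # Pa s / m^3
R_enhanced  = 1.45 * R_classical          # the reported ~45% increase at h = 7 nm
```

The striking size of R_classical (order 10^24 Pa s/m^3 here) is why direct pressure/flow sensing is impractical at this scale, motivating the sensor-free filling method.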

  13. Experimental Study of Axially Tension Cold Formed Steel Channel Members

    NASA Astrophysics Data System (ADS)

    Apriani, Widya; Lubis, Fadrizal; Angraini, Muthia

    2017-12-01

    Experimental testing is commonly used as one of the steps to determine the cause of the collapse of a building structure. Structural collapse can be caused by low-quality materials: although material samples may have passed laboratory tests and met the existing technical specifications, there can be undetected defects that become known only after failure. This paper presents experimental testing of axially tensioned cold-formed steel channel members, carried out to determine the cause of the collapse of a building roof truss in Pekanbaru. Tensile tests of the cold-formed channel sections were performed to obtain the main material characteristics, namely the ultimate tensile load that the members can carry and the yield stress of the channel sections used in construction. The analysis of the axially tensioned cold-formed steel channel sections presented in this paper was conducted through an experimental study based on the Annual Book of ASTM Standards: Metal Test Methods and Analytical Procedures, Section 3 (1991). The experimentally determined load capacities were compared with designs based on SNI 03-7971-2013, the Indonesian standard for the design of cold-formed steel structural members, and the measured yield stresses were checked against the minimum allowable stress range. The theoretical ultimate axial tension capacity was found to be 16.46% larger than the experimental one. When compared with the load that must be carried, 5.673 kN/m, two specimens do not meet the requirement. The yield stress of the members fulfilled the requirement of being greater than 550 MPa, and the ultimate stress (fu) was reached under the same conditions as the yield stress. For the tested specimen, the yield stress fy = 571.5068 MPa satisfies the minimum value of 550 MPa required for standard mild steel materials in accordance with SNI 03-7971-2013 on cold-formed steel.

  15. Implicit integration methods for dislocation dynamics

    DOE PAGES

    Gardner, D. J.; Woodward, C. S.; Reynolds, D. R.; ...

    2015-01-20

    In dislocation dynamics simulations, strain hardening studies require integrating stiff systems of ordinary differential equations in time with expensive force calculations, discontinuous topological events, and rapidly changing problem size. The solvers currently in use often result in small time steps and long simulation times. Faster solvers may help dislocation dynamics simulations accumulate plastic strains at strain rates comparable to experimental observations. This paper investigates the viability of high-order implicit time integrators and robust nonlinear solvers to reduce simulation run times while maintaining the accuracy of the computed solution. In particular, implicit Runge-Kutta time integrators are explored as a way of providing greater accuracy over a larger time step than is typically achieved with the standard second-order trapezoidal method. In addition, both accelerated fixed-point iteration and Newton's method are investigated to provide fast and effective solves for the nonlinear systems that must be resolved within each time step. Results show that integrators of third order are the most effective, while accelerated fixed-point iteration and Newton's method both improve solver performance over the standard fixed-point method used for the solution of the nonlinear systems.
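The implicit-step-plus-Newton structure described above can be illustrated with the simplest implicit integrator, backward Euler, on a hypothetical stiff scalar ODE. The paper's integrators are higher-order implicit Runge-Kutta methods on much larger systems; this sketch only shows why a Newton solve inside each implicit step permits time steps far beyond the explicit stability limit.

```python
import math

# Hypothetical stiff test problem: y' = lam*(y - cos(t)) - sin(t), exact y = cos(t).
lam = -1000.0
f    = lambda t, y: lam * (y - math.cos(t)) - math.sin(t)
dfdy = lambda t, y: lam                     # Jacobian df/dy, needed by Newton

def backward_euler(y0, t0, t1, n):
    """Backward Euler; each step solves z = y + h*f(t+h, z) for z by Newton."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        t += h
        z = y                               # initial guess: previous value
        for _ in range(20):                 # Newton on g(z) = z - y - h*f(t, z)
            g = z - y - h * f(t, z)
            dg = 1.0 - h * dfdy(t, z)
            step = g / dg
            z -= step
            if abs(step) < 1e-13:
                break
        y = z
    return y

# h = 0.02 is ten times larger than the ~2/|lam| step an explicit method tolerates,
# yet the implicit step remains stable and accurate.
y_end = backward_euler(1.0, 0.0, 1.0, 50)
```

An accelerated fixed-point alternative would replace the Newton update with a damped or Anderson-accelerated iteration of z = y + h*f(t, z), trading Jacobian solves for more, cheaper iterations.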

  16. Monitoring the metering performance of an electronic voltage transformer on-line based on cyber-physics correlation analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang

    2017-10-01

    Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physics correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuation from that caused by an EVT anomaly. Characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method demonstrates accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
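The separation idea can be sketched with synthetic data, assuming (hypothetically) that primary-side fluctuations appear as a common-mode signal across the three symmetric phases while a metering anomaly drifts one phase only; this is an illustration of PCA-based separation, not the authors' algorithm or statistics.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
primary = 1.0 + 0.02 * rng.standard_normal(n)      # shared primary-side fluctuation
noise = 1e-4 * rng.standard_normal((n, 3))         # independent sensor noise
V = primary[:, None] * np.ones(3) + noise          # three-phase symmetric readings
drift = np.linspace(0.0, 5e-3, n)                  # hypothetical metering drift, phase A
V[:, 0] += drift

X = V - V.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
pc1_scores = U[:, 0] * s[0]                        # dominant common-mode component
residual = X - np.outer(pc1_scores, Vt[0])         # variation PC1 cannot explain
explained = s[0]**2 / np.sum(s**2)                 # fraction of variance in PC1
```

The first principal component absorbs the primary fluctuation common to all phases, so the slow per-phase drift survives in the residual, where simple statistics (trend, standard deviation) can flag the anomalous channel.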

  17. A new separation and preconcentration method for selenium in some foods using modified silica gel with 2,6-diamino-4-phenil-1,3,5-triazine.

    PubMed

    Mendil, Durali; Demirci, Zafer; Uluozlu, Ozgur Dogan; Tuzen, Mustafa; Soylak, Mustafa

    2017-04-15

    A novel and simple solid phase extraction method was developed and is recommended for selenium. Silica gel was modified with 2,6-diamino-4-phenil-1,3,5-triazine, characterized by FTIR, SEM and elemental analysis, and used as an adsorbent for column solid phase extraction of selenium ions. The experimental parameters (pH, flow rates, amount of modified silica gel, concentration and type of eluent, sample volume, etc.) affecting the recoveries of selenium were optimized. Standard reference materials were analyzed for validation of the method. The method was successfully applied to the determination of total selenium in water and in some microwave-digested food samples with quantitative recoveries (>95%). The relative standard deviations were <8%. Matrix influences were not observed. The adsorption capacity of the modified silica gel was 5.90 mg g-1. The LOD was 0.015 μg L-1. The enrichment factor obtained for the introduced method was 50. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Cost-Efficient Phase Noise Measurement

    NASA Astrophysics Data System (ADS)

    Perić, Ana; Bjelica, Milan

    2014-05-01

    In this paper, an automated system for oscillator phase noise measurement is described. The system is primarily intended for use in academic institutions, such as smaller university or research laboratories, as it deploys a standard spectrum analyzer and free software. A method to compensate for the effect of instrument intrinsic noise is proposed. Through a series of experimental tests, the good performance of our system is verified and compliance with theoretical expectations is demonstrated.
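One common way to compensate for instrument intrinsic noise, assuming the oscillator under test and the analyzer contribute uncorrelated noise whose powers add linearly, can be sketched as follows. This is an illustrative correction formula, not necessarily the exact compensation the authors implement.

```python
import math

def compensate_intrinsic_noise(L_meas_dbc, L_instr_dbc):
    """Subtract the analyzer's intrinsic noise power from a measured level.

    Both arguments are single-sideband phase noise levels in dBc/Hz.
    Assumes uncorrelated noise sources, so powers (not dB values) subtract.
    """
    p_meas = 10 ** (L_meas_dbc / 10)
    p_instr = 10 ** (L_instr_dbc / 10)
    if p_meas <= p_instr:
        raise ValueError("measured level at or below instrument noise floor")
    return 10 * math.log10(p_meas - p_instr)

# A DUT reading of -100 dBc/Hz on an analyzer whose floor is -110 dBc/Hz
# corrects to about -100.46 dBc/Hz.
corrected = compensate_intrinsic_noise(-100.0, -110.0)
```

The correction matters most when the measured level approaches the instrument floor; within a few dB of the floor the subtraction becomes ill-conditioned, which is why the margin check raises an error.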

  19. Monitoring trends in bird populations: addressing background levels of annual variability in counts

    Treesearch

    Jared Verner; Kathryn L. Purcell; Jennifer G. Turner

    1996-01-01

    Point counting has been widely accepted as a method for monitoring trends in bird populations. Using a rigorously standardized protocol at 210 counting stations at the San Joaquin Experimental Range, Madera Co., California, we have been studying sources of variability in point counts of birds. Vegetation types in the study area have not changed during the 11 years of...

  20. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    NASA Astrophysics Data System (ADS)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  1. A numerical and experimental comparison of human head phantoms for compliance testing of mobile telephone equipment.

    PubMed

    Christ, Andreas; Chavannes, Nicolas; Nikoloski, Neviana; Gerber, Hans-Ulrich; Poković, Katja; Kuster, Niels

    2005-02-01

    A new human head phantom has been proposed by CENELEC/IEEE, based on a large scale anthropometric survey. This phantom is compared to a homogeneous Generic Head Phantom and three high resolution anatomical head models with respect to specific absorption rate (SAR) assessment. The head phantoms are exposed to the radiation of a generic mobile phone (GMP) with different antenna types and a commercial mobile phone. The phones are placed in the standardized testing positions and operate at 900 and 1800 MHz. The average peak SAR is evaluated using both experimental (DASY3 near field scanner) and numerical (FDTD simulations) techniques. The numerical and experimental results compare well and confirm that the applied SAR assessment methods constitute a conservative approach.

  2. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations.

    PubMed

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-07

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  3. An experimental bioactive dental ceramic for metal-ceramic restorations: Textural characteristics and investigation of the mechanical properties.

    PubMed

    Goudouri, Ourania-Menti; Kontonasaki, Eleana; Papadopoulou, Lambrini; Manda, Marianthi; Kavouras, Panagiotis; Triantafyllidis, Konstantinos S; Stefanidou, Maria; Koidis, Petros; Paraskevopoulos, Konstantinos M

    2017-02-01

    The aim of this study was to evaluate the textural characteristics of an experimental sol-gel derived feldspathic dental ceramic that has already been proven bioactive, and to investigate its flexural strength through Weibull statistical analysis. The null hypothesis was that the flexural strengths of the experimental and the commercial dental ceramic would be of the same order, yielding a dental ceramic with apatite-forming ability and adequate mechanical integrity. Although the flexural strength of the experimental ceramic was not statistically significantly different from that of the commercial one, the amount of blind pores due to processing was greater. The textural characteristics of the experimental ceramic were in accordance with the standard low porosity levels reported for dental ceramics used for fixed prosthetic restorations. Feldspathic dental ceramics with typical textural characteristics and advanced mechanical properties, as well as enhanced apatite-forming ability, can be synthesized through the sol-gel method. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Standardization of the experimental autoimmune myasthenia gravis (EAMG) model by immunization of rats with Torpedo californica acetylcholine receptors — Recommendations for methods and experimental designs

    PubMed Central

    Losen, Mario; Martinez-Martinez, Pilar; Molenaar, Peter C.; Lazaridis, Konstantinos; Tzartos, Socrates; Brenner, Talma; Duan, Rui-Sheng; Luo, Jie; Lindstrom, Jon; Kusner, Linda

    2015-01-01

    Myasthenia gravis (MG) with antibodies against the acetylcholine receptor (AChR) is characterized by a chronic, fatigable weakness of voluntary muscles. The production of autoantibodies involves the dysregulation of T cells which provide the environment for the development of autoreactive B cells. The symptoms are caused by destruction of the postsynaptic membrane and degradation of the AChR by IgG autoantibodies, predominantly of the G1 and G3 subclasses. Active immunization of animals with AChR from mammalian muscles, AChR from Torpedo or Electrophorus electric organs, and recombinant or synthetic AChR fragments generates a chronic model of MG, termed experimental autoimmune myasthenia gravis (EAMG). This model covers cellular mechanisms involved in the immune response against the AChR, e.g. antigen presentation, T cell-help and regulation, B cell selection and differentiation into plasma cells. Our aim is to define standard operation procedures and recommendations for the rat EAMG model using purified AChR from the Torpedo californica electric organ, in order to facilitate more rapid translation of preclinical proof of concept or efficacy studies into clinical trials and, ultimately, clinical practice. PMID:25796590

  5. Fertility preservation options in breast cancer patients.

    PubMed

    Kasum, Miro; von Wolff, Michael; Franulić, Daniela; Čehić, Ermin; Klepac-Pulanić, Tajana; Orešković, Slavko; Juras, Josip

    2015-01-01

    The purpose of this review is to analyse current options for fertility preservation in young women with breast cancer (BC). Considering an increasing number of BC survivors, owing to improvements in cancer treatment and delaying of childbearing, fertility preservation appears to be an important issue. Current fertility preservation options in BC survivors range from well-established standard techniques to experimental or investigational interventions. Among the standard options, random-start ovarian stimulation protocol represents a new technique, which significantly decreases the total time of the in vitro fertilisation cycle. However, in patients with oestrogen-sensitive tumours, stimulation protocols using aromatase inhibitors are currently preferred over tamoxifen regimens. Cryopreservation of embryos and oocytes are nowadays deemed the most successful techniques for fertility preservation in BC patients. GnRH agonists during chemotherapy represent an experimental method for fertility preservation due to conflicting long-term outcome results regarding its safety and efficacy. Cryopreservation of ovarian tissue, in vitro maturation of immature oocytes and other strategies are considered experimental and should only be offered within the context of a clinical trial. An early pretreatment referral to reproductive endocrinologists and oncologists should be suggested to young BC women at risk of infertility, concerning the risks and benefits of fertility preservation options.

  6. Interfacial concentrations of chloride and bromide in zwitterionic micelles with opposite dipoles: experimental determination by chemical trapping and a theoretical description.

    PubMed

    de Souza, Tereza Pereira; Chaimovich, Hernan; Fahr, Alfred; Schweitzer, Bianca; Agostinho Neto, Augusto; Cuccovia, Iolanda Midea

    2012-04-01

    Interfacial concentrations of chloride and bromide ions, with Li(+), Na(+), K(+), Rb(+), Cs(+), trimethylammonium (TMA(+)), Ca(2+), and Mg(2+) as counterions, were determined by chemical trapping in micelles formed by two zwitterionic surfactants, namely N-hexadecyl-N,N-dimethyl-3-ammonio-1-propanesulfonate (HPS) and hexadecylphosphorylcholine (HDPC) micelles. Appropriate standard curves for the chemical trapping method were obtained by measuring the product yields of chloride and bromide salts with 2,4,6-trimethyl-benzenediazonium (BF(4)) in the presence of low molecular analogs (N,N,N-trimethyl-propane sulfonate and methyl-phosphorylcholine) of the employed surfactants. The experimentally determined values for the local Br(-) (Cl(-)) concentrations were modeled by fully integrated non-linear Poisson Boltzmann equations. The best fits to all experimental data were obtained by considering that ions at the interface are not fixed at an adsorption site but are free to move in the interfacial plane. In addition, the calculation of ion distribution allowed the estimation of the degree of ion coverage by using standard chemical potential differences accounting for ion specificity. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. DSMC study of oxygen shockwaves based on high-fidelity vibrational relaxation and dissociation models

    NASA Astrophysics Data System (ADS)

    Borges Sebastião, Israel; Kulakhmetov, Marat; Alexeenko, Alina

    2017-01-01

    This work evaluates high-fidelity vibrational-translational (VT) energy relaxation and dissociation models for pure O2 normal shockwave simulations with the direct simulation Monte Carlo (DSMC) method. The O2-O collisions are described using ab initio state-specific relaxation and dissociation models. The Macheret-Fridman (MF) dissociation model is adapted to the DSMC framework by modifying the standard implementation of the total collision energy (TCE) model. The O2-O2 dissociation is modeled with this TCE+MF approach, which is calibrated with O2-O ab initio data and experimental equilibrium dissociation rates. The O2-O2 vibrational relaxation is modeled via the Larsen-Borgnakke model, calibrated to experimental VT rates. All the present results are compared to experimental data and previous calculations available in the literature. It is found that, in general, the ab initio dissociation model is better than the TCE model at matching the shock experiments. Therefore, when available, efficient ab initio models are preferred over phenomenological models. We also show that the proposed TCE + MF formulation can be used to improve the standard TCE model results when ab initio data are not available or limited.

  8. Experimental Validation of Displacement Underestimation in ARFI Ultrasound

    PubMed Central

    Czernuszewicz, Tomasz J.; Streeter, Jason E.; Dayton, Paul A.; Gallippi, Caterina M.

    2014-01-01

    Acoustic radiation force impulse (ARFI) imaging is an elastography technique that uses ultrasonic pulses to both displace and track tissue motion. Previous modeling studies have shown that ARFI displacements are susceptible to underestimation due to lateral and elevational shearing that occurs within the tracking resolution cell. In this study, optical tracking was utilized to experimentally measure the displacement underestimation achieved by acoustic tracking using a clinical ultrasound system. Three optically translucent phantoms of varying stiffness were created, embedded with sub-wavelength diameter microspheres, and ARFI excitation pulses with F/1.5 or F/3 lateral focal configurations were transmitted from a standard linear array to induce phantom motion. Displacements were tracked using confocal optical and acoustic methods. As predicted by earlier FEM studies, significant acoustic displacement underestimation was observed for both excitation focal configurations; the maximum underestimation error was 35% of the optically measured displacement for the F/1.5 excitation pulse in the softest phantom. Using higher F/#, less tightly focused beams in the lateral dimension improved accuracy of displacements by approximately 10 percentage points. This work experimentally demonstrates limitations of ARFI implemented on a clinical scanner using a standard linear array and sets up a framework for future displacement tracking validation studies. PMID:23858054

  9. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. 
In addition, ISPOR members contributed to developing a consensus report by submitting written comments during the review process and oral comments during two forum presentations at the ISPOR 16th and 17th Annual International Meetings held in Baltimore (2011) and Washington, DC (2012). Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
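As a small illustration of the design-construction step the report discusses, a full-factorial candidate set for hypothetical attributes and levels can be enumerated directly; practical DCE designs are then selected from such a candidate set (e.g. orthogonal or D-efficient subsets), a step this sketch does not perform.

```python
from itertools import product

# Hypothetical attributes and levels for a discrete-choice experiment.
attributes = {
    "cost":     [10, 20, 30],
    "efficacy": ["low", "high"],
    "risk":     ["rare", "common"],
}

# Full factorial: every combination of levels, the usual starting candidate set.
full_factorial = [dict(zip(attributes, combo))
                  for combo in product(*attributes.values())]
# 3 * 2 * 2 = 12 candidate profiles; each level appears equally often by construction.
```

Level balance and orthogonality hold trivially in the full factorial; the design challenge the report surveys is preserving those identification properties while shrinking the set to a practical number of choice tasks.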

  10. Naturally occurring levels of elements in fishes as determined by PIXE and XRF methods

    NASA Astrophysics Data System (ADS)

    Tallandini, L.; Giacobini, F.; Turchetto, M.; Galassini, S.; Liu, Q. X.; Shao, H. R.; Moschini, G.; Moro, R.; Gialanella, G.; Ghermandi, G.; Cecchi, R.; Injuk, J.; Valković, V.

    1989-04-01

    Naturally occurring levels of S, Cl, K, Ca, Cr, Mn, Fe, Ni, Cu, Zn, As, Se, Br, Sb, Sr and Pb were measured in the gills, liver and muscles of fishes (Zosterisessor ophiocephalus Pall.) in the northwestern region of the Adriatic Sea. The overall performance of PIXE and XRF methods was tested by the analysis of standard reference materials. The mean concentration values for elements were calculated from the distribution of experimentally determined concentration values. The obtained data are discussed in the framework of metal metabolism and toxicology.

  11. Precision phase estimation based on weak-value amplification

    NASA Astrophysics Data System (ADS)

    Qiu, Xiaodong; Xie, Linguo; Liu, Xiong; Luo, Lan; Li, Zhaoxue; Zhang, Zhiyou; Du, Jinglei

    2017-02-01

    In this letter, we propose a precision method for phase estimation based on the weak-value amplification (WVA) technique using a monochromatic light source. The anomalous weak-value amplification significantly suppresses the technical noise in the intensity-difference signal induced by the phase delay when the post-selection procedure comes into play. The phase measurement precision of this method is proportional to the weak value of a polarization operator in the experimental range. Our results compare well with wide-spectrum-light weak measurements of phase and outperform the standard homodyne phase detection technique.

  12. Research on Time Selection of Mass Sports in Tibetan Areas Plateau of Gansu Province Based on Environmental Science

    NASA Astrophysics Data System (ADS)

    Gao, Jike

    2018-01-01

    Using literature review, instrument measurement, questionnaires and mathematical statistics, this paper analyzes the current situation of mass sports in the Tibetan plateau areas of Gansu Province. Taking experimentally measured air pollutant and meteorological index data for these areas as its foundation, checking them against the relevant national standards and exercise science, and statistically analyzing the results, the paper aims to provide people in the Tibetan plateau areas of Gansu Province with scientific methods and appropriate times for participating in physical exercise.

  13. Identification of Curie temperature distributions in magnetic particulate systems

    NASA Astrophysics Data System (ADS)

    Waters, J.; Berger, A.; Kramer, D.; Fangohr, H.; Hovorka, O.

    2017-09-01

    This paper develops a methodology for extracting the Curie temperature distribution from magnetisation-versus-temperature measurements which are realizable by standard laboratory magnetometry. The method is integral in nature, robust against various sources of measurement noise, and can be adapted to a wide range of granular magnetic materials and magnetic particle systems. Its validity and practicality are demonstrated using large-scale Monte-Carlo simulations of an Ising-like model as a proof of concept, and general conclusions are drawn about its applicability to different classes of systems and experimental conditions.
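The extraction idea can be sketched under a strongly idealized assumption: if each grain loses its moment abruptly at its own Curie temperature, the Tc distribution is simply the negative temperature derivative of the normalized M(T) curve. The grain parameters below are synthetic, and the paper's integral method is more elaborate and noise-robust than this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
tc_true = rng.normal(500.0, 20.0, 20000)   # hypothetical grain Curie temperatures, K
T = np.linspace(400.0, 600.0, 201)

# Idealized M(T): each grain carries full moment below its Tc and none above,
# so the normalized magnetization equals the fraction of grains with Tc > T.
M = np.array([(tc_true > temp).mean() for temp in T])

# Recover the Tc distribution as the negative derivative of M with respect to T.
pdf_est = -np.gradient(M, T)
tc_mean_est = np.sum(T * pdf_est) / np.sum(pdf_est)
```

Real M(T) curves broaden each grain's transition and add noise, which is why a naive numerical derivative degrades quickly and an integral, noise-tolerant formulation such as the paper's is preferable.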

  14. Calculating osmotic pressure of glucose solutions according to ASOG model and measuring it with air humidity osmometry.

    PubMed

    Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong

    2016-09-01

    The osmotic pressure of glucose solutions over a wide concentration range was calculated using the ASOG model and experimentally determined by our newly reported air humidity osmometry. The air humidity osmometry measurements were compared with the well-established freezing point osmometry and with ASOG model calculations at low concentrations, and with ASOG model calculations alone at high concentrations, where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations over a wide concentration range, while at low concentrations freezing point osmometry measurements provide better agreement with ASOG model calculations.

  15. Wavelength calibration of x-ray imaging crystal spectrometer on Joint Texas Experimental Tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, W.; Chen, Z. Y., E-mail: zychen@hust.edu.cn; Jin, W.

    2014-11-15

    The wavelength calibration of an x-ray imaging crystal spectrometer is a key issue for measurements of plasma rotation. Because no standard radiation source is available near 3.95 Å and no other diagnostic can measure the core rotation for inter-calibration, an indirect method using the tokamak plasma itself has been applied on the Joint Texas Experimental Tokamak. It is found that the core toroidal rotation velocity is not zero during the locked mode phase. This is consistent with the observation of small oscillations on soft x-ray signals and electron cyclotron emission during the locked-mode phase.

  16. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.
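    The error combination described above can be sketched as follows; a minimal illustration assuming a simple sample-wise sum of the two error terms, with hypothetical per-sample values rather than the flight data reported here:

```python
import statistics

# Hypothetical per-sample errors (metres) along one axis. The report combines
# the navigation position-estimate error and the autopilot flight technical
# error to form the total navigation error in that axis.
position_estimate_error = [350.0, 420.0, 380.0, 405.0, 395.0]
flight_technical_error = [20.0, -15.0, 10.0, -5.0, 0.0]

# Total navigation error per sample, then its mean and standard deviation,
# the two statistics quoted for each axis in the abstract.
total_error = [p + f for p, f in zip(position_estimate_error, flight_technical_error)]
mean_error = statistics.mean(total_error)
std_error = statistics.stdev(total_error)
```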

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.

  18. Towards standardized assessment of endoscope optical performance: geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Desai, Viraj N.; Ngo, Ying Z.; Cheng, Wei-Chung; Pfefer, Joshua

    2013-12-01

    Technological advances in endoscopes, such as capsule, ultrathin and disposable devices, promise significant improvements in safety, clinical effectiveness and patient acceptance. Unfortunately, the industry lacks test methods for preclinical evaluation of key optical performance characteristics (OPCs) of endoscopic devices that are quantitative, objective and well-validated. As a result, it is difficult for researchers and developers to compare image quality and evaluate equivalence to, or improvement upon, prior technologies. While endoscope OPCs include resolution, field of view, and depth of field, among others, our focus in this paper is geometric image distortion. We reviewed specific test methods for distortion and then developed an objective, quantitative test method based on well-defined experimental and data processing steps to evaluate radial distortion in the full field of view of an endoscopic imaging system. Our measurements and analyses showed that a second-degree polynomial equation could well describe the radial distortion curve of a traditional endoscope. The distortion evaluation method was effective for correcting the image and can be used to explain other widely accepted evaluation methods such as picture height distortion. Development of consensus standards based on promising test methods for image quality assessment, such as the method studied here, will facilitate clinical implementation of innovative endoscopic devices.
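    The second-degree polynomial fit mentioned above can be sketched as follows; the radial positions and distortion values are hypothetical illustrations, not measurements from the paper:

```python
import numpy as np

# Hypothetical measurements: radial (barrel) distortion in percent at several
# normalized radial positions r across the endoscope field of view.
r = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
distortion = np.array([0.0, -0.8, -3.1, -7.2, -12.9, -20.1])  # percent

# The paper reports that a second-degree polynomial describes the radial
# distortion curve well; fit D(r) = a*r**2 + b*r + c by least squares.
a, b, c = np.polyfit(r, distortion, deg=2)

# Predicted distortion at an arbitrary radius, usable to undo the distortion
# when correcting the image.
predicted = np.polyval([a, b, c], 0.5)
```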

  19. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  20. Development of method for experimental determination of wheel-rail contact forces and contact point position by using instrumented wheelset

    NASA Astrophysics Data System (ADS)

    Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.

    2017-07-01

    This paper presents the development of a unique method for experimental determination of wheel-rail contact forces and contact point position by using an instrumented wheelset (IWS). Solutions of key problems in the development of the IWS are proposed, such as the determination of optimal locations, layout, number and way of connecting strain gauges, as well as the development of an inverse identification algorithm (IIA). The basis for the solution of these problems is the wheel model and the results of FEM calculations, while the IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and high accuracy was obtained (deviations between parameters obtained with the IIA and the parameters actually applied in the model are less than 2%). In the second phase, experimental tests on the real object, i.e., the IWS, were carried out. The signal-to-noise ratio was identified as the main parameter influencing the measurement accuracy. The obtained results have shown that the developed method enables measurement of the vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of the contact point position is less than 15%. At flange contact and higher values of the ratio Y/Q or the Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high-accuracy measurement of wheel-rail contact forces and contact point position using an IWS.

  1. Use of Kinesiology Taping in Rehabilitation after Knee Arthroplasty: a Randomised Clinical Study.

    PubMed

    Woźniak-Czekierda, Weronika; Woźniak, Kamil; Hadamus, Anna; Białoszewski, Dariusz

    2017-10-31

    Proprioception and body balance after knee arthroplasty have a considerable impact on restoration of joint function and a normal gait pattern. Kinesiology Taping (KT) is a method that may be able to influence these factors. The aim of this study was to assess the effects of KT application on sensorimotor efficiency, balance and gait in patients undergoing rehabilitation after knee replacement surgery. The study involved 120 male and female patients (mean age 69 years) after total knee replacement. The patients were randomly assigned to one of two groups: Experimental Group (n=51) and Control Group (n=60). Both groups underwent standard rehabilitation lasting 20 days. In addition, the Experimental Group received KT applications. Treatment outcomes were assessed based on tests evaluating balance, joint position sense and functional gait performance, conducted both before and after the therapy. Statistically significant improvements were noted across all the parameters assessed in the Experimental Group (p<0.005). Significant improvements were also seen in the Control Group (p<0.005), but, in percentage terms, the improvement was higher in the Experimental Group. The only exception was the right/left foot load distribution, whose symmetry improved proportionally in both groups. 1. Patients after knee replacement surgery have considerable proprioception deficits, impaired body balance and reduced functional performance, which may increase the risk of falls in this group of patients. 2. Both standard physiotherapy and combination therapy with Kinesiology Taping (modified by the present authors) used in patients after knee arthroplasty may considerably improve the level of proprioception, body balance and overall functional performance. 3. The technique of dynamic taping proposed in this paper may optimise standard physiotherapy used in patients after knee arthroplasty and increase its clinical efficacy. Further studies are required.

  2. An Implementation-Focused Bio/Algorithmic Workflow for Synthetic Biology.

    PubMed

    Goñi-Moreno, Angel; Carcajona, Marta; Kim, Juhyun; Martínez-García, Esteban; Amos, Martyn; de Lorenzo, Víctor

    2016-10-21

    As synthetic biology moves away from trial and error and embraces more formal processes, workflows have emerged that cover the roadmap from conceptualization of a genetic device to its construction and measurement. This latter aspect (i.e., characterization and measurement of synthetic genetic constructs) has received relatively little attention to date, but it is crucial for their outcome. An end-to-end use case for engineering a simple synthetic device is presented, which is supported by information standards and computational methods and focuses on such characterization/measurement. This workflow captures the main stages of genetic device design and description and offers standardized tools for both population-based measurement and single-cell analysis. To this end, three separate aspects are addressed. First, the specific vector features are discussed. Although device/circuit design has been successfully automated, important structural information is usually overlooked, as in the case of plasmid vectors. The use of the Standard European Vector Architecture (SEVA) is advocated for selecting the optimal carrier of a design and its thorough description in order to unequivocally correlate digital definitions and molecular devices. A digital version of this plasmid format was developed with the Synthetic Biology Open Language (SBOL) along with a software tool that allows users to embed genetic parts in vector cargoes. This enables annotation of a mathematical model of the device's kinetic reactions formatted with the Systems Biology Markup Language (SBML). From that point onward, the experimental results and their in silico counterparts proceed alongside, with constant feedback to preserve consistency between them. A second aspect involves a framework for the calibration of fluorescence-based measurements. 
One of the most challenging endeavors in standardization, metrology, is tackled by reinterpreting the experimental output in light of simulation results, allowing us to turn arbitrary fluorescence units into relative measurements. Finally, integration of single-cell methods into a framework for multicellular simulation and measurement is addressed, allowing standardized inspection of the interplay between the carrier chassis and the culture conditions.

  3. Evaluation of extraction methods for ochratoxin A detection in cocoa beans employing HPLC.

    PubMed

    Mishra, Rupesh K; Catanante, Gaëlle; Hayat, Akhtar; Marty, Jean-Louis

    2016-01-01

    Cocoa is an important ingredient for the chocolate industry and for many food products. However, it is prone to contamination by ochratoxin A (OTA), which is highly toxic and potentially carcinogenic to humans. In this work, four different extraction methods were tested and compared based on their recoveries. The best protocol established involves an organic-solvent-free extraction for the detection of OTA in cocoa beans using 1% sodium hydrogen carbonate (NaHCO3) in water within 30 min. The extraction method is rapid (compared with existing methods), simple, reliable and practical to perform without complex experimental set-ups. The cocoa samples were freshly extracted and cleaned up using an immunoaffinity column (IAC) before HPLC analysis with a fluorescence detector. Under the optimised conditions, the limit of detection (LOD) and limit of quantification (LOQ) for OTA were 0.62 and 1.25 ng ml(-1), respectively, in standard solutions. The method could successfully quantify OTA in naturally contaminated samples. Moreover, good recoveries of OTA, up to 86.5%, were obtained in artificially spiked cocoa samples, with a maximum relative standard deviation (RSD) of 2.7%. The proposed extraction method could determine OTA at the level of 1.5 µg kg(-1), below the maximum level set by the European Union for cocoa (2 µg kg(-1)). In addition, an efficiency comparison of the IAC and a molecular imprinted polymer (MIP) column was also performed and evaluated.
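    A minimal sketch of the spike-recovery and relative-standard-deviation calculations behind figures like those reported above, using hypothetical replicate values rather than the paper's data:

```python
import statistics

# Hypothetical replicate measurements (ng/ml) of an artificially spiked
# cocoa extract; the spiked concentration is known in advance.
spiked_level = 2.0
measured = [1.70, 1.75, 1.68, 1.73, 1.72]

# Recovery: mean measured concentration as a percentage of the spiked level.
recovery_pct = 100.0 * statistics.mean(measured) / spiked_level

# RSD: sample standard deviation relative to the mean, in percent.
rsd_pct = 100.0 * statistics.stdev(measured) / statistics.mean(measured)
```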

  4. The Intramolecular Hydrogen Bond N-H···S in 2,2'-Diaminodiphenyl Disulfide: Experimental and Computational Thermochemistry.

    PubMed

    Ramos, Fernando; Flores, Henoc; Hernández-Pérez, Julio M; Sandoval-Lira, Jacinto; Camarillo, E Adriana

    2018-01-11

    The intramolecular hydrogen bond of the N-H···S type has been investigated sparingly by thermochemical and computational methods. In order to study this interaction, the standard molar enthalpies of formation in gaseous phase of diphenyl disulfide, 2,2'-diaminodiphenyl disulfide and 4,4'-diaminodiphenyl disulfide at T = 298.15 K were determined by experimental thermochemical methods and computational calculations. The experimental enthalpies of formation in gas-phase were obtained from enthalpies of formation in crystalline phase and enthalpies of sublimation. Enthalpies of formation in crystalline phase were obtained using rotatory bomb combustion calorimetry. By thermogravimetry, enthalpies of vaporization were obtained, and by combining them with enthalpies of fusion, the enthalpies of sublimation were calculated. The Gaussian-4 procedure and the atomization method were applied to obtain enthalpies of formation in gas-phase of the compounds under study. Theoretical and experimental values are in good agreement. Through natural bond orbital (NBO) analysis and a topological analysis of the electronic density, the intramolecular hydrogen bridge (N-H···S) in the 2,2'-diaminodiphenyl disulfide was confirmed. Finally, an enthalpic difference of 11.8 kJ·mol -1 between the 2,2'-diaminodiphenyl disulfide and 4,4'-diaminodiphenyl disulfide was found, which is attributed to the intramolecular N-H···S interaction.

  5. Experimenter effects on behavioral test scores of eight inbred mouse strains under the influence of ethanol

    PubMed Central

    Bohlen, Martin; Hayes, Erika R.; Bohlen, Benjamin; Bailoo, Jeremy; Crabbe, John C.; Wahlsten, Douglas

    2016-01-01

    Eight standard inbred mouse strains were evaluated for ethanol effects on a refined battery of behavioral tests in a study that was originally designed to assess the influence of rat odors in the colony on mouse behaviors. As part of the design of the study, two experimenters conducted the tests, and the study was carefully balanced so that equal numbers of mice in all groups and times of day were tested by each experimenter. A defect in airflow in the facility compromised the odor manipulation, and in fact the different odor exposure groups did not differ in their behaviors. The two experimenters, however, obtained markedly different results for three of the tests. Certain of the experimenter effects arose from the way they judged behaviors that were not automated and had to be rated by the experimenter, such as slips on the balance beam. Others were not evident prior to ethanol injection but had a major influence after the injection. For several measures, the experimenter effects were notably different for different inbred strains. Methods to evaluate and reduce the impact of experimenter effects in future research are discussed. PMID:24933191

  6. Experimenter effects on behavioral test scores of eight inbred mouse strains under the influence of ethanol.

    PubMed

    Bohlen, Martin; Hayes, Erika R; Bohlen, Benjamin; Bailoo, Jeremy D; Crabbe, John C; Wahlsten, Douglas

    2014-10-01

    Eight standard inbred mouse strains were evaluated for ethanol effects on a refined battery of behavioral tests in a study that was originally designed to assess the influence of rat odors in the colony on mouse behaviors. As part of the design of the study, two experimenters conducted the tests, and the study was carefully balanced so that equal numbers of mice in all groups and times of day were tested by each experimenter. A defect in airflow in the facility compromised the odor manipulation, and in fact the different odor exposure groups did not differ in their behaviors. The two experimenters, however, obtained markedly different results for three of the tests. Certain of the experimenter effects arose from the way they judged behaviors that were not automated and had to be rated by the experimenter, such as slips on the balance beam. Others were not evident prior to ethanol injection but had a major influence after the injection. For several measures, the experimenter effects were notably different for different inbred strains. Methods to evaluate and reduce the impact of experimenter effects in future research are discussed. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. [Determination of 27 elements in Maca nationality's medicine by microwave digestion ICP-MS].

    PubMed

    Yu, Gui-fang; Zhong, Hai-jie; Hu, Jun-hua; Wang, Jing; Huang, Wen-zhe; Wang, Zhen-zhong; Xiao, Wei

    2015-12-01

    An analysis method has been established to test 27 elements (Li, Be, B, Mg, Al, Sc, Ti, V, Cr, Mn, Fe, Co, Ni, Cu, Zn, Ga, As, Sr, Mo, Cd, Sn, Sb, Ba, La, Hg, Pb, Bi) in Maca nationality's medicine by microwave digestion ICP-MS. Sample solutions were analyzed by ICP-MS after microwave digestion, the contents of elements were calculated from their calibration curves, and the internal standard method was adopted to reduce matrix effects and other interferences. The experimental results showed that the linearity for all the elements was very good; the correlation coefficient (r) was 0.9994-1.0000 (0.9982 for Hg); the limits of detection were 0.003-2.662 microg x L(-1); the relative standard deviations of reproducibility were lower than 5% for all elements (except individual elements); and the recovery rates were 78.5%-123.7% with RSD lower than 5% (except individual elements). The analytical results for standard material showed acceptable agreement with the certified values. This method is applicable to the determination of multiple elements in Maca, with high sensitivity, good specificity and good repeatability, and provides a basis for the quality control of Maca.
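    The internal standard calibration described above can be sketched as follows; the counts and concentrations are hypothetical, and the point is that the analyte/internal-standard signal ratio, rather than the raw signal, is regressed against concentration to compensate for matrix effects and drift:

```python
import numpy as np

# Hypothetical ICP-MS calibration data: analyte counts and internal-standard
# (IS) counts for standards of known concentration (ug/L).
conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
analyte_counts = np.array([120.0, 1150.0, 5600.0, 11100.0, 55200.0])
is_counts = np.array([10000.0, 10100.0, 9900.0, 10050.0, 10000.0])

# Calibration curve on the signal ratio; r is the correlation coefficient
# of the kind quoted in the abstract.
ratio = analyte_counts / is_counts
slope, intercept = np.polyfit(conc, ratio, 1)
r = np.corrcoef(conc, ratio)[0, 1]

# Quantify an unknown sample from its measured analyte/IS ratio.
sample_ratio = 3050.0 / 9950.0
sample_conc = (sample_ratio - intercept) / slope
```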

  8. Rapid exploration of configuration space with diffusion-map-directed molecular dynamics.

    PubMed

    Zheng, Wenwei; Rohrdanz, Mary A; Clementi, Cecilia

    2013-10-24

    The gap between the time scale of interesting behavior in macromolecular systems and that which our computational resources can afford often limits molecular dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named diffusion-map-directed MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses a diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here, we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems, we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300 K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD.
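    A minimal diffusion-map sketch in the spirit of (but not reproducing) the DM-d-MD scheme above: points sampled from a noisy circle are embedded via the eigenvectors of a row-normalized Gaussian kernel, the construction DM-d-MD uses to identify poorly explored directions on the fly. All parameters here are illustrative assumptions.

```python
import numpy as np

# Sample 200 points from a noisy one-dimensional manifold (a circle),
# standing in for molecular configurations.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2.0 * np.pi, 200))
points = np.column_stack([np.cos(t), np.sin(t)]) + 0.01 * rng.normal(size=(200, 2))

# Gaussian kernel on squared pairwise distances (eps is a chosen scale).
d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
eps = 0.1
K = np.exp(-d2 / eps)

# Row-normalize to a Markov transition matrix and diagonalize; the top
# eigenvalue of a stochastic matrix is 1, and the first nontrivial
# eigenvector gives the leading diffusion coordinate.
P = K / K.sum(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
psi1 = eigvecs[:, order[1]].real
```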

  9. Rapid Exploration of Configuration Space with Diffusion Map-directed-Molecular Dynamics

    PubMed Central

    Zheng, Wenwei; Rohrdanz, Mary A.; Clementi, Cecilia

    2013-01-01

    The gap between the timescale of interesting behavior in macromolecular systems and that which our computational resources can afford oftentimes limits Molecular Dynamics (MD) from understanding experimental results and predicting what is inaccessible in experiments. In this paper, we introduce a new sampling scheme, named Diffusion Map-directed-MD (DM-d-MD), to rapidly explore molecular configuration space. The method uses diffusion map to guide MD on the fly. DM-d-MD can be combined with other methods to reconstruct the equilibrium free energy, and here we used umbrella sampling as an example. We present results from two systems: alanine dipeptide and alanine-12. In both systems we gain tremendous speedup with respect to standard MD both in exploring the configuration space and reconstructing the equilibrium distribution. In particular, we obtain 3 orders of magnitude of speedup over standard MD in the exploration of the configurational space of alanine-12 at 300K with DM-d-MD. The method is reaction coordinate free and minimally dependent on a priori knowledge of the system. We expect wide applications of DM-d-MD to other macromolecular systems in which equilibrium sampling is not affordable by standard MD. PMID:23865517

  10. Chapter 4- Fertility preservation in women with breast cancer

    PubMed Central

    Rodriguez-Wallberg, Kenny A.; Oktay, Kutluk

    2010-01-01

    Fertility preservation is an important issue for young women diagnosed with breast cancer. The most well-established options for fertility preservation in cancer patients, embryo and oocyte cryopreservation, have not traditionally been offered to breast cancer patients, as the estradiol rise during standard stimulation protocols may not be safe for those patients. Potentially safer stimulation protocols using tamoxifen and aromatase inhibitors induce lower levels of estradiol while yielding numbers of oocytes and embryos similar to standard protocols. Cryopreservation of immature oocytes and of ovarian cortical tissue, both still experimental methods, are also fertility preservation options for breast cancer patients. PMID:21048442

  11. High efficiency video coding for ultrasound video communication in m-health systems.

    PubMed

    Panayides, A; Antoniou, Z; Pattichis, M S; Pattichis, C S; Constantinides, A G

    2012-01-01

    Emerging high efficiency video compression methods and wider availability of wireless network infrastructure will significantly advance existing m-health applications. For medical video communications, the emerging video compression and network standards support low-delay and high-resolution video transmission, at the clinically acquired resolution and frame rates. Such advances are expected to further promote the adoption of m-health systems for remote diagnosis and emergency incidents in daily clinical practice. This paper compares the performance of the emerging high efficiency video coding (HEVC) standard to the current state-of-the-art H.264/AVC standard. The experimental evaluation, based on five atherosclerotic plaque ultrasound videos encoded at QCIF, CIF, and 4CIF resolutions, demonstrates that a 50% reduction in bitrate requirements is possible at equivalent clinical quality.

  12. Hydrogen Fuel Quality

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rockward, Tommy

    2012-07-16

    For the past 6 years, open discussions and/or meetings have been held and are still on-going with OEMs, hydrogen suppliers, other test facilities from the North America Team and international collaborators regarding experimental results, fuel clean-up cost, modeling, and analytical techniques to help determine levels of constituents for the development of an international standard for hydrogen fuel quality (ISO TC197 WG-12). Significant progress has been made. The process for the fuel standard is entering its final stages as a result of the technical accomplishments. The objectives are to: (1) determine the allowable levels of hydrogen fuel contaminants in support of the development of science-based international standards for hydrogen fuel quality (ISO TC197 WG-12); and (2) validate the ASTM test method for determining low levels of non-hydrogen constituents.

  13. Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China

    NASA Astrophysics Data System (ADS)

    Zhang, E.; Yin, X.

    2017-12-01

    One of the most challenging steps in analysing the virtual water content (VWC) of agricultural crops is properly assessing the volume of consumptive water use (CWU) for crop production. In practice, CWU is considered equivalent to the crop evapotranspiration (ETc). Following the crop coefficient method, ETc can be calculated under standard or non-standard conditions by multiplying the reference evapotranspiration (ET0) by one or a few coefficients. However, when actual crop growing conditions deviate from standard conditions, accurately determining the coefficients under non-standard conditions remains a complicated process and requires extensive field experimental data. Based on the regional surface water-energy balance, this research integrates the Budyko framework into the traditional crop coefficient approach to simplify determination of the coefficients. This new method enables us to assess the volume of agricultural VWC based only on hydrometeorological data and agricultural statistics at the regional scale. To demonstrate the new method, we apply it to the Shijiazhuang Plain, an agricultural irrigation area in the North China Plain. The VWC of winter wheat and summer maize is calculated, and we further subdivide VWC into blue and green water components. Compared with previous studies in this study area, VWC calculated by the Budyko-based crop coefficient approach uses less data and agrees well with some of the previous research. This new method may thus serve as a more convenient tool for assessing VWC.
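    The crop-coefficient step that the Budyko-based approach builds on can be sketched as follows, with illustrative numbers rather than the study's data: ETc = Kc × ET0, accumulated into seasonal CWU and divided by yield to give VWC:

```python
# Hypothetical daily reference evapotranspiration (mm) and a mid-season
# crop coefficient Kc; both values are illustrative assumptions.
et0_daily_mm = [3.1, 3.4, 4.0, 4.2, 3.8]
kc = 1.15

# Crop evapotranspiration ETc = Kc * ET0, summed into consumptive water use.
etc_daily_mm = [kc * et0 for et0 in et0_daily_mm]
cwu_mm = sum(etc_daily_mm)

# Convert to volume per area (1 mm over 1 ha = 10 m^3) and divide by yield
# to obtain the virtual water content of the crop.
yield_t_per_ha = 6.0
vwc_m3_per_t = cwu_mm * 10.0 / yield_t_per_ha
```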

  14. Distinguishing Buried Objects in Extremely Shallow Underground by Frequency Response Using Scanning Laser Doppler Vibrometer

    NASA Astrophysics Data System (ADS)

    Abe, Touma; Sugimoto, Tsuneyoshi

    2010-07-01

    Sound wave vibration measured with a scanning laser Doppler vibrometer is used as a method of exploring and imaging the extremely shallow underground. Flat speakers are used as the vibration source. We propose a method of distinguishing a buried object using the response range of frequencies corresponding to the vibration velocities. Buried objects (plastic containers, a hollow steel can, an unglazed pot, and a stone) are distinguished using their response ranges of frequencies. Standardization and brightness imaging are used as methods of discrimination. As a result, it was found that the buried objects show different response ranges of frequencies. From the experimental results, we confirmed the effectiveness of our proposed method.

  15. Measurements of the effective atomic numbers of minerals using bremsstrahlung produced by low-energy electrons

    NASA Astrophysics Data System (ADS)

    Czarnecki, S.; Williams, S.

    2017-12-01

    The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector-efficiency and maximum accelerating voltage. In order to test the method, experiments were performed which involved low-energy electrons incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was studied). Comparisons of the results to theoretical values suggest that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.

  16. Linear programming phase unwrapping for dual-wavelength digital holography.

    PubMed

    Wang, Zhaomin; Jiao, Jiannan; Qu, Weijuan; Yang, Fang; Li, Hongru; Tian, Ailing; Asundi, Anand

    2017-01-20

    A linear programming phase unwrapping method for dual-wavelength digital holography is proposed and verified experimentally. The proposed method uses the square of the height difference as a convergence criterion and theoretically gives the boundary condition of the search process. A simulation was performed by unwrapping step structures at different levels of Gaussian noise, showing that the method recovers discontinuities accurately and is robust and straightforward. In the experiment, a microelectromechanical systems sample and a cylindrical lens were measured separately, and the results were in good agreement with true values. Moreover, the proposed method is applicable not only to digital holography but also to other dual-wavelength interferometric techniques.
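
    The dual-wavelength principle that the abstract builds on can be sketched briefly. This does not reproduce the paper's linear-programming search; it only illustrates the standard synthetic-wavelength idea, with made-up wavelengths, under the assumption that the shorter wavelength is passed first.

    ```python
    import math

    # Two wrapped phase maps at lambda1 < lambda2 give a synthetic wavelength
    # Lambda = lambda1 * lambda2 / |lambda1 - lambda2|, which extends the
    # unambiguous height range well beyond either single wavelength.

    def synthetic_wavelength(lam1, lam2):
        return lam1 * lam2 / abs(lam1 - lam2)

    def height_from_phases(phi1, phi2, lam1, lam2):
        """Height from the wrapped phase difference (assumes lam1 < lam2)."""
        dphi = (phi1 - phi2) % (2.0 * math.pi)   # wrapped difference
        return dphi / (2.0 * math.pi) * synthetic_wavelength(lam1, lam2)

    lam1, lam2 = 532e-9, 633e-9                    # metres, illustrative
    big_lambda = synthetic_wavelength(lam1, lam2)  # ~3.3 micrometres
    ```

    Any height below `big_lambda` is recovered without ambiguity from the two wrapped phases, which is what makes the subsequent unwrapping search tractable.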

  17. Efficacy and safety of sunitinib alternate day regimen in patients with metastatic renal cell carcinoma in Japan: Comparison with standard 4/2 schedule.

    PubMed

    Ohba, Kojiro; Miyata, Yasuyoshi; Yasuda, Takuji; Asai, Akihiro; Mitsunari, Kensuke; Matsuo, Tomohiro; Mochizuki, Yasushi; Matsunaga, Noriko; Sakai, Hideki

    2018-06-01

    Sunitinib is a standard agent for metastatic renal cell carcinoma (mRCC). The standard schedule, 4 weeks on followed by 2 weeks off (4/2 schedule), often fails to maintain an adequate dosage because of severe adverse events (AEs). We compared the efficacy and safety of an alternative every-other-day (q.a.d.) dosing with that of the 4/2 schedule in mRCC patients. Of the 55 Japanese patients, 32 were administered the 4/2 schedule (standard group) and 23 the q.a.d. schedule (50 or 37.5 mg every other day; experimental group). The AEs, anticancer effects, and trough plasma concentrations of sunitinib were compared between the groups. The most common AE in the standard group was thrombocytopenia (43.2%), which was observed in only two patients in the experimental group (8.7%). Although leukopenia and hand-foot syndrome were each detected in six patients (18.8%) in the standard group, no patients had these AEs in the experimental group. The incidence of dose interruption in the experimental group (21.7%) was significantly lower than that in the standard group (59.4%, P = 0.005). Time to progression (TTP) and overall survival (OS) in the experimental group were better than in the standard group (P < 0.001 and P = 0.002, respectively). Mean plasma levels in the experimental group (64.83 ng/mL) were significantly lower than those in the standard group (135.82 ng/mL, P < 0.001). Sunitinib administered q.a.d. was safe and effective for mRCC patients. We speculate that persistent optimal drug plasma concentrations contributed to these effects. © 2018 The Authors. Asia-Pacific Journal of Clinical Oncology Published by John Wiley & Sons Australia, Ltd.

  18. Comparison of direct and indirect methods of measuring airborne chrysotile fibre concentration.

    PubMed

    Eypert-Blaison, Celine; Veissiere, Sylvie; Rastoix, Olivier; Kauffer, Edmond

    2010-01-01

    Transmission electron microscopy observations most frequently form a basis for estimating asbestos fibre concentration in the environment and in buildings with asbestos-containing materials. Sampled fibres can be transferred to microscope grids by applying either a direct [ISO (1995) Draft International ISO/DIS 10312. Ambient air. Determination of asbestos fibres. Direct transfer transmission electron microscopy procedure. Geneva, Switzerland: International Standardization Organization] or an indirect [AFNOR (1996) Détermination de la concentration en fibres d'amiante par microscopie électronique à transmission-Méthode indirecte. Cedex, France: AFNOR, p. 42; ISO (1997) Draft International ISO/DIS 13794. Ambient air. Determination of asbestos fibres. Indirect-transfer transmission electron microscopy procedure. Geneva, Switzerland: International Standardization Organization] method. In the latter case, ISO Standard 13794 recommends filtering calcination residues either on a polycarbonate (PC) filter (PC indirect method) or on a cellulose ester (CE) membrane (CE indirect method). The PC indirect method requires that fibres deposited on a PC filter be covered by a carbon layer, whereas in the CE indirect method, the CE membrane has to be directly processed using a method described in ISO Standard 10312. The purpose of this study was to compare results obtained using, on the one hand, direct preparation methods and, on the other hand, PC indirect or CE indirect methods, for counting asbestos fibres deposited on filters as a result of liquid filtration or air sampling. In direct method-based preparation, we observed that an etching time of 6-14 min does not affect the measured densities, except for fibres <1 microm deposited by liquid filtration. Moreover, in all cases, the direct method gives higher densities than the PC indirect method because of possible fibre disappearance when using the carbon evaporator implemented in the PC indirect method. 
The CE membrane used for sample preparation in the CE indirect method is collapsed prior to passing it through the carbon evaporator, so the fibres are less likely to disappear at this stage. We then note that the resulting fibre densities for chrysotile-loaded filters prepared using the direct method are close to those obtained with filters prepared using the CE indirect method. Our study therefore shows that, under the implemented experimental conditions, the PC and CE indirect preparation methods described in ISO Standard 13794 are not equivalent.

  19. Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method.

    PubMed

    Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong

    2017-02-28

    The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, these problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method, respectively. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Ni_n (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni_8 and Ni_9, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings than those of previous standard DFT studies.

  20. Experimental demonstration of tri-aperture Differential Synthetic Aperture Ladar

    NASA Astrophysics Data System (ADS)

    Zhao, Zhilong; Huang, Jianyu; Wu, Shudong; Wang, Kunpeng; Bai, Tao; Dai, Ze; Kong, Xinyi; Wu, Jin

    2017-04-01

    A tri-aperture Differential Synthetic Aperture Ladar (DSAL) is demonstrated in the laboratory, configured with one common aperture to transmit the illuminating laser and two along-track receiving apertures to collect the back-scattered laser signal for optical heterodyne detection. The image formation theory of this tri-aperture DSAL shows that there are two possible methods to reconstruct the azimuth Phase History Data (PHD) for aperture synthesis following the standard DSAL principle, each method resulting in a different matched filter and a different azimuth image resolution. The experimental setup adopts a frequency-chirped laser of about 40 mW in the 1550 nm wavelength range as the illuminating source, and an optical isolator composed of a polarizing beam-splitter and a quarter-wave plate to virtually align the three apertures in the along-track direction. Various DSAL images at target distances up to 12.9 m are demonstrated using both PHD reconstruction methods.

  1. Distributed phase birefringence measurements based on polarization correlation in phase-sensitive optical time-domain reflectometers.

    PubMed

    Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc

    2015-09-21

    In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.
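
    The mapping from the correlation-peak detuning to a local birefringence value can be sketched with a commonly used relation; this is an illustration with assumed numbers, not the authors' processing chain.

    ```python
    # Sketch: the frequency detuning of the correlation peak between the two
    # orthogonally polarized phi-OTDR spectra maps to the local phase
    # birefringence via delta_n / n = delta_nu / nu (an assumed relation for
    # this illustration).

    C = 299_792_458.0          # speed of light, m/s

    def birefringence_from_detuning(detuning_hz, wavelength_m, index):
        """Phase birefringence implied by a correlation-peak detuning."""
        nu = C / wavelength_m                  # optical frequency
        return index * detuning_hz / nu

    # A polarization-maintaining fibre typically shows delta_n ~ 5e-4;
    # a standard single-mode fibre is orders of magnitude lower.
    dn = birefringence_from_detuning(detuning_hz=50e9,
                                     wavelength_m=1550e-9,
                                     index=1.468)
    ```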

  2. A novel series of thiosemicarbazone drugs: From synthesis to structure

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Hossein Pasha; Hadi, Jabbar S.; Alsalim, Tahseen A.; Ghali, Thaer S.; Bolandnazar, Zeinab

    2015-02-01

    A new series of thiosemicarbazones (TSCs) and their 1,3,4-thiadiazolines (TDZs) containing an acetamide group have been synthesized from thiosemicarbazide compounds by the reaction of TSCs with cyclic ketones as well as aromatic aldehydes. The structures of the newly synthesized 1,3,4-thiadiazole derivatives, obtained by heterocyclization of the TSCs with acetic anhydride, were characterized experimentally by IR, 1H NMR, 13C NMR, and mass spectroscopic methods. Furthermore, the structural, thermodynamic, and electronic properties of the compounds were studied theoretically using Density Functional Theory (DFT) to obtain results comparable with the experimental values. The molecular geometry, the highest occupied molecular orbital (HOMO), the lowest unoccupied molecular orbital (LUMO), and Mulliken atomic charges were calculated with the B3LYP method and the standard 6-31+G(d,p) basis set, starting from the optimized geometry. Theoretical 13C chemical shifts were also calculated using the gauge-independent atomic orbital (GIAO) approach, and their respective linear correlations were obtained.

  3. Experimental determination of self-heating and self-ignition risks associated with the dusts of agricultural materials commonly stored in silos.

    PubMed

    Ramírez, Alvaro; García-Torrent, Javier; Tascón, Alberto

    2010-03-15

    Agricultural products stored in silos, and their dusts, can undergo oxidation and self-heating, increasing the risk of self-ignition and therefore of fires and explosions. The aim of the present work was to determine the thermal susceptibility (as reflected by the Maciejasz index, the temperature of the emission of flammable volatile substances and the combined information provided by the apparent activation energy and the oxidation temperature) of icing sugar, bread-making flour, maize, wheat, barley, alfalfa, and soybean dusts, using experimental methods for the characterisation of different types of coal (no standardised procedure exists for characterising the thermal susceptibility of either coal or agricultural products). In addition, the thermal stability of wheat, i.e., the risk of self-ignition determined as a function of sample volume, ignition temperature and storage time, was determined using the methods outlined in standard EN 15188:2007. The advantages and drawbacks of the different methods used are discussed. (c) 2009 Elsevier B.V. All rights reserved.

  4. From nationwide standardized testing to school-based alternative embedded assessment in Israel: Students' performance in the matriculation 2000 project

    NASA Astrophysics Data System (ADS)

    Dori, Yehudit J.

    2003-01-01

    Matriculation 2000 was a 5-year project aimed at moving from the nationwide traditional examination system in Israel to a school-based alternative embedded assessment. Encompassing 22 high schools from various communities in the country, the Project aimed at fostering deep understanding, higher-order thinking skills, and students' engagement in learning through alternative teaching and embedded assessment methods. This article describes research conducted during the fifth year of the Project at 2 experimental and 2 control schools. The research objective was to investigate students' learning outcomes in chemistry and biology in the Matriculation 2000 Project. The assumption was that alternative embedded assessment has some effect on students' performance. The experimental students scored significantly higher than their control group peers on low-level assignments and more so on assignments that required higher-order thinking skills. The findings indicate that given adequate support and teachers' consent and collaboration, schools can transfer from nationwide or statewide standardized testing to school-based alternative embedded assessment.

  5. Mechanical Properties Analysis of 4340 Steel Specimen Heat Treated in Oven and Quenching in Three Different Fluids

    NASA Astrophysics Data System (ADS)

    Fakir, Rachid; Barka, Noureddine; Brousseau, Jean

    2018-03-01

    This paper proposes a statistical approach to analyzing the mechanical properties of a standard test specimen of cylindrical geometry, made of 4340 steel with a diameter of 6 mm, heat-treated and quenched in three different fluids. Samples were evaluated in standard tensile tests to assess their characteristic quantities: hardness, modulus of elasticity, yield strength, tensile strength, and ultimate deformation. The proposed approach is built up gradually through (a) a presentation of the experimental device, (b) a presentation of the experimental plan and the results of the mechanical tests, (c) an analysis of variance (ANOVA) and a representation of the output responses using the response surface method (RSM), and (d) an analysis of the results and discussion. The feasibility and effectiveness of the proposed approach lead to a precise and reliable model capable of predicting the variation of mechanical properties as a function of the tempering temperature, the tempering time, and the cooling capacity of the quenching medium.

  6. Mechanical Property Evaluation of Palm/Glass Sandwiched Fiber Reinforced Polymer Composite in Comparison with few natural composites

    NASA Astrophysics Data System (ADS)

    Raja Dhas, J. Edwin; Pradeep, P.

    2017-10-01

    Natural fibers, available in abundance, can be used as reinforcements in the development of eco-friendly polymer composites. In this work, the less utilized palm leaf stalk fibers, sandwiched with artificial glass fibers, were investigated as reinforcement for a green composite. Commercially available polyester resin blended with coconut shell filler in nano form was used as the matrix to sandwich these composites. Naturally available fibers of palm leaf stalk, coconut leaf stalk, raffia, and oil palm were extracted and treated with potassium permanganate solution, which enhances their properties. For experimentation, four different plates were fabricated from these fibers by the hand lay-up method. The sandwiched composite plates were then machined into ASTM-standard specimens, which were mechanically tested per the standards. Experimental results reveal that the alkali-treated palm leaf stalk fiber polymer composite shows appreciably better results than the others. Hence the developed composite can be recommended for the fabrication of automobile parts.

  7. Hybrid density functional theory band structure engineering in hematite

    NASA Astrophysics Data System (ADS)

    Pozun, Zachary D.; Henkelman, Graeme

    2011-06-01

    We present a hybrid density functional theory (DFT) study of doping effects in α-Fe2O3, hematite. Standard DFT underestimates the band gap by roughly 75% and incorrectly identifies hematite as a Mott-Hubbard insulator. Hybrid DFT accurately predicts the proper structural, magnetic, and electronic properties of hematite and, unlike the DFT+U method, does not contain d-electron specific empirical parameters. We find that using a screened functional that smoothly transitions from 12% exact exchange at short ranges to standard DFT at long range accurately reproduces the experimental band gap and other material properties. We then show that the antiferromagnetic symmetry in the pure α-Fe2O3 crystal is broken by all dopants and that the ligand field theory correctly predicts local magnetic moments on the dopants. We characterize the resulting band gaps for hematite doped by transition metals and the p-block post-transition metals. The specific case of Pd doping is investigated in order to correlate calculated doping energies and optical properties with experimentally observed photocatalytic behavior.

  8. Experimental study on performance verification tests for coordinate measuring systems with optical distance sensors

    NASA Astrophysics Data System (ADS)

    Carmignato, Simone

    2009-01-01

    Optical sensors are increasingly used for dimensional and geometrical metrology. However, the lack of international standards for testing optical coordinate measuring systems is currently limiting the traceability of measurements and the easy comparison of different optical systems. This paper presents an experimental investigation on artefacts and procedures for testing coordinate measuring systems equipped with optical distance sensors. The work is aimed at contributing to the standardization of testing methods. The VDI/VDE 2617-6.2:2005 guideline, which is probably the most complete document available at the state of the art for testing systems with optical distance sensors, is examined with specific experiments. Results from the experiments are discussed, with particular reference to the tests used for determining the following characteristics: error of indication for size measurement, probing error and structural resolution. Particular attention is given to the use of artefacts alternative to gauge blocks for determining the error of indication for size measurement.

  9. Bi-harmonic cantilever design for improved measurement sensitivity in tapping-mode atomic force microscopy.

    PubMed

    Loganathan, Muthukumaran; Bristow, Douglas A

    2014-04-01

    This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.
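
    The wide-valley trajectory that the abstract describes can be illustrated numerically. The amplitudes and relative phase below are assumptions chosen so the effect is visible, not the paper's optimized values.

    ```python
    import math

    # Illustrative bi-harmonic tapping trajectory: adding a second harmonic
    # at twice the drive frequency flattens the bottom of the oscillation
    # into a "wide valley", so the tip dwells longer near the sample.

    def biharmonic(t, a1=1.0, a2=0.25, f=1.0, phase=-math.pi / 2.0):
        w = 2.0 * math.pi * f
        return a1 * math.sin(w * t) + a2 * math.sin(2.0 * w * t + phase)

    ts = [i / 1000.0 for i in range(1000)]          # one period at f = 1
    single = [math.sin(2.0 * math.pi * t) for t in ts]
    double = [biharmonic(t) for t in ts]

    def valley_fraction(z, tol=0.1):
        """Fraction of the period spent within tol of the minimum."""
        zmin = min(z)
        return sum(1 for v in z if v < zmin + tol) / len(z)
    ```

    With these values the bi-harmonic trajectory spends roughly twice as long near its minimum as the single-harmonic one, which is the qualitative source of the sensitivity gain discussed above.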

  10. Automatic patient-adaptive bleeding detection in a capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Jung, Yun Sub; Kim, Yong Ho; Lee, Dong Ha; Lee, Sang Ho; Song, Jeong Joo; Kim, Jong Hyo

    2009-02-01

    We present a method for patient-adaptive detection of bleeding regions in Capsule Endoscopy (CE) images. The CE system has 320x320 resolution and transmits 3 images per second to a receiver over a period of around 10 hours. We previously developed a technique to detect bleeding automatically using a color spectrum transformation (CST) method. However, because of irregular conditions such as organ differences, patient differences, and illumination conditions, detection performance is not uniform. To solve this problem, the detection method in this paper includes a parameter compensation step that compensates for irregular image conditions using a color balance index (CBI). We investigated color balance across 2 million sequential images. Based on this preliminary result, we defined ΔCBI to represent the deviation of color balance from the standard small-bowel color balance. The ΔCBI feature value is extracted from each image and used in the CST method as a parameter compensation constant. After candidate pixels were detected using the CST method, they were labeled and examined for bleeding characteristics. We tested our method on 4,800 images from 12 patient data sets (9 abnormal, 3 normal). The experimental results show sensitivity and specificity of 80.87% and 74.25% before the patient-adaptive method, and 94.87% and 96.12% after it.

  11. Feasibility and Initial Dosimetric Findings for a Randomized Trial Using Dose-Painted Multiparametric Magnetic Resonance Imaging–Defined Targets in Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bossart, Elizabeth L., E-mail: EBossart@med.miami.edu; Stoyanova, Radka; Sandler, Kiri

    2016-06-01

    Purpose: To compare dosimetric characteristics with multiparametric magnetic resonance imaging–identified imaging tumor volume (gross tumor volume, GTV), prostate clinical target volume and planning target volume, and organs at risk (OARs) for 2 treatment techniques representing 2 arms of an institutional phase 3 randomized trial of hypofractionated external beam image guided highly targeted radiation therapy. Methods and Materials: Group 1 (n=20) patients were treated before the trial inception with the standard dose prescription. Each patient had an additional treatment plan generated per the experimental arm. A total of 40 treatment plans were compared (20 plans for each technique). Group 2 (n=15) consists of patients currently accrued to the hypofractionated external beam image guided highly targeted radiation therapy trial. Plans were created as per the treatment arm, with additional plans for 5 of the group 2 experimental arm with a 3-mm expansion in the imaging GTV. Results: For all plans in both patient groups, planning target volume coverage ranged from 95% to 100%; GTV coverage of 89.3 Gy for the experimental treatment plans ranged from 95.2% to 99.8%. For both groups 1 and 2, the percent volumes of rectum/anus and bladder receiving 40 Gy, 65 Gy, and 80 Gy were smaller in the experimental plans than in the standard plans. The percent volume at 1 Gy per fraction and 1.625 Gy per fraction were compared between the standard and the experimental arms, and these were found to be equivalent. Conclusions: The dose per fraction to the OARs can be made equal even when giving a large simultaneous integrated boost to the GTV. The data suggest that a GTV margin may be added without significant dose effects on the OARs.

  12. Quantitative Examination of Corrosion Damage by Means of Thermal Response Measurements

    NASA Technical Reports Server (NTRS)

    Rajic, Nik

    1998-01-01

    Two computational methods are presented that enable a characterization of corrosion damage to be performed from thermal response measurements derived from a standard flash thermographic inspection. The first is based upon a one dimensional analytical solution to the heat diffusion equation and presumes the lateral extent of damage is large compared to the residual structural thickness, such that lateral heat diffusion effects can be considered insignificant. The second proposed method, based on a finite element optimization scheme, addresses the more general case where these conditions are not met. Results from an experimental application are given to illustrate the precision, robustness and practical efficacy of both methods.
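
    A flavor of the 1D inference involved can be given with a classic flash relation. Note this is Parker's through-transmission formula, used here only as a stand-in illustration of thickness estimation from a thermal response; it is not the paper's one-sided analytical solution, and the diffusivity and timing values are invented.

    ```python
    import math

    # Parker's flash relation ties plate thickness L, thermal diffusivity
    # alpha, and the rear-face half-rise time t_half:
    #   alpha = 1.388 * L**2 / (pi**2 * t_half)
    # Inverting it gives a residual-thickness estimate from a measured t_half.

    def residual_thickness(alpha_m2_s, t_half_s):
        """Thickness (m) implied by a rear-face half-rise time."""
        return math.sqrt(math.pi ** 2 * alpha_m2_s * t_half_s / 1.388)

    # Aluminium-like diffusivity (illustrative): alpha ~ 6.9e-5 m^2/s.
    L = residual_thickness(6.9e-5, t_half_s=0.02)   # a few millimetres
    ```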

  13. 3D topography of biologic tissue by multiview imaging and structured light illumination

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Zhang, Shiwu; Xu, Ronald

    2014-02-01

    Obtaining three-dimensional (3D) information of biologic tissue is important in many medical applications. This paper presents two methods for reconstructing 3D topography of biologic tissue: multiview imaging and structured light illumination. For each method, the working principle is introduced, followed by experimental validation on a diabetic foot model. To compare the performance characteristics of these two imaging methods, a coordinate measuring machine (CMM) is used as a standard control. The wound surface topography of the diabetic foot model is measured by multiview imaging and structured light illumination methods respectively and compared with the CMM measurements. The comparison results show that the structured light illumination method is a promising technique for 3D topographic imaging of biologic tissue.

  14. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  15. 21 CFR 130.17 - Temporary permits for interstate shipment of experimental packs of food varying from the...

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... experimental packs of food varying from the requirements of definitions and standards of identity. 130.17... standards of identity. (a) The Food and Drug Administration recognizes that before petitions to amend food... of food varying from applicable definitions and standards of identity prescribed under section 401 of...

  16. 21 CFR 130.17 - Temporary permits for interstate shipment of experimental packs of food varying from the...

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... experimental packs of food varying from the requirements of definitions and standards of identity. 130.17... standards of identity. (a) The Food and Drug Administration recognizes that before petitions to amend food... of food varying from applicable definitions and standards of identity prescribed under section 401 of...

  17. Estimation of size of red blood cell aggregates using backscattering property of high-frequency ultrasound: In vivo evaluation

    NASA Astrophysics Data System (ADS)

    Kurokawa, Yusaku; Taki, Hirofumi; Yashiro, Satoshi; Nagasawa, Kan; Ishigaki, Yasushi; Kanai, Hiroshi

    2016-07-01

    We propose a method for assessment of the degree of red blood cell (RBC) aggregation using the backscattering property of high-frequency ultrasound. In this method, the scattering property of RBCs is extracted from the power spectrum of RBC echoes normalized by that from the posterior wall of a vein. In an experimental study using a phantom, employing the proposed method, the sizes of microspheres 5 and 20 µm in diameter were estimated to have mean values of 4.7 and 17.3 µm and standard deviations of 1.9 and 1.4 µm, respectively. In an in vivo experimental study, we compared the results between three healthy subjects and four diabetic patients. The average estimated scatterer diameters in healthy subjects at rest and during avascularization were 7 and 28 µm, respectively. In contrast, those in diabetic patients receiving both antithrombotic therapy and insulin therapy were 11 and 46 µm, respectively. These results show that the proposed method has high potential for clinical application to assess RBC aggregation, which may be related to the progress of diabetes.

  18. Mode Identification of High-Amplitude Pressure Waves in Liquid Rocket Engines

    NASA Astrophysics Data System (ADS)

    EBRAHIMI, R.; MAZAHERI, K.; GHAFOURIAN, A.

    2000-01-01

    Identification of existing instability modes from experimental pressure measurements of rocket engines is difficult, especially when steep waves are present. Actual pressure waves are often non-linear and include steep shocks followed by gradual expansions. It is generally believed that the interaction of these non-linear waves is difficult to analyze. A method of mode identification is introduced. After presumption of the constituent modes, they are superposed using a standard finite difference scheme for the solution of the classical wave equation. Waves are numerically produced at each end of the combustion tube with different wavelengths, amplitudes, and phases with respect to each other. Pressure amplitude histories and phase diagrams along the tube are computed. To determine the validity of the presented method for steep non-linear waves, the Euler equations are solved numerically for non-linear waves, and negligible interactions between these waves are observed. To show the applicability of this method, others' experimental results in which modes were identified are used. The results indicate that this simple method can be used in analyzing complicated pressure signal measurements.
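
    The kind of scheme the abstract describes can be sketched minimally: the classical 1D wave equation advanced with the standard explicit leapfrog stencil, with waves injected at the tube ends as boundary values. All parameters are illustrative; the point of the sketch is that a linear scheme superposes end-driven waves without interaction.

    ```python
    import math

    def simulate(nx, nt, courant, left_drive, right_drive):
        """Leapfrog scheme for u_tt = c^2 u_xx; returns the final field."""
        u_prev = [0.0] * nx
        u = [0.0] * nx
        for n in range(nt):
            u_next = [0.0] * nx
            for i in range(1, nx - 1):
                u_next[i] = (2.0 * u[i] - u_prev[i]
                             + courant ** 2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
            u_next[0] = left_drive(n)      # wave injected at the left end
            u_next[-1] = right_drive(n)    # wave injected at the right end
            u_prev, u = u, u_next
        return u

    drive_a = lambda n: 0.3 * math.sin(0.20 * n)
    drive_b = lambda n: 0.2 * math.sin(0.35 * n)
    zero = lambda n: 0.0

    ua = simulate(100, 400, 0.9, drive_a, zero)    # left drive only
    ub = simulate(100, 400, 0.9, zero, drive_b)    # right drive only
    uab = simulate(100, 400, 0.9, drive_a, drive_b)  # both together
    ```

    Comparing `uab` with `ua + ub` element-wise shows the superposed solution equals the sum of the individual ones, the property the mode-identification procedure relies on.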

  19. Online Determination of Trace Amounts of Tannic Acid in Colored Tannery Wastewaters by Automatic Reference Flow Injection Analysis

    PubMed Central

    Wei, Liang

    2010-01-01

    A simple, rapid, and sensitive method is proposed for the online determination of tannic acid in colored tannery wastewater by automatic reference flow injection analysis. The method is based on the reduction of phosphotungstic acid by tannic acid to form a blue compound in alkaline solution at pH 12.38; the intensity of the blue compound is linearly related to the tannic acid content at the absorption maximum of 760 nm. The optimal experimental conditions were determined. The linear range of the proposed method was from 200 μg L−1 to 80 mg L−1, and the detection limit was 0.58 μg L−1. The relative standard deviation was 3.08% and 2.43% for 500 μg L−1 and 40 mg L−1 tannic acid standard solutions, respectively (n = 10). The method was successfully applied to the determination of tannic acid in colored tannery wastewaters, and the analytical results were satisfactory. PMID:20508812
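
    Quantification in a method like this rests on fitting a linear absorbance-vs-concentration relation from standards and inverting it for unknowns. The sketch below uses invented calibration data purely for illustration.

    ```python
    # Illustrative linear calibration (ordinary least squares) and inversion.

    def fit_line(xs, ys):
        """Least-squares slope and intercept for y = slope * x + intercept."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sxy / sxx
        return slope, my - slope * mx

    # Invented standards: concentration (mg/L) vs absorbance at 760 nm.
    conc = [0.0, 10.0, 20.0, 40.0, 80.0]
    absb = [0.002, 0.101, 0.198, 0.402, 0.799]
    slope, intercept = fit_line(conc, absb)

    def concentration(absorbance):
        """Invert the calibration line for an unknown sample."""
        return (absorbance - intercept) / slope

    unknown = concentration(0.30)     # roughly 30 mg/L with this data
    ```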

  20. Geant4 simulations of NIST beam neutron lifetime experiment

    NASA Astrophysics Data System (ADS)

    Valete, Daniel; Crawford, Bret; BL2 Collaboration Collaboration

    2017-09-01

    A free neutron is unstable, and its decay is described by the Standard Model as the transformation of a down quark into an up quark through the weak interaction. Precise measurements of the neutron lifetime test the validity of the theory of the weak interaction and provide useful information for the predictions of Big Bang nucleosynthesis of the primordial helium abundance in the universe and the number of different types of light neutrinos Nν. The predominant experimental methods for determining the neutron lifetime are commonly called the `beam' and `bottle' methods, and the most recent results of each method do not agree with each other within their stated uncertainties. An improved experiment of the beam technique, which uses magnetic and electric fields to trap and guide the decay protons of a beam of cold neutrons to a detector, is in progress at the National Institute of Standards and Technology, Gaithersburg, MD, with a precision goal of 0.1. I acknowledge the support of the Cross-Disciplinary Institute at Gettysburg College.

  1. [A study of biochemical method for urine test based on color difference estimation].

    PubMed

    Wang, Chunhong; Zhou, Yue; Zhao, Hongxia; Zhou, Fengkun

    2008-02-01

    The biochemical analysis of urine is an important inspection and diagnosis method in hospitals. Conventional urine analysis relies on either colorimetric visual assessment, which has largely been superseded, or automated detection, which is the method now adopted in hospitals; however, the price of a urine biochemical analyzer on the market is around twenty thousand RMB yuan (¥), making it impractical for ordinary families. A computer vision system is not subject to the physiological and psychological influences that affect a human observer, so its assessment standard is objective and stable. Therefore, based on color theory, we established a computer vision system that performs collection, management, display and assessment of the color difference between a standard threshold color and the color of a urine test strip after reaction with the urine sample, so that the severity of an illness can be judged accurately. In this paper, we introduce this urine test biochemical analysis method, which is new and suitable for home use. Experimental results show that the test method is easy to use and cost-effective; it can support monitoring over a whole course of illness and has the potential for wide application.
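    The grading step described above reduces to a nearest-color search. A minimal sketch, assuming colors are already converted to CIE L*a*b* and using the simple CIE76 color-difference formula (the paper does not specify its formula, and the level names and threshold values below are purely illustrative):

```python
import math

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

def grade(sample_lab, standard_labs):
    """Return the level whose standard threshold color is nearest the sample.

    standard_labs: dict mapping level name -> (L*, a*, b*) threshold color.
    """
    return min(standard_labs,
               key=lambda k: delta_e_cie76(sample_lab, standard_labs[k]))

# Hypothetical threshold colors for one test-strip pad:
standards = {"negative": (85.0, -2.0, 10.0),
             "trace":    (75.0,  5.0, 25.0),
             "positive": (55.0, 20.0, 40.0)}
print(grade((76.0, 4.0, 24.0), standards))  # nearest standard: "trace"
```

    In practice a camera-based system would also need white-balance calibration and an sRGB-to-Lab conversion before this comparison.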

  2. Capacitive sensor for engine oil deterioration measurement

    NASA Astrophysics Data System (ADS)

    Shinde, Harish; Bewoor, Anand

    2018-04-01

    A simple system for monitoring engine oil (lubricating oil) deterioration is needed. Engine oil is a critical element in IC engines and is exposed to various stresses depending on the operating conditions. If it becomes contaminated with dirt and metal particles, it can become too thick or too thin, lose its protective properties, and cause unwanted friction. To avoid engine failure and deteriorating vehicle performance, the oil must therefore be changed before it loses its protective properties. At the same time, changing the lubricant too early wastes already depleting resources and carries unwanted environmental and economic costs. It is therefore always helpful to know the quality of the oil in use. With this objective, research work was undertaken to develop a simple capacitance sensor for quantifying the quality of oil in use. One established parameter for quantifying oil degradation is viscosity (per the standard testing procedure DIN 51562-1). In this work, an alternative method is proposed that analyzes the change in the capacitance of the oil to quantify its quality, and it is compared to the conventional standard method. The experimental results reported in this paper show consistent trends between the two. Engine oil of grade SAE 15W40, used for light-duty vehicles, vans and passenger cars, was used for the experiments. The suggested method can form a basis for further research toward a cost-effective indicator of when engine oil should be changed.
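    The physical basis of such a sensor is that contamination and oxidation raise the oil's dielectric constant, and for fixed electrode geometry the capacitance scales with it. A minimal sketch of that relation for a parallel-plate cell, assuming ideal geometry; the 10% change threshold and the permittivity values are illustrative assumptions, not figures from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def relative_permittivity(cap_f, area_m2, gap_m):
    """Infer the oil's dielectric constant from a parallel-plate capacitance:
    C = eps0 * eps_r * A / d  =>  eps_r = C * d / (eps0 * A)."""
    return cap_f * gap_m / (EPS0 * area_m2)

def oil_degraded(cap_fresh, cap_used, threshold=1.10):
    """Flag the oil when its capacitance (hence permittivity, for the same
    cell geometry) has risen by more than the threshold ratio.
    The 10% threshold is an illustrative assumption."""
    return cap_used / cap_fresh > threshold

# Example: 1 cm^2 plates, 1 mm gap; fresh oil eps_r ~ 2.2
area, gap = 1e-4, 1e-3
c_fresh = EPS0 * 2.2 * area / gap
c_used = EPS0 * 2.6 * area / gap  # contamination raised eps_r
print(relative_permittivity(c_used, area, gap))  # ~2.6
print(oil_degraded(c_fresh, c_used))             # True
```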

  3. An improved procedure for integrated behavioral z-scoring illustrated with modified Hole Board behavior of male inbred laboratory mice.

    PubMed

    Labots, M Maaike; Laarakker, M C Marijke; Schetters, D Dustin; Arndt, S S Saskia; van Lith, H A Hein

    2018-01-01

    Guilloux et al. introduced integrated behavioral z-scoring, a method for behavioral phenotyping of mice. Using this method, multiple ethological variables can be combined to give an overall description of a certain behavioral dimension or motivational system. However, a problem occurs when the control group used for the calculation has a standard deviation of zero, or when no control group is present to act as a reference group. To solve these problems, an improved procedure is suggested: taking the pooled data as reference. For this purpose a behavioral study with male mice from three inbred strains was carried out. The integrated behavioral z-scoring methodology was applied with five different reference group options, and the outcomes were compared with regard to statistical significance and practical importance. Significant effects and effect sizes were influenced by the choice of the reference group. In some cases it was impossible to use a certain population and condition, because one or more of the behavioral variables in question had a standard deviation of zero. Based on the improved method, male mice from the three inbred strains differed in activity and anxiety. Taking the method described by Guilloux et al. as a basis, the present procedure improves generalizability to all types of experimental designs in animal behavioral research. To solve the aforementioned problems and to avoid the appearance of data manipulation, the pooled data (combining the data from all experimental groups in a study) is recommended as the reference option. Copyright © 2017 Elsevier B.V. All rights reserved.
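    The pooled-reference idea can be sketched in a few lines: each observation is z-scored against the mean and SD of all groups combined, so a zero-variance control group no longer breaks the calculation. The strain names and values below are illustrative, not the study's data:

```python
import statistics

def pooled_z_scores(groups):
    """Z-score every observation against the pooled mean and SD of all
    experimental groups combined, as the improved procedure recommends.

    groups: dict mapping group name -> list of values for one variable.
    Returns dict mapping group name -> list of z-scores.
    """
    pooled = [v for vals in groups.values() for v in vals]
    mu = statistics.mean(pooled)
    sd = statistics.stdev(pooled)  # nonzero as long as the pooled data vary
    return {g: [(v - mu) / sd for v in vals] for g, vals in groups.items()}

# A control group with zero variance breaks control-referenced z-scoring,
# but pooling across strains still yields a usable reference:
data = {"strain_A": [10.0, 10.0, 10.0],   # SD = 0 within this group
        "strain_B": [12.0, 13.0, 14.0],
        "strain_C": [8.0, 9.0, 7.0]}
z = pooled_z_scores(data)
print(round(z["strain_A"][0], 3))
```

    Per-variable z-scores computed this way would then be averaged into the integrated behavioral score for a dimension such as activity or anxiety.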

  4. A concept for improved fire-safety through coated fillers

    NASA Technical Reports Server (NTRS)

    Ramohalli, K.

    1977-01-01

    A possible method is examined for obtaining a high value of thermal conductivity before ignition and a low value after ignition in standard composite materials. The idea is to coat fiberglass, alumina trihydrate, and similar fillers with specially selected chemicals prior to using polymer resins. The amount of the coat constitutes typically less than 5% of the material's total weight. The experimental results obtained are consistent with the basic concept.

  5. Plenoptic projection fluorescence tomography.

    PubMed

    Iglesias, Ignacio; Ripoll, Jorge

    2014-09-22

    A new method to obtain the three-dimensional localization of fluorochrome distributions in micrometric samples is presented. It uses a microlens array coupled to the image port of a standard microscope to obtain tomographic data by a filtered back-projection algorithm. Scanning of the microlens array is proposed to obtain a dense data set for reconstruction. Simulation and experimental results are shown and the implications of this approach in fast 3D imaging are discussed.

  6. Standardization of experimental parameters for LLLT studies

    NASA Astrophysics Data System (ADS)

    Magrini, Taciana D.; Santos, Arnaldo R., Jr.; da Silva Martinho, Herculano

    2012-03-01

    The aim of this work was to build and characterize a setup for irradiating cultured cells with laser light in which the light intensity is homogeneous, and to establish a method for calculating exactly how much light is delivered during irradiation. The characterization was done by evaluating the intensity distributions and by evaluating the viability of in vitro cells irradiated with different configurations of the apparatus.
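    The dose bookkeeping that LLLT standardization calls for is simple once the intensity is uniform: radiant exposure (fluence) is power density times exposure time. A minimal sketch, with illustrative numbers rather than the study's parameters:

```python
def fluence_j_per_cm2(power_mw, beam_area_cm2, exposure_s):
    """Radiant exposure delivered to the culture: (W / cm^2) * s = J / cm^2.

    Assumes the intensity is uniform over the irradiated area, which is
    exactly the condition the irradiation setup is designed to approximate.
    """
    irradiance_w_cm2 = (power_mw / 1000.0) / beam_area_cm2
    return irradiance_w_cm2 * exposure_s

# e.g. 30 mW spread uniformly over a 9.6 cm^2 culture well for 60 s
print(fluence_j_per_cm2(30.0, 9.6, 60.0))  # 0.1875 J/cm^2
```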

  7. Instructor/Operator Display Evaluation Methods

    DTIC Science & Technology

    1981-03-01

    [Contents fragment:] IV. Experimental Design ... 16; V. Procedure ... 18; VI. Results ... "...and thus the length of time during which the information must be retained." "...standard deviation (S.D.) of 3.0 years. Total flight hours averaged 2248 (S.D. = 858). Current equipment for 16 pilots was the C-130 in which they..."

  8. The Influence of Length of School Day on Grade 4 and Grade 5 Language Arts and Mathematics Performance in the State of New Jersey

    ERIC Educational Resources Information Center

    Plevier, Meghan M.

    2016-01-01

    The purpose of this relational, non-experimental, explanatory, cross sectional study with quantitative methods was to explain the influence of length of school day, if any, on Grade 4 and Grade 5 student achievement in Language Arts and Mathematics as measured by the high-stakes New Jersey standardized test entitled New Jersey Assessment of Skills…

  9. Refractivity variations and propagation at Ultra High Frequency

    NASA Astrophysics Data System (ADS)

    Alam, I.; Najam-Ul-Islam, M.; Mujahid, U.; Shah, S. A. A.; Ul Haq, Rizwan

    The present framework addresses the refractivity variations that affect radio-wave propagation at different frequencies, ranges and environments. Many researchers have proposed methodologies for such effects; one method is to use meteorological parameters to investigate the effects of refractivity variations on propagation. These variations are region specific, and we have selected a region one kilometer high over the English Channel. We constructed modified refractivity profiles based on the local meteorological data. We recorded more than 48 million received-signal-strength values from a 50 km communication link operating at 2015 MHz in the Ultra High Frequency band, giving the path loss between the transmitting and receiving stations of the experimental setup. We used the parabolic wave equation method to simulate hourly values of signal strength and compared the simulated loss to the experimental loss. The analysis computes refractivity distributions for the standard (STD) and ITU (International Telecommunication Union) refractivity profiles for various evaporation ducts. It is found that a standard refractivity profile performs better than the ITU refractivity profiles for the region at 2015 MHz. It is further inferred from the results that a 10 m evaporation duct height is dominant among all the evaporation duct heights considered in the research.
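    The evaporation-duct profiles fed into a parabolic-equation solver are commonly built from the log-linear (Paulus-Jeske form) model of modified refractivity; a sketch under that assumption follows. The roughness length and surface M-value are generic textbook assumptions, not values from this study:

```python
import math

Z0 = 1.5e-4  # aerodynamic roughness length over sea, m (common assumption)

def modified_refractivity(z_m, duct_height_m, m0=330.0):
    """Log-linear evaporation-duct profile of modified refractivity,
        M(z) = M0 + 0.125*z - 0.125*d*ln((z + z0)/z0),
    where d is the evaporation duct height. The minimum of M falls at
    z ~ d, and well above the duct the profile recovers the standard
    0.125 M-units/m gradient.
    """
    return (m0 + 0.125 * z_m
            - 0.125 * duct_height_m * math.log((z_m + Z0) / Z0))

# For a 10 m duct, M decreases below the duct height and increases above it:
d = 10.0
print(modified_refractivity(1.0, d) < modified_refractivity(0.0, d))    # True
print(modified_refractivity(30.0, d) > modified_refractivity(10.0, d))  # True
```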

  10. Osteosarcoma Overview.

    PubMed

    Lindsey, Brock A; Markel, Justin E; Kleinerman, Eugenie S

    2017-06-01

    Osteosarcoma (OS) is the most common primary malignancy of bone and patients with metastatic disease or recurrences continue to have very poor outcomes. Unfortunately, little prognostic improvement has been generated from the last 20 years of research and a new perspective is warranted. OS is extremely heterogeneous in both its origins and manifestations. Although multiple associations have been made between the development of osteosarcoma and race, gender, age, various genomic alterations, and exposure situations among others, the etiology remains unclear and controversial. Noninvasive diagnostic methods include serum markers like alkaline phosphatase and a growing variety of imaging techniques including X-ray, computed tomography, magnetic resonance imaging, and positron emission as well as combinations thereof. Still, biopsy and microscopic examination are required to confirm the diagnosis and carry additional prognostic implications such as subtype classification and histological response to neoadjuvant chemotherapy. The current standard of care combines surgical and chemotherapeutic techniques, with a multitude of experimental biologics and small molecules currently in development and some in clinical trial phases. In this review, in addition to summarizing the current understanding of OS etiology, diagnostic methods, and the current standard of care, our group describes various experimental therapeutics and provides evidence to encourage a potential paradigm shift toward the introduction of immunomodulation, which may offer a more comprehensive approach to battling cancer pleomorphism.

  11. Generating partially correlated noise—A comparison of methods

    PubMed Central

    Hartmann, William M.; Cho, Yun Jin

    2011-01-01

    There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization∕lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. PMID:21786899
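    Of the three methods compared, the two-generator method is the simplest to sketch: one channel is taken directly from the first generator, and the second mixes the two generators so the expected cross correlation equals the target. A minimal illustration (parameter values are illustrative):

```python
import numpy as np

def two_generator(n, rho, rng=None):
    """Two-generator method for partially correlated noise:
        y1 = x1
        y2 = rho*x1 + sqrt(1 - rho**2)*x2
    with x1, x2 independent Gaussian noises. The realized correlation
    varies statistically around the target rho, which is exactly the
    variability the numerical experiments quantify.
    """
    rng = np.random.default_rng(rng)
    x1 = rng.standard_normal(n)
    x2 = rng.standard_normal(n)
    y1 = x1
    y2 = rho * x1 + np.sqrt(1.0 - rho**2) * x2
    return y1, y2

y1, y2 = two_generator(100_000, 0.6, rng=0)
print(round(float(np.corrcoef(y1, y2)[0, 1]), 2))  # close to the target 0.6
```

    Repeating this with small n (few degrees of freedom) and tabulating the spread of the sample correlation reproduces the kind of comparison the paper makes across the three methods.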

  12. Statistical detection of EEG synchrony using empirical bayesian inference.

    PubMed

    Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven

    2015-01-01

    There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, the high dimensionality of PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power because they fail to exploit the complex dependence structure between hypotheses that vary in the spectral, temporal and spatial dimensions. Previously, we showed that hierarchical FDR and optimal discovery procedures could be effectively applied to PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) to PLV synchrony analysis, computing FDR as a posterior probability that an observed statistic belongs to a null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach to PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR, and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both simulation and experimental analyses. Our simulation results showed that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
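    The core of Efron's two-groups idea can be sketched generically: the local FDR at a test statistic z is the posterior null probability pi0*f0(z)/f(z), with f0 a theoretical null and f estimated from all observed statistics. The sketch below is a generic illustration with a kernel density estimate and pi0 conservatively set to 1, not the paper's exact PLV pipeline:

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def local_fdr(z, pi0=1.0):
    """Empirical-Bayes local FDR in Efron's two-groups form:
        locfdr(z) = pi0 * f0(z) / f(z),
    with f0 the theoretical N(0,1) null density and f the mixture density
    estimated from all z-values by a Gaussian kernel density estimate.
    """
    f = gaussian_kde(z)(z)          # estimated mixture density at each z
    f0 = norm.pdf(z)                # theoretical null density
    return np.clip(pi0 * f0 / f, 0.0, 1.0)

rng = np.random.default_rng(1)
z = np.concatenate([rng.standard_normal(900),     # null statistics
                    rng.normal(3.0, 1.0, 100)])   # planted synchrony effects
fdr = local_fdr(z)
# The planted large-z effects receive much smaller local FDR than the nulls:
print(fdr[z > 3.5].mean() < fdr[np.abs(z) < 1].mean())  # True
```

    A full analysis would additionally estimate pi0 and possibly an empirical null, which is where the power gains over plain FDR come from.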

  13. Overview of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Heeg, Jennifer; Chwalowski, Pawel; Florance, Jennifer P.; Wieseman, Carol D.; Schuster, David M.; Perry, Raleigh B.

    2013-01-01

    The Aeroelastic Prediction Workshop brought together an international community of computational fluid dynamicists as a step in defining the state of the art in computational aeroelasticity. This workshop's technical focus was prediction of unsteady pressure distributions resulting from forced motion, benchmarking the results first using unforced system data. The most challenging aspects of the physics were identified as capturing oscillatory shock behavior, dynamic shock-induced separated flow and tunnel wall boundary layer influences. The majority of the participants used unsteady Reynolds-averaged Navier-Stokes codes. These codes were exercised at transonic Mach numbers for three configurations and comparisons were made with existing experimental data. Substantial variations were observed among the computational solutions as well as differences relative to the experimental data. Contributing issues to these differences include wall effects and wall modeling, non-standardized convergence criteria, inclusion of static aeroelastic deflection, methodology for oscillatory solutions, and post-processing methods. Contributing issues pertaining principally to the experimental data sets include the position of the model relative to the tunnel wall, splitter plate size, wind tunnel expansion slot configuration, spacing and location of pressure instrumentation, and data processing methods.

  14. Menthol-induced bleaching rapidly and effectively provides experimental aposymbiotic sea anemones (Aiptasia sp.) for symbiosis investigations.

    PubMed

    Matthews, Jennifer L; Sproles, Ashley E; Oakley, Clinton A; Grossman, Arthur R; Weis, Virginia M; Davy, Simon K

    2016-02-01

    Experimental manipulation of the symbiosis between cnidarians and photosynthetic dinoflagellates (Symbiodinium spp.) is crucial to advancing the understanding of the cellular mechanisms involved in host-symbiont interactions, and overall coral reef ecology. The anemone Aiptasia sp. is a model for cnidarian-dinoflagellate symbiosis, and notably it can be rendered aposymbiotic (i.e. dinoflagellate-free) and re-infected with a range of Symbiodinium types. Various methods exist for generating aposymbiotic hosts; however, they can be hugely time consuming and not wholly effective. Here, we optimise a method using menthol for production of aposymbiotic Aiptasia. The menthol treatment produced aposymbiotic hosts within just 4 weeks (97-100% symbiont loss), and the condition was maintained long after treatment when anemones were held under a standard light:dark cycle. The ability of Aiptasia to form a stable symbiosis appeared to be unaffected by menthol exposure, as demonstrated by successful re-establishment of the symbiosis when anemones were experimentally re-infected. Furthermore, there was no significant impact on photosynthetic or respiratory performance of re-infected anemones. © 2016. Published by The Company of Biologists Ltd.

  15. Polarization-polarization correlation measurement --- Experimental test of the PPCO methods

    NASA Astrophysics Data System (ADS)

    Droste, Ch.; Starosta, K.; Wierzchucka, A.; Morek, T.; Rohoziński, S. G.; Srebrny, J.; Wesolowski, E.; Bergstrem, M.; Herskind, B.

    1998-04-01

    A significant fraction of modern multidetector arrays used for "in-beam" gamma-ray spectroscopy consist of detectors that are sensitive to the linear polarization of gamma quanta. This provides the opportunity to carry out correlation measurements between the gamma rays registered in polarimeters to obtain information on the spins and parities of excited nuclear states. The aim of the present work was to study the capabilities of the polarization-polarization correlation method (the PPCO method). The correlation between the linear polarization of one gamma quantum and the polarization of a second quantum emitted in a cascade from an oriented nucleus (oriented by a heavy ion reaction) was studied in detail. The appropriate formulae and methods of analysis are presented. The experimental test of the method was performed using the EUROGAM II array, whose CLOVER detectors were used as polarimeters. The ^164Yb nucleus was produced via the ^138Ba(^30Si, 4n) reaction. It was found that the PPCO method, together with the standard DCO analysis and the polarization-direction correlation (PDCO) method, can be helpful for spin, parity and multipolarity assignments. The results suggest that the PPCO method can be applied to modern spectrometers in which a large number of detectors (e.g. CLOVER) are sensitive to the polarization of gamma rays.

  16. The impact of manipulating personal standards on eating attitudes and behaviour

    PubMed Central

    Shafran, Roz; Lee, Michelle; Payne, Elizabeth; Fairburn, Christopher G.

    2006-01-01

    The relationship between perfectionism and eating disorders is well established and is of theoretical interest. This study used an experimental design to test the hypothesis that manipulating personal standards, a central feature of perfectionism, would influence eating attitudes and behaviour. Forty-one healthy women were randomly assigned either to a high personal standards condition (n=18) or to a low personal standards condition for 24 h (n=23). Measures of personal standards, perfectionism, and eating attitudes and behaviour were taken before and after the experimental manipulation. The manipulation was successful. After the manipulation, participants in the high personal standards condition ate fewer high calorie foods, made more attempts to restrict the overall amount of food eaten, and had significantly more regret after eating than those in the low personal standards condition. Other variables remained unchanged. It is concluded that experimental analyses can be of value in elucidating causal connections between perfectionism and eating attitudes and behaviour. PMID:16257388

  17. Quantification of integrated HIV DNA by repetitive-sampling Alu-HIV PCR on the basis of poisson statistics.

    PubMed

    De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos

    2014-06-01

    Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
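    The Poisson step described above follows from the single-hit assumption: a replicate is negative only if it received zero integration events, so P(negative) = exp(-λ) and λ can be read off from the fraction of negative reactions. A minimal sketch (the 42-replicate count comes from the abstract; the example counts are illustrative):

```python
import math

def copies_per_reaction(n_negative, n_replicates=42):
    """Poisson estimate of the mean number of integration events per
    reaction from the count of negative replicates:
        P(negative) = exp(-lam)  =>  lam = -ln(n_negative / n_replicates).
    This gives absolute quantification without a standard dilution curve.
    """
    if n_negative == 0:
        raise ValueError("all replicates positive: outside the Poisson range")
    return -math.log(n_negative / n_replicates)

# e.g. 21 of 42 replicates negative -> lam = ln 2 events per reaction
print(round(copies_per_reaction(21), 3))  # 0.693
```

    Confidence intervals follow from the binomial uncertainty on the negative count, which is the basis the authors use to set the minimal number of technical replicates.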

  18. Determination of B-complex vitamins in pharmaceutical formulations by surface-enhanced Raman spectroscopy.

    PubMed

    Junior, Benedito Roberto Alvarenga; Soares, Frederico Luis Felipe; Ardila, Jorge Armando; Durango, Luis Guillermo Cuadrado; Forim, Moacir Rossi; Carneiro, Renato Lajarim

    2018-01-05

    The aim of this work was to quantify B-complex vitamins in pharmaceutical samples by the surface-enhanced Raman spectroscopy (SERS) technique using a gold colloid substrate. Gold nanoparticles were synthesized according to an adapted Turkevich method. Initial assays suggested the orientation of the molecules on the gold nanoparticle surface. A central composite design was performed to obtain the highest SERS signal for nicotinamide and riboflavin; the parameters evaluated in the experimental design were the volume of AuNPs, the concentration of vitamins, and the sodium chloride concentration. The best condition for nicotinamide was NaCl 2.3×10−3 mol L−1 and 700 μL of AuNP colloid, and this same condition proved adequate for quantifying thiamine. The experimental design for riboflavin gave the best condition at NaCl 1.15×10−2 mol L−1 and 2.8 mL of AuNP colloid. It was possible to quantify thiamine and nicotinamide in the presence of other vitamins and excipients in two solid multivitamin formulations using the standard addition procedure. The standard addition curves presented R2 higher than 0.96 for both nicotinamide and thiamine, at orders of magnitude 10−7 and 10−8 mol L−1, respectively. The nicotinamide content of a cosmetic gel sample was also quantified by direct analysis, with R2 of 0.98. Student's t-test showed no significant difference with respect to the HPLC method. Despite the experimental design performed for riboflavin, it could not be quantified in the commercial samples. Copyright © 2017 Elsevier B.V. All rights reserved.
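    The standard addition procedure used here reduces to a linear extrapolation: the signal is plotted against the spiked concentration, and the unknown concentration is the magnitude of the x-intercept of the fitted line. A minimal sketch with synthetic data (the slope and the 2×10−7 M unknown are illustrative, not the paper's values):

```python
import numpy as np

def standard_addition(added_conc, signal):
    """Quantify the unknown by standard addition: fit signal vs. added
    concentration and extrapolate to zero signal. The unknown equals the
    magnitude of the x-intercept, -(-b/m) = b/m for a positive-going line.
    """
    m, b = np.polyfit(added_conc, signal, 1)
    return b / m

# Synthetic linear detector response for an unknown at 2e-7 M:
added = np.array([0.0, 1e-7, 2e-7, 4e-7])      # spiked concentrations, M
signal = 5e9 * (added + 2e-7)                  # illustrative response
print(standard_addition(added, signal))        # ~2e-7
```

    The R2 of this fit is the quantity the abstract reports (>0.96) as a check on linearity.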

  19. A comparative approach for the characterization of a pneumatic piston gauge up to 8 MPa using finite element calculations

    NASA Astrophysics Data System (ADS)

    Dogra, Sugandha; Singh, Jasveer; Lodh, Abhishek; Dilawar Sharma, Nita; Bandyopadhyay, A. K.

    2011-02-01

    This paper reports the behavior of a well-characterized pneumatic piston gauge in the pressure range up to 8 MPa through simulation using the finite element method (FEM). Experimentally, the effective area of this piston gauge has been estimated by cross-floating to obtain A0 and λ. The FEM technique addresses this problem through simulation and optimization with standard commercial software (ANSYS), where the material properties of the piston and cylinder, dimensional measurements, etc. are used as the input parameters. The simulation provides the effective area Ap as a function of pressure in the free deformation mode. From these data, one can estimate Ap versus pressure and thereby A0 and λ. Further, we have carried out a similar theoretical calculation of Ap using the conventional method involving Dadson's as well as the Johnson-Newhall equations. A comparison of these results with the experimental results has been carried out.
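    The reduction from Ap-versus-pressure data to A0 and λ uses the conventional linear model Ap = A0(1 + λp), so a straight-line fit recovers both parameters. A minimal sketch with synthetic cross-float data (the numerical values are illustrative, not the paper's results):

```python
import numpy as np

def fit_pressure_distortion(p_mpa, a_p):
    """Recover the zero-pressure effective area A0 and the pressure
    distortion coefficient lambda from effective-area data, using the
    conventional linear model  A_p = A0 * (1 + lambda * p):
    the intercept is A0 and the slope is A0*lambda.
    """
    slope, a0 = np.polyfit(p_mpa, a_p, 1)
    return a0, slope / a0

# Synthetic data over the gauge's 8 MPa range:
p = np.array([1.0, 2.0, 4.0, 6.0, 8.0])        # MPa
a0_true, lam_true = 4.9e-5, 6.0e-7             # m^2, MPa^-1 (illustrative)
ap = a0_true * (1.0 + lam_true * p)
a0, lam = fit_pressure_distortion(p, ap)
print(round(lam / 1e-7, 1))  # recovers ~6.0 (i.e. 6.0e-7 MPa^-1)
```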

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, John M.; Onar, Omer C.; Chinthavali, Madhu

    Various noncontacting methods of plug-in electric vehicle charging are either under development or now deployed as aftermarket options in the light-duty automotive market. Wireless power transfer (WPT) is now the accepted term for wireless charging and is used synonymously with inductive power transfer and magnetic resonance coupling. WPT technology is in its infancy; standardization is lacking, especially on interoperability, center frequency selection, magnetic fringe field suppression, and the methods employed for power flow regulation. This paper proposes a new analysis concept for power flow in WPT in which the primary provides frequency selection and the tuned secondary, with its resemblance to a power transmission network having reactive power voltage control, is analyzed as a transmission network. The analysis is supported with experimental data taken from Oak Ridge National Laboratory's WPT apparatus. Lastly, this paper also provides experimental evidence for frequency selection, fringe field assessment, and the need for low-latency communications in the feedback path.

  1. FTIR, FT-Raman, FT-NMR and quantum chemical investigations of 3-acetylcoumarin

    NASA Astrophysics Data System (ADS)

    Arjunan, V.; Sakiladevi, S.; Marchewka, M. K.; Mohan, S.

    2013-05-01

    3-Acetylcoumarin (3AC) was synthesised by a Knoevenagel reaction. Conformational analysis using the B3LYP method was carried out to determine the most stable conformation of the compound. FTIR and FT-Raman spectra of 3AC have been recorded in the ranges 4000-400 and 4000-100 cm-1, respectively. 1H and 13C NMR spectra have also been recorded. The complete vibrational assignment and analysis of the fundamental modes of the compound were carried out using the experimental FTIR and FT-Raman data and quantum mechanical studies. The experimental vibrational frequencies were compared with the wavenumbers obtained theoretically from DFT-B3LYP/B3PW91 gradient calculations employing the standard 6-31G**, high-level 6-311++G** and cc-pVTZ basis sets for the optimised geometry of the compound. The frontier molecular orbital energies of the compound were determined by the DFT method.

  2. X-ray and neutron diffraction studies of crystallinity in hydroxyapatite coatings.

    PubMed

    Girardin, E; Millet, P; Lodini, A

    2000-02-01

    To standardize industrial implant production and make comparisons between different experimental results, we have to be able to quantify the crystallinity of hydroxyapatite. Methods of measuring crystallinity ratio were developed for various HA samples before and after plasma spraying. The first series of methods uses X-ray diffraction. The advantage of these methods is that X-ray diffraction equipment is used widely in science and industry. In the second series, a neutron diffraction method is developed and the results recorded are similar to those obtained by the modified X-ray diffraction methods. The advantage of neutron diffraction is the ability to obtain measurements deep inside a component. It is a nondestructive method, owing to the very low absorption of neutrons in most materials. Copyright 2000 John Wiley & Sons, Inc.

  3. Shear Resistance between Concrete-Concrete Surfaces

    NASA Astrophysics Data System (ADS)

    Kovačovic, Marek

    2013-12-01

    The application of precast beams and cast-in-situ structural members cast at different times has been typical of bridges and buildings for many years. A load-bearing frame consists of a set of prestressed precast beams supported by columns and diaphragms joined with an additionally cast slab deck. This article is focused on the theoretical and experimental analyses of the shear resistance at an interface. The first part of the paper deals with the state-of-the-art knowledge of the composite behaviour of concrete-concrete structures and a comparison of the numerical methods introduced in the relevant standards. In the experimental part, a set of specimens with different interface treatments was tested until failure in order to predict the composite behaviour of coupled beams. The experimental results were compared to a numerical analysis performed by means of FEM-based nonlinear software.

  4. Numerical and Experimental Case Study of Blasting Works Effect

    NASA Astrophysics Data System (ADS)

    Papán, Daniel; Valašková, Veronika; Drusa, Marian

    2016-10-01

    This article presents a theoretical and experimental case study of dynamic monitoring of the geological environment above a highway tunnel under construction. The monitored structure is a critical water supply pipeline that crosses the tunnel, made of steel tubes 800 mm in diameter. The basic dynamic parameters were monitored during blasting works, compared with FEM (Finite Element Method) calculations, and checked against the Slovak standard limits. A calibrated FEM model based on the experimental measurement data was created and used to obtain more realistic results in further predictions and in time and space extrapolations. This case study was commissioned by the general contractor and by the owner of the water pipeline, and it served as a public-safety evaluation of the risks during tunnel construction.

  5. Vibrational spectroscopic studies and DFT calculations of 4-aminoantipyrine

    NASA Astrophysics Data System (ADS)

    Swaminathan, J.; Ramalingam, M.; Sethuraman, V.; Sundaraganesan, N.; Sebastian, S.

    2009-08-01

    The pyrazole derivative 4-aminoantipyrine (4AAP), used as an intermediate in the synthesis of pharmaceuticals, especially antipyretic and analgesic drugs, has been analyzed experimentally and theoretically for its vibrational frequencies. The FTIR and FT-Raman spectra of the title compound have been compared with frequencies computed using the standard 6-311g(d,p) and cc-pVDZ basis sets at the DFT (B3LYP) level of theory. The harmonic vibrational frequencies at B3LYP/cc-pVDZ, after an appropriate scaling method, agree with the experimental observations more closely than the B3LYP/6-311g(d,p) results. Theoretical FT-IR and FT-Raman spectrograms of 4AAP have also been constructed and compared with the experimental spectra. Additionally, thermodynamic data have been calculated and discussed.

  6. A method for direct, semi-quantitative analysis of gas phase samples using gas chromatography-inductively coupled plasma-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Kimberly E; Gerdes, Kirk

    2013-07-01

A new and complete GC–ICP-MS method is described for direct analysis of trace metals in a gas phase process stream. The proposed method is derived from standard analytical procedures developed for ICP-MS, which are regularly exercised in standard ICP-MS laboratories. To implement the method, a series of empirical factors was generated to calibrate detector response with respect to a known concentration of an internal standard analyte. Calibrated responses are ultimately used to determine the concentration of metal analytes in a gas stream using a semi-quantitative algorithm. The method was verified using a traditional gas injection from a GC sampling valve and a standard gas mixture containing either a 1 ppm Xe + Kr mix with helium balance or 100 ppm Xe with helium balance. Data collected for Xe and Kr gas analytes revealed that agreement of 6–20% with the actual concentration can be expected for various experimental conditions. To demonstrate the method using a relevant “unknown” gas mixture, experiments were performed for continuous 4 and 7 hour periods using a Hg-containing sample gas that was co-introduced into the GC sample loop with the xenon gas standard. System performance and detector response to the dilute concentration of the internal standard were pre-determined, which allowed semi-quantitative evaluation of the analyte. The calculated analyte concentrations varied during the course of the 4 hour experiment; in particular, during the first hour of the analysis the actual Hg concentration was underpredicted by up to 72%. Calculated concentrations improved to within 30–60% for data collected after the first hour of the experiment. Similar results were seen during the 7 hour test, with the deviation from the actual concentration being 11–81% during the first hour and then decreasing for the remaining period. The method detection limit (MDL) was determined for mercury by injecting the sample gas into the system following a period of equilibration; the MDL for Hg was calculated as 6.8 μg·m⁻³. This work describes the first complete GC–ICP-MS method to directly analyze gas phase samples, and detailed sample calculations and comparisons to conventional ICP-MS methods are provided.
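The internal-standard calibration at the heart of such a semi-quantitative algorithm can be sketched as follows. All counts, concentrations, and the relative sensitivity factor are assumed illustrative values, not data from the paper:

```python
xe_conc = 1.0           # ppm, known internal-standard (Xe) concentration
xe_counts = 5.0e5       # hypothetical integrated detector counts for Xe
hg_counts = 2.1e4       # hypothetical integrated detector counts for Hg
rel_sensitivity = 0.8   # assumed Hg/Xe relative sensitivity factor

# detector response (counts per ppm) calibrated on the internal standard,
# then applied to the analyte via the relative sensitivity factor
response = xe_counts / xe_conc
hg_conc = hg_counts / (response * rel_sensitivity)
print(f"estimated Hg concentration: {hg_conc:.4f} ppm")
```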

  7. Synthesis and application of surface-imprinted activated carbon sorbent for solid-phase extraction and determination of copper (II)

    NASA Astrophysics Data System (ADS)

    Li, Zhenhua; Li, Jingwen; Wang, Yanbin; Wei, Yajun

    2014-01-01

A new Cu(II)-imprinted amino-functionalized activated carbon sorbent was prepared by a surface imprinting technique for selective solid-phase extraction (SPE) of Cu(II) prior to its determination by inductively coupled plasma atomic emission spectrometry (ICP-AES). Experimental conditions for effective adsorption of Cu(II) were optimized in detail with respect to different experimental parameters using static and dynamic procedures. Compared with the non-imprinted sorbent, the ion-imprinted sorbent had higher selectivity and adsorption capacity for Cu(II). The maximum static adsorption capacities of the ion-imprinted and non-imprinted sorbents for Cu(II) were 26.71 and 6.86 mg g⁻¹, respectively. The relative selectivity factor values (αr) of Cu(II)/Zn(II), Cu(II)/Ni(II), Cu(II)/Co(II) and Cu(II)/Pb(II) were 166.16, 50.77, 72.26 and 175.77, respectively, all greater than 1. Complete elution of the adsorbed Cu(II) from the Cu(II)-imprinted sorbent was achieved using 2 mL of 0.1 mol L⁻¹ EDTA solution. The relative standard deviation of the method was 2.4% for eleven replicate determinations. The method was validated by the analysis of two certified reference materials (GBW 08301, GBW 08303); the results obtained are in good agreement with the certified values. The developed method was also successfully applied to the determination of trace copper in natural water samples with satisfactory results.
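Relative selectivity factors of this kind are typically derived from batch distribution coefficients. A minimal sketch of that calculation, with hypothetical concentrations and sorbent masses (not the paper's data):

```python
# Distribution coefficient Kd, selectivity coefficient k = Kd(Cu)/Kd(M),
# and relative selectivity factor alpha_r = k(imprinted)/k(non-imprinted).
def kd(c_init, c_final, volume_ml, mass_g):
    """Distribution coefficient in mL/g from a batch adsorption test."""
    return (c_init - c_final) * volume_ml / (c_final * mass_g)

# hypothetical batch data: initial/final concentrations (mg/L), 50 mL, 0.1 g
kd_cu_iip = kd(10.0, 1.0, 50.0, 0.1)   # imprinted sorbent, Cu(II)
kd_zn_iip = kd(10.0, 9.0, 50.0, 0.1)   # imprinted sorbent, Zn(II)
kd_cu_nip = kd(10.0, 5.0, 50.0, 0.1)   # non-imprinted sorbent, Cu(II)
kd_zn_nip = kd(10.0, 8.0, 50.0, 0.1)   # non-imprinted sorbent, Zn(II)

k_iip = kd_cu_iip / kd_zn_iip
k_nip = kd_cu_nip / kd_zn_nip
alpha_r = k_iip / k_nip
print(f"alpha_r(Cu/Zn) = {alpha_r:.2f}")
```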

  8. Synthesis and application of surface-imprinted activated carbon sorbent for solid-phase extraction and determination of copper (II).

    PubMed

    Li, Zhenhua; Li, Jingwen; Wang, Yanbin; Wei, Yajun

    2014-01-03

A new Cu(II)-imprinted amino-functionalized activated carbon sorbent was prepared by a surface imprinting technique for selective solid-phase extraction (SPE) of Cu(II) prior to its determination by inductively coupled plasma atomic emission spectrometry (ICP-AES). Experimental conditions for effective adsorption of Cu(II) were optimized in detail with respect to different experimental parameters using static and dynamic procedures. Compared with the non-imprinted sorbent, the ion-imprinted sorbent had higher selectivity and adsorption capacity for Cu(II). The maximum static adsorption capacities of the ion-imprinted and non-imprinted sorbents for Cu(II) were 26.71 and 6.86 mg g⁻¹, respectively. The relative selectivity factor values (αr) of Cu(II)/Zn(II), Cu(II)/Ni(II), Cu(II)/Co(II) and Cu(II)/Pb(II) were 166.16, 50.77, 72.26 and 175.77, respectively, all greater than 1. Complete elution of the adsorbed Cu(II) from the Cu(II)-imprinted sorbent was achieved using 2 mL of 0.1 mol L⁻¹ EDTA solution. The relative standard deviation of the method was 2.4% for eleven replicate determinations. The method was validated by the analysis of two certified reference materials (GBW 08301, GBW 08303); the results obtained are in good agreement with the certified values. The developed method was also successfully applied to the determination of trace copper in natural water samples with satisfactory results. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. The effect of kangaroo mother care on mental health of mothers with low birth weight infants

    PubMed Central

    Badiee, Zohreh; Faramarzi, Salar; MiriZadeh, Tahereh

    2014-01-01

Background: The mothers of premature infants are at risk of psychological stress because of separation from their infants. One method of influencing maternal mental health in the postpartum period is kangaroo mother care (KMC). This study was conducted to evaluate the effect of KMC of low birth weight infants on their mothers' mental health. Materials and Methods: The study was conducted in the Department of Pediatrics of Isfahan University of Medical Sciences, Isfahan, Iran. Premature infants were randomly allocated into two groups. The control group received standard care in the incubator; in the experimental group, care included three 60-min KMC sessions daily for 1 week. Mental health scores of the mothers were evaluated using the 28-item General Health Questionnaire. Statistical analysis was performed by analysis of covariance using SPSS. Results: In total, the scores of 50 infant-mother pairs were analyzed (25 in the KMC group and 25 in the standard care group). Covariance analysis showed positive effects of KMC on maternal mental health scores: there were statistically significant differences between the mean scores of the experimental and control groups in the posttest period (P < 0.001). Conclusion: KMC for low birth weight infants is a safe way to improve maternal mental health, and is suggested as a useful method for improving the mental health of mothers. PMID:25371871

  10. Raw data normalization for a multi source inverse geometry CT system

    PubMed Central

    Baek, Jongduk; De Man, Bruno; Harrison, Daniel; Pelc, Norbert J.

    2015-01-01

A multi-source inverse-geometry CT (MS-IGCT) system consists of a small 2D detector array and multiple x-ray sources. During data acquisition, each source is activated sequentially and may exhibit random intensity fluctuations relative to its nominal intensity. While a conventional 3rd-generation CT system uses a reference channel to monitor source intensity fluctuation, each MS-IGCT source illuminates only a small portion of the entire field of view (FOV). It is therefore difficult for all sources to illuminate a reference channel, and projection data computed by standard normalization using flat-field data for each source contain errors that can cause significant artifacts. In this work, we present a raw data normalization algorithm to reduce the image artifacts caused by source intensity fluctuation. The proposed method was tested using computer simulations with a uniform water phantom and a Shepp-Logan phantom, and experimental data of an ice-filled PMMA phantom and a rabbit. The effects on image resolution and robustness to noise were tested using the MTF and the standard deviation of the reconstructed noise image. With intensity fluctuation and no correction, images reconstructed from simulation and experimental data show high-frequency and ring artifacts, which are removed effectively by the proposed method. The proposed method does not degrade image resolution and is very robust to the presence of noise. PMID:25837090
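The artifact mechanism described above can be illustrated with a toy calculation: under the standard normalization p = -ln(I/I0), an unmonitored source fluctuation becomes an object-independent additive error in the projection. A minimal sketch with assumed numbers:

```python
import math
import random

random.seed(0)
i0_flat = 1.0e6              # flat-field (nominal) source intensity
true_attenuation = 2.0       # line integral through the object
for _ in range(3):
    fluct = 1.0 + random.uniform(-0.02, 0.02)   # unmonitored ±2% fluctuation
    detected = i0_flat * fluct * math.exp(-true_attenuation)
    p = -math.log(detected / i0_flat)           # standard flat-field normalization
    # the error equals -ln(fluct), regardless of the object
    print(f"projection = {p:.4f}, error = {p - true_attenuation:+.4f}")
```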

  11. An optimized knife-edge method for on-orbit MTF estimation of optical sensors using powell parameter fitting

    NASA Astrophysics Data System (ADS)

    Han, Lu; Gao, Kun; Gong, Chen; Zhu, Zhenyu; Guo, Yue

    2017-08-01

On-orbit Modulation Transfer Function (MTF) is an important indicator for evaluating the performance of optical remote sensors on a satellite. There are many methods to estimate MTF, such as the pinhole method and the slit method. Among them, the knife-edge method is efficient, easy to use, and recommended in the ISO 12233 standard for acquiring the whole-frequency MTF curve. However, the accuracy of the algorithm is significantly affected by the Edge Spread Function (ESF) fitting accuracy, which limits its range of application. In this paper, an optimized knife-edge method using the Powell algorithm is proposed to improve the ESF fitting precision. The Fermi function is the most popular ESF fitting model, yet it is sensitive to the initial values of its parameters. Owing to its simplicity and fast convergence, the Powell algorithm is applied to fit the parameters accurately and adaptively, with insensitivity to the initial parameters. Numerical simulation results reveal the accuracy and robustness of the optimized algorithm under different SNRs, edge directions, and tilt angles. Experimental results using images from the camera of the ZY-3 satellite show that this method is more accurate than the standard ISO 12233 knife-edge method in MTF estimation.
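The ESF fitting step can be sketched with the Fermi model. The fit below uses a simple coordinate-wise line search as a stand-in for the full Powell direction-set algorithm, applied to synthetic noiseless data; the parameters and sampling grid are illustrative:

```python
import math

def fermi_esf(x, a, b, c, d):
    """Fermi-function model of the edge spread function."""
    return a / (1.0 + math.exp((b - x) / c)) + d

# synthetic noiseless ESF samples from known parameters (1.0, 0.0, 0.8, 0.1)
xs = [i * 0.25 for i in range(-20, 21)]
ys = [fermi_esf(x, 1.0, 0.0, 0.8, 0.1) for x in xs]

def sse(p):
    return sum((fermi_esf(x, *p) - y) ** 2 for x, y in zip(xs, ys))

p = [0.8, 0.5, 1.0, 0.0]          # deliberately poor initial guess
for _ in range(60):               # cycles of coordinate-wise line search
    for i in range(4):
        step = 0.1
        while step > 1e-6:
            for delta in (step, -step):
                q = p[:]
                q[i] += delta
                if sse(q) < sse(p):
                    p = q
                    break
            else:
                step *= 0.5       # no improvement in either direction
print("fitted parameters:", [round(v, 3) for v in p])
```

With noisy data, the sensitivity to the initial guess that motivates the Powell approach becomes much more pronounced.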

  12. Noise producing toys and the efficacy of product standard criteria to protect health and education outcomes.

    PubMed

    McLaren, Stuart J; Page, Wyatt H; Parker, Lou; Rushton, Martin

    2013-12-19

An evaluation of 28 commercially available toys imported into New Zealand revealed that 21% of these toys do not meet the acoustic criteria in the ISO standard, ISO 8124-1:2009 Safety of Toys, adopted by Australia and New Zealand as AS/NZS ISO 8124.1:2010. While overall the 2010 standard provided a greater level of protection than the earlier 2002 standard, there was one high-risk toy category where the 2002 standard provided greater protection. A secondary set of toys from the personal collections of children known to display atypical methods of play with toys, such as those with autism spectrum disorders (ASD), was part of the evaluation. Only one of these toys cleanly passed the 2010 standard, with the remainder failing or showing a marginal pass. As no tolerance level is stated in the standards to account for interpretation of data and experimental error, a value of +2 dB was used. The findings of the study indicate that the current standard is inadequate in providing protection against excessive noise exposure. Amendments to the criteria have been recommended that apply to the recently adopted 2013 standard; these include the integration of the new approaches published in the recently amended European standard (EN 71) on safety of toys.
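Applying a +2 dB tolerance as described above amounts to a three-way pass/marginal/fail classification, which can be sketched as follows (the limit and readings are illustrative, not the study's measurements):

```python
def classify(measured_db, limit_db, tolerance_db=2.0):
    """Pass/marginal/fail against a limit with a +2 dB experimental tolerance."""
    if measured_db <= limit_db:
        return "pass"
    if measured_db <= limit_db + tolerance_db:
        return "marginal"
    return "fail"

for level in (82.0, 86.5, 90.0):          # hypothetical measured levels, dB
    print(level, classify(level, limit_db=85.0))
```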

  13. Noise Producing Toys and the Efficacy of Product Standard Criteria to Protect Health and Education Outcomes

    PubMed Central

    McLaren, Stuart J.; Page, Wyatt H.; Parker, Lou; Rushton, Martin

    2013-01-01

An evaluation of 28 commercially available toys imported into New Zealand revealed that 21% of these toys do not meet the acoustic criteria in the ISO standard, ISO 8124-1:2009 Safety of Toys, adopted by Australia and New Zealand as AS/NZS ISO 8124.1:2010. While overall the 2010 standard provided a greater level of protection than the earlier 2002 standard, there was one high-risk toy category where the 2002 standard provided greater protection. A secondary set of toys from the personal collections of children known to display atypical methods of play with toys, such as those with autism spectrum disorders (ASD), was part of the evaluation. Only one of these toys cleanly passed the 2010 standard, with the remainder failing or showing a marginal pass. As no tolerance level is stated in the standards to account for interpretation of data and experimental error, a value of +2 dB was used. The findings of the study indicate that the current standard is inadequate in providing protection against excessive noise exposure. Amendments to the criteria have been recommended that apply to the recently adopted 2013 standard; these include the integration of the new approaches published in the recently amended European standard (EN 71) on safety of toys. PMID:24452254

  14. Recommended operating procedure No. 51: Glass source assessment sampling system (glass SASS). Final report, Jul 90-Jan 91

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grote, R.A.

    1991-05-01

The report is a recommended operating procedure (ROP), prepared for use in research activities conducted by EPA's Air and Energy Engineering Research Laboratory (AEERL). The method described is applicable to the stack sampling of flue gas from a rotary kiln and to associated equipment of AEERL's Combustion Research Branch. It has been the standard method of sampling kiln flue gas due to the transient nature of the puff development and its capability to sample the maximum volume over the shortest time period. ROPs describe non-routine or experimental research operations where some judgment in application may be warranted. ROPs may not be applicable to activities conducted by other research groups, and should not be used in place of standard operating procedures. Use of ROPs must be accompanied by an understanding of the purpose and scope. Questions should be directed to the author.

  15. Measuring The Neutron Lifetime to One Second Using in Beam Techniques

    NASA Astrophysics Data System (ADS)

    Mulholland, Jonathan; NIST In Beam Lifetime Collaboration

    2013-10-01

The decay of the free neutron is the simplest nuclear beta decay and is the prototype for charged-current semi-leptonic weak interactions. A precise value for the neutron lifetime is required for consistency tests of the Standard Model and is an essential parameter in the theory of Big Bang Nucleosynthesis. A new measurement of the neutron lifetime using the in-beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. The systematic effects associated with the in-beam method are markedly different from those found in storage experiments utilizing ultracold neutrons. Experimental improvements, specifically recent advances in the determination of absolute neutron fluence, should permit an overall uncertainty of 1 second on the neutron lifetime. The technical improvements in the in-beam technique and the path toward improving the precision of the new measurement will be discussed.

  16. Measurement of erythrocyte deformability by two laser diffraction methods.

    PubMed

    Wang, X; Zhao, H; Zhuang, F Y; Stoltz, J F

    1999-01-01

The aim of this work is to study the deformability of red blood cells (RBC) by two laser diffraction methods: the Laser-assisted Optical Rotational Cell Analyser (LORCA, Mechatronics, Amsterdam, Netherlands) and a Shear Stress Diffractometer (RHEODYN SSD, Myrenne, Roetgen, Germany). Experiments were carried out on 46 healthy human subjects. The elongation index (EI) of normal and hardened RBCs (obtained by heating blood at 49 degrees C or by incubating RBCs in solutions of diamide) was measured. The results showed that the standard deviations of the experimental data for normal RBCs were relatively small, especially at high shear stresses (above 3.0 Pa), though higher than previously reported. Some correlations between the results given by the two instruments were also found. It should be noted that for hardened RBCs, the standard deviations of the measurements were large relative to the mean values for both instruments.
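The elongation index reported by such instruments is conventionally defined from the long (A) and short (B) axes of the laser diffraction pattern, EI = (A - B)/(A + B). A minimal sketch with illustrative axis values:

```python
def elongation_index(a, b):
    """EI from the long (a) and short (b) axes of the diffraction ellipse."""
    return (a - b) / (a + b)

# hypothetical axis lengths at one shear stress (arbitrary units)
print(f"EI = {elongation_index(1.4, 1.0):.3f}")
```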

  17. Iterative algorithms for a non-linear inverse problem in atmospheric lidar

    NASA Astrophysics Data System (ADS)

    Denevi, Giulia; Garbarino, Sara; Sorrentino, Alberto

    2017-08-01

We consider the inverse problem of retrieving aerosol extinction coefficients from Raman lidar measurements. In this problem the unknown and the data are related through the exponential of a linear operator, the unknown is non-negative, and the data follow the Poisson distribution. Standard methods work on the log-transformed data and solve the resulting linear inverse problem, but neglect the noise statistics. In this study we show that proper modelling of the noise distribution can substantially improve the quality of the reconstructed extinction profiles. To this end, we consider the non-linear inverse problem with a non-negativity constraint and propose two iterative algorithms derived from the Karush-Kuhn-Tucker conditions. We validate the algorithms with synthetic and experimental data. As expected, the proposed algorithms outperform standard methods in terms of sensitivity to noise and reliability of the estimated profile.
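A flavour of such KKT-derived non-negative iterations can be given with the classic multiplicative (MLEM) update for a linear Poisson model y ~ Poisson(Ax). Note the paper treats the harder exponential (lidar) forward model, so this is only an analogue of the non-negativity-preserving scheme, on toy noiseless data:

```python
A = [[1.0, 0.5],
     [0.2, 1.0],
     [0.7, 0.3]]
x_true = [2.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]  # noiseless data

x = [1.0, 1.0]                    # strictly positive starting point
for _ in range(1000):
    Ax = [sum(A[i][j] * x[j] for j in range(2)) for i in range(3)]
    # multiplicative update from the KKT conditions of the Poisson likelihood;
    # it automatically preserves non-negativity of the iterates
    x = [x[j] * sum(A[i][j] * y[i] / Ax[i] for i in range(3))
              / sum(A[i][j] for i in range(3))
         for j in range(2)]
print("recovered:", [round(v, 3) for v in x])
```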

  18. Effects of Aggregation on Blood Sedimentation and Conductivity

    PubMed Central

    Zhbanov, Alexander; Yang, Sung

    2015-01-01

The erythrocyte sedimentation rate (ESR) test has been used for over a century, and the Westergren method is routinely used in a variety of clinics. However, the mechanism of erythrocyte sedimentation remains unclear, and the 60 min required for the test seems excessive. We investigated the effects of cell aggregation during blood sedimentation on electrical conductivity at different hematocrits. A blood sample was drop cast into a small chamber with two planar electrodes placed on the bottom. The measured blood conductivity increased slightly during the first minute and decreased thereafter. We explored various methods of enhancing or retarding erythrocyte aggregation. Using experimental measurements and theoretical calculations, we show that the initial increase in blood conductivity was indeed caused by aggregation, while the subsequent decrease resulted from the deposition of erythrocytes. We present a method for calculating blood conductivity based on effective medium theory, in which erythrocytes are modeled as conducting spheroids surrounded by a thin insulating membrane. A digital camera was used to investigate the erythrocyte sedimentation behavior and the distribution of cell volume fraction in a capillary tube. Experimental observations and theoretical estimations of the settling velocity are provided. We experimentally demonstrate that disaggregated cells settle much more slowly than aggregated cells. We show that our method of measuring electrical conductivity credibly reflects the ESR: it is very sensitive to the initial stage of aggregation and sedimentation, whereas the sedimentation curve for the Westergren ESR test has a very mild slope at early times. We tested our method for rapid estimation of the Westergren ESR and show a correlation between changes in blood conductivity measured by our method and the standard Westergren ESR method. In the future, our method could be examined as a potential means of accelerating ESR tests in clinical practice. PMID:26047511
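The effective-medium calculation can be illustrated with Maxwell's mixture formula for insulating spheres in a conducting fluid; the paper's model uses membrane-coated spheroids, so this spherical special case is only the simplest analogue, with an assumed plasma conductivity:

```python
sigma_plasma = 1.5                 # S/m, assumed plasma conductivity
for h in (0.2, 0.4, 0.6):          # hematocrit (cell volume fraction)
    # Maxwell's mixture formula for non-conducting spherical inclusions
    sigma = sigma_plasma * (1.0 - h) / (1.0 + h / 2.0)
    print(f"H = {h:.1f}: sigma ~ {sigma:.3f} S/m")
```

The monotonic decrease with hematocrit is the baseline against which aggregation- and sedimentation-driven conductivity changes are interpreted.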

  19. Experimental Study on Welded Headed Studs Used In Steel Plate-Concrete Composite Structures Compared with Contactless Method of Measuring Displacement

    NASA Astrophysics Data System (ADS)

    Kisała, Dawid; Tekieli, Marcin

    2017-10-01

Steel plate-concrete composite structures are an innovative design concept in which a thin steel plate is attached to a reinforced concrete beam by means of welded headed studs. Comparison between experimental studies and theoretical analysis of this type of structure shows that its behaviour depends on the load-slip relationship of the shear connectors used to ensure sufficient bond between the concrete and steel parts of the structure. The aim of this paper is to describe an experimental study on headed studs used in steel plate-concrete composite structures. Push-out tests were carried out to investigate the behaviour of the shear connectors. The test specimens were prepared according to the standard push-out test; however, instead of an I-beam, a 16 mm thick steel plate was used to better reflect the conditions in the real structure. The test specimens were produced in two batches using concrete with significantly different compressive strengths. The experimental study was carried out on twelve specimens. Besides traditional measurements based on LVDT sensors, optical measurements based on the digital image correlation (DIC) method and pattern tracking methods were used. DIC is a full-field contactless optical method for measuring displacements in experimental testing, based on the correlation of digital images taken during test execution. With respect to conventional methods, optical measurements offer a wider scope of results and can give more information about material or structural behaviour during the test. The ultimate load capacity and load-slip curves obtained from the experiments were compared with values calculated according to the Eurocodes and the American and Chinese design specifications. It was observed that the use of relationships developed for traditional steel-concrete composite structures is justified for the ultimate load capacity of shear connectors in steel plate-concrete composite structures.

  20. Experimental Impedance of Single Liner Elements with Bias Flow

    NASA Technical Reports Server (NTRS)

    Follet, J. I.; Betts, J. F.; Kelly, Jeffrey J.; Thomas, Russell H.

    2000-01-01

    An experimental investigation was conducted to generate a high quality database, from which the effects of a mean bias flow on the acoustic impedance of lumped-element single-degree-of-freedom liners was determined. Acoustic impedance measurements were made using the standard two-microphone method in the NASA Langley Normal Incidence Tube. Each liner consisted of a perforated sheet with a constant-area cavity. Liner resistance was shown to increase and to become less frequency and sound pressure level dependent as the bias flow was increased. The resistance was also consistently lower for a negative bias flow (suction) than for a positive bias flow (blowing) of equal magnitude. The slope of the liner reactance decreased with increased flow.
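The standard two-microphone (transfer-function) method recovers the complex reflection coefficient, and hence the normalized impedance, from the transfer function between the two microphones. A self-consistent round-trip sketch with illustrative geometry (not the NASA Langley setup):

```python
import cmath
from math import pi

k = 2 * pi * 1000.0 / 343.0           # wavenumber at 1 kHz, c = 343 m/s
x1, x2 = 0.10, 0.08                   # microphone distances from the sample, m
R_true = 0.6 * cmath.exp(1j * 0.4)    # assumed complex reflection coefficient

def pressure(x, R):
    """Incident plus reflected plane wave at distance x from the sample."""
    return cmath.exp(1j * k * x) + R * cmath.exp(-1j * k * x)

H12 = pressure(x2, R_true) / pressure(x1, R_true)   # measured transfer function

# invert the transfer function for R, then form the normalized impedance
R = ((cmath.exp(1j * k * x2) - H12 * cmath.exp(1j * k * x1))
     / (H12 * cmath.exp(-1j * k * x1) - cmath.exp(-1j * k * x2)))
z = (1 + R) / (1 - R)                 # normalized surface impedance
print(f"|R| = {abs(R):.3f}, z = {z:.3f}")
```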

  1. Standards for data acquisition and software-based analysis of in vivo electroencephalography recordings from animals. A TASK1-WG5 report of the AES/ILAE Translational Task Force of the ILAE.

    PubMed

    Moyer, Jason T; Gnatkovsky, Vadym; Ono, Tomonori; Otáhal, Jakub; Wagenaar, Joost; Stacey, William C; Noebels, Jeffrey; Ikeda, Akio; Staley, Kevin; de Curtis, Marco; Litt, Brian; Galanopoulou, Aristea S

    2017-11-01

    Electroencephalography (EEG)-the direct recording of the electrical activity of populations of neurons-is a tremendously important tool for diagnosing, treating, and researching epilepsy. Although standard procedures for recording and analyzing human EEG exist and are broadly accepted, there are no such standards for research in animal models of seizures and epilepsy-recording montages, acquisition systems, and processing algorithms may differ substantially among investigators and laboratories. The lack of standard procedures for acquiring and analyzing EEG from animal models of epilepsy hinders the interpretation of experimental results and reduces the ability of the scientific community to efficiently translate new experimental findings into clinical practice. Accordingly, the intention of this report is twofold: (1) to review current techniques for the collection and software-based analysis of neural field recordings in animal models of epilepsy, and (2) to offer pertinent standards and reporting guidelines for this research. Specifically, we review current techniques for signal acquisition, signal conditioning, signal processing, data storage, and data sharing, and include applicable recommendations to standardize collection and reporting. We close with a discussion of challenges and future opportunities, and include a supplemental report of currently available acquisition systems and analysis tools. This work represents a collaboration on behalf of the American Epilepsy Society/International League Against Epilepsy (AES/ILAE) Translational Task Force (TASK1-Workgroup 5), and is part of a larger effort to harmonize video-EEG interpretation and analysis methods across studies using in vivo and in vitro seizure and epilepsy models. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  2. Phased-array vector velocity estimation using transverse oscillations.

    PubMed

    Pihl, Michael J; Marcher, Jonne; Jensen, Jorgen A

    2012-12-01

    A method for estimating the 2-D vector velocity of blood using a phased-array transducer is presented. The approach is based on the transverse oscillation (TO) method. The purposes of this work are to expand the TO method to a phased-array geometry and to broaden the potential clinical applicability of the method. A phased-array transducer has a smaller footprint and a larger field of view than a linear array, and is therefore more suited for, e.g., cardiac imaging. The method relies on suitable TO fields, and a beamforming strategy employing diverging TO beams is proposed. The implementation of the TO method using a phased-array transducer for vector velocity estimation is evaluated through simulation and flow-rig measurements are acquired using an experimental scanner. The vast number of calculations needed to perform flow simulations makes the optimization of the TO fields a cumbersome process. Therefore, three performance metrics are proposed. They are calculated based on the complex TO spectrum of the combined TO fields. It is hypothesized that the performance metrics are related to the performance of the velocity estimates. The simulations show that the squared correlation values range from 0.79 to 0.92, indicating a correlation between the performance metrics of the TO spectrum and the velocity estimates. Because these performance metrics are much more readily computed, the TO fields can be optimized faster for improved velocity estimation of both simulations and measurements. For simulations of a parabolic flow at a depth of 10 cm, a relative (to the peak velocity) bias and standard deviation of 4% and 8%, respectively, are obtained. Overall, the simulations show that the TO method implemented on a phased-array transducer is robust with relative standard deviations around 10% in most cases. The flow-rig measurements show similar results. 
At a depth of 9.5 cm using 32 emissions per estimate, the relative standard deviation is 9% and the relative bias is -9%. At the center of the vessel, the velocity magnitude is estimated to be 0.25 ± 0.023 m/s, compared with an expected peak velocity magnitude of 0.25 m/s, and the beam-to-flow angle is calculated to be 89.3° ± 0.77°, compared with an expected angle between 89° and 90°. For steering angles up to ±20°, the relative standard deviation is less than 20%. The results also show that a 64-element transducer implementation is feasible, but with poorer performance compared with a 128-element transducer. The simulation and experimental results demonstrate that the TO method is suitable for use with a phased-array transducer, and that 2-D vector velocity estimation is possible down to a depth of 15 cm.

  3. Minimum Information about a Spinal Cord Injury Experiment: A Proposed Reporting Standard for Spinal Cord Injury Experiments

    PubMed Central

    Ferguson, Adam R.; Popovich, Phillip G.; Xu, Xiao-Ming; Snow, Diane M.; Igarashi, Michihiro; Beattie, Christine E.; Bixby, John L.

    2014-01-01

    Abstract The lack of reproducibility in many areas of experimental science has a number of causes, including a lack of transparency and precision in the description of experimental approaches. This has far-reaching consequences, including wasted resources and slowing of progress. Additionally, the large number of laboratories around the world publishing articles on a given topic make it difficult, if not impossible, for individual researchers to read all of the relevant literature. Consequently, centralized databases are needed to facilitate the generation of new hypotheses for testing. One strategy to improve transparency in experimental description, and to allow the development of frameworks for computer-readable knowledge repositories, is the adoption of uniform reporting standards, such as common data elements (data elements used in multiple clinical studies) and minimum information standards. This article describes a minimum information standard for spinal cord injury (SCI) experiments, its major elements, and the approaches used to develop it. Transparent reporting standards for experiments using animal models of human SCI aim to reduce inherent bias and increase experimental value. PMID:24870067

  4. [Histochemical stains for minerals by hematoxylin-lake method].

    PubMed

    Miyagawa, Makoto

    2013-04-01

The present study was undertaken to establish an experimental animal model for histological staining of minerals. After intraperitoneal injections of minerals, precipitates deposited on the surface of the liver. Liver tissues were fixed in paraformaldehyde, embedded in paraffin, and cut into thin sections, which were used as mineral-containing standard sections. Several reagents for histological staining and spectrophotometry of minerals were applied in both test-tube experiments and staining of tissue sections. Hematoxylin-lake was found to be capable of staining minerals in tissue. A simple technique for the light microscopic detection of minerals is described.

  5. Portable emittance measurement device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liakin, D.; Seleznev, D.; Orlov, A.

    2010-02-15

A portable emittance measurement device has been developed at the Institute for Theoretical and Experimental Physics (ITEP). It provides emittance measurements by both the "pepper-pot" and "two-slits" methods. Depending on the measurement method, either slits or a pepper-pot mask with a scintillator are mounted on two actuators and installed in two standard Balzers cross chambers with CF-100 flanges. To match the angular resolution to the measured beam, the length of the stainless steel pipe between the two crosses is adjusted. The design of the device and the results of emittance measurements at the ITEP ion source test bench are presented.

  6. The results of determining the gravity potential difference on the measurement of the relativistic frequency shift of the mobile frequency standard

    NASA Astrophysics Data System (ADS)

    Gienko, Elena; Kanushin, Vadim; Tolstikov, Alexander; Karpik, Alexander; Kosarev, Nikolay; Ganagina, Irina

    2016-04-01

    In 2015, research supported by Russian Science Foundation grant No. 14-27-00068 experimentally confirmed the possibility of measuring the gravity potential difference from the relativistic frequency shift of the mobile hydrogen standard CH1-1006 (relative frequency instability of the order of 10^-14). The hydrogen frequency standard CH1-1006 was calibrated against the secondary standard WET 1-19 (SNIIM, Novosibirsk, Russia) and transported to the experiment site (a distance of 550 km, Russian Federation, Republic of Altai), where it was moved between the measured points, which were 35 km apart with a height difference of 850 meters. To synchronize the spatially separated standard CH1-1006 and the secondary standard WET 1-19, the "CommonView" method was applied, based on processing pseudorange and phase GNSS measurements at the clock locations. The frequency change of standard CH1-1006, measured against the secondary standard WET 1-19 and associated with its movement between the points and the corresponding change of gravitational potential, was 7.98×10^-14. The root-mean-square two-sample frequency deviation of the standard over the time interval of the experiment was 7.27×10^-15. To control the frequency-based determination of the gravity potential difference between the points, high-precision gravimetric measurements with an error of 6 μGal and GNSS coordinate determinations in ITRF2008 with an accuracy of 2-5 cm were performed. The difference between the frequency-based determination of the gravity potential difference and the control data from GNSS and gravimetric measurements was about 16% of the total value, which corresponds to the error of frequency measurement in the experiment.
The possibility of using a single movable frequency standard to determine the gravity potential difference between spaced points by the "CommonView" method, without optical links between the base and mobile frequency standards, was thus demonstrated. Future improvements in the engineering of frequency standards, and in the measurement technique developed in the course of our experiments, will allow development of one of the most promising areas of relativistic geodesy: autonomous measurement of heights in a common world system, currently a vitally important problem of geodesy. The practical results obtained offer further opportunities for more accurate planning of experimental research and for the creation of a global relativistic geoid as the basis of a unified global system of heights.
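
    The measurement rests on the weak-field gravitational redshift relation Δf/f = ΔW/c² ≈ gΔh/c². A back-of-the-envelope sketch (the local gravity value is an assumed illustrative constant; the experiment senses the true potential difference, so the reported 7.98×10^-14 need not equal this simple g·Δh estimate):

```python
# Weak-field gravitational redshift: df/f = dW / c**2, with dW ~ g*dh.
C = 299_792_458.0   # speed of light, m/s
G_LOCAL = 9.81      # assumed local gravity, m/s**2 (illustrative value)

def fractional_shift(delta_h_m, g=G_LOCAL):
    """Expected fractional frequency shift for a height difference."""
    return g * delta_h_m / C ** 2

def potential_difference(df_over_f):
    """Invert the relation: recover dW from a measured shift."""
    return df_over_f * C ** 2
```

    For the 850 m height difference above, fractional_shift(850.0) is roughly 9×10^-14, the same order as the measured shift.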

  7. A Fast, Efficient Domain Adaptation Technique for Cross-Domain Electroencephalography(EEG)-Based Emotion Recognition

    PubMed Central

    Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou

    2017-01-01

    Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. 
The subject-to-subject offline experimental results demonstrate that our method achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement over the best baseline, LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training samples and test samples are kept within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition. PMID:28467371
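
    ASFM's actual subspace matching involves an eigendecomposition of the source and target feature spaces. As a deliberately simplified, standard-library illustration of the marginal-distribution-matching idea only (not the authors' algorithm), one can shift and rescale each source feature to the target's first two moments:

```python
import statistics

def match_marginals(source, target):
    """Shift/rescale each source feature to the target's mean and std.
    A deliberately simplified, per-feature stand-in for ASFM's
    subspace-level distribution matching (hypothetical helper)."""
    cols_s = list(zip(*source))
    cols_t = list(zip(*target))
    params = []
    for cs, ct in zip(cols_s, cols_t):
        ms, ss = statistics.mean(cs), statistics.pstdev(cs)
        mt, st = statistics.mean(ct), statistics.pstdev(ct)
        scale = st / ss if ss else 1.0
        params.append((ms, mt, scale))
    return [[(v - ms) * scale + mt
             for v, (ms, mt, scale) in zip(row, params)]
            for row in source]
```

    A classifier trained on the transformed source samples then sees inputs whose per-feature statistics resemble the (unlabeled) target domain.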

  8. A Fast, Efficient Domain Adaptation Technique for Cross-Domain Electroencephalography(EEG)-Based Emotion Recognition.

    PubMed

    Chai, Xin; Wang, Qisong; Zhao, Yongping; Li, Yongqiang; Liu, Dan; Liu, Xin; Bai, Ou

    2017-05-03

    Electroencephalography (EEG)-based emotion recognition is an important element in psychiatric health diagnosis for patients. However, the underlying EEG sensor signals are always non-stationary if they are sampled from different experimental sessions or subjects. This results in the deterioration of the classification performance. Domain adaptation methods offer an effective way to reduce the discrepancy of marginal distribution. However, for EEG sensor signals, both marginal and conditional distributions may be mismatched. In addition, the existing domain adaptation strategies always require a high level of additional computation. To address this problem, a novel strategy named adaptive subspace feature matching (ASFM) is proposed in this paper in order to integrate both the marginal and conditional distributions within a unified framework (without any labeled samples from target subjects). Specifically, we develop a linear transformation function which matches the marginal distributions of the source and target subspaces without a regularization term. This significantly decreases the time complexity of our domain adaptation procedure. As a result, both marginal and conditional distribution discrepancies between the source domain and unlabeled target domain can be reduced, and logistic regression (LR) can be applied to the new source domain in order to train a classifier for use in the target domain, since the aligned source domain follows a distribution which is similar to that of the target domain. We compare our ASFM method with six typical approaches using a public EEG dataset with three affective states: positive, neutral, and negative. Both offline and online evaluations were performed. 
The subject-to-subject offline experimental results demonstrate that our method achieves a mean accuracy and standard deviation of 80.46% and 6.84%, respectively, as compared with a state-of-the-art method, the subspace alignment auto-encoder (SAAE), which achieves values of 77.88% and 7.33% on average, respectively. For the online analysis, the average classification accuracy and standard deviation of ASFM in the subject-to-subject evaluation for all 15 subjects in the dataset were 75.11% and 7.65%, respectively, a significant performance improvement over the best baseline, LR, which achieves 56.38% and 7.48%, respectively. The experimental results confirm the effectiveness of the proposed method relative to state-of-the-art methods. Moreover, the computational efficiency of the proposed ASFM method is much better than that of standard domain adaptation; if the numbers of training samples and test samples are kept within a certain range, it is suitable for real-time classification. It can be concluded that ASFM is a useful and effective tool for decreasing domain discrepancy and reducing performance degradation across subjects and sessions in the field of EEG-based emotion recognition.

  9. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    PubMed

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.

  10. A simple, rapid and sensitive RP-HPLC-UV method for the simultaneous determination of sorafenib & paclitaxel in plasma and pharmaceutical dosage forms: Application to pharmacokinetic study.

    PubMed

    Khan, Ismail; Iqbal, Zafar; Khan, Abad; Hassan, Muhammad; Nasir, Fazle; Raza, Abida; Ahmad, Lateef; Khan, Amjad; Akhlaq Mughal, Muhammad

    2016-10-15

    A simple, economical, fast, and sensitive RP-HPLC-UV method has been developed for the simultaneous quantification of sorafenib and paclitaxel in biological samples and formulations using piroxicam as an internal standard. The experimental conditions were optimized and the method was validated according to the standard guidelines. Separation of both analytes and the internal standard was achieved on a Discovery HS C18 column (250 mm × 4.6 mm, 5 μm) using acetonitrile and 0.025% TFA (65:35, v/v) as the mobile phase in isocratic mode at a flow rate of 1 ml/min, with detection at 245 nm, a column oven temperature of 25 °C, and a short run time of 12 min. The lower limits of detection (LLOD) were 5 and 10 ng/ml and the lower limits of quantification (LLOQ) were 10 and 15 ng/ml for sorafenib and paclitaxel, respectively. Sorafenib, paclitaxel and piroxicam (IS) were extracted from biological samples using acetonitrile as a precipitating and extraction solvent. The method is linear in the range of 15-20,000 ng/ml for paclitaxel and 10-5000 ng/ml for sorafenib. The method is sensitive and reliable with respect to both its intra-day and inter-day coefficients of variance. The method was successfully applied to the quantification of the above-mentioned drugs in plasma, and will be applied to sorafenib and paclitaxel pharmacokinetic studies in animal models. Copyright © 2016 Elsevier B.V. All rights reserved.
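
    Internal-standard quantification of this kind conventionally works on the ratio of analyte to internal-standard peak areas: calibrate the ratio against known concentrations, then invert the fitted line for unknowns. A generic sketch (hypothetical helper names, not tied to this paper's data):

```python
def fit_line(conc, ratio):
    """Least-squares line through (concentration, area-ratio) points."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(ratio) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, ratio))
    slope = sxy / sxx
    return my - slope * mx, slope          # (intercept, slope)

def concentration(area_analyte, area_is, intercept, slope):
    """Invert the calibration for an unknown sample."""
    return (area_analyte / area_is - intercept) / slope
```

    Using the area ratio rather than the raw analyte area cancels run-to-run variation in injection volume and extraction recovery, which is the point of adding an internal standard.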

  11. An Improved Method of Predicting Extinction Coefficients for the Determination of Protein Concentration.

    PubMed

    Hilario, Eric C; Stern, Alan; Wang, Charlie H; Vargas, Yenny W; Morgan, Charles J; Swartz, Trevor E; Patapoff, Thomas W

    2017-01-01

    Concentration determination is an important method of protein characterization required in the development of protein therapeutics. There are many known methods for determining the concentration of a protein solution, but the easiest to implement in a manufacturing setting is absorption spectroscopy in the ultraviolet region. For typical proteins composed of the standard amino acids, absorption at wavelengths near 280 nm is due to the three amino acid chromophores tryptophan, tyrosine, and phenylalanine, in addition to a contribution from disulfide bonds. According to the Beer-Lambert law, absorbance is proportional to concentration and path length, with the proportionality constant being the extinction coefficient. Typically, the extinction coefficient of a protein is determined experimentally by measuring the absorbance of a solution and then determining its concentration, a measurement with some inherent variability depending on the method used. In this study, extinction coefficients were calculated based on the measured absorbance of model compounds of the four chromophores. These calculated values for an unfolded protein were then compared with an experimental concentration determination based on enzymatic digestion of proteins. The experimentally determined extinction coefficient for the native proteins was consistently found to be 1.05 times the calculated value for the unfolded proteins, for a wide range of proteins, with good accuracy and precision under well-controlled experimental conditions. The value of 1.05 times the calculated value was termed the predicted extinction coefficient. Statistical analysis shows that the differences between predicted and experimentally determined coefficients are scattered randomly, indicating no systematic bias between the values among the proteins measured. The predicted extinction coefficient was found to be accurate and not subject to the inherent variability of experimental methods.
We propose the use of a predicted extinction coefficient for determining the protein concentration of therapeutic proteins starting from early development through the lifecycle of the product. LAY ABSTRACT: Knowing the concentration of a protein in a pharmaceutical solution is important to the drug's development and posology. There are many ways to determine the concentration, but the easiest one to use in a testing lab employs absorption spectroscopy. Absorbance of ultraviolet light by a protein solution is proportional to its concentration and path length; the proportionality constant is the extinction coefficient. The extinction coefficient of a protein therapeutic is usually determined experimentally during early product development and has some inherent method variability. In this study, extinction coefficients of several proteins were calculated based on the measured absorbance of model compounds. These calculated values for an unfolded protein were then compared with experimental concentration determinations based on enzymatic digestion of the proteins. The experimentally determined extinction coefficient for the native protein was 1.05 times the calculated value for the unfolded protein with good accuracy and precision under controlled experimental conditions, so the value of 1.05 times the calculated coefficient was called the predicted extinction coefficient. Comparison of predicted and measured extinction coefficients indicated that the predicted value was very close to the experimentally determined values for the proteins. The predicted extinction coefficient was accurate and removed the variability inherent in experimental methods. © PDA, Inc. 2017.
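
    The calculation described above condenses to a few lines. The chromophore coefficients below are the widely cited Pace-style values, inserted here as assumed placeholders for the paper's own model-compound measurements; the 1.05 factor is the paper's empirical native-state correction:

```python
# Molar extinction coefficients at 280 nm (M^-1 cm^-1). Widely cited
# chromophore values, used here as placeholders for the paper's own
# model-compound data.
EPS_280 = {"Trp": 5500.0, "Tyr": 1490.0, "cystine": 125.0}

def calculated_eps(n_trp, n_tyr, n_cystine):
    """Unfolded-protein extinction coefficient from composition."""
    return (n_trp * EPS_280["Trp"] + n_tyr * EPS_280["Tyr"]
            + n_cystine * EPS_280["cystine"])

def predicted_eps(n_trp, n_tyr, n_cystine, factor=1.05):
    """Native-protein prediction: 1.05 x the unfolded calculation."""
    return factor * calculated_eps(n_trp, n_tyr, n_cystine)

def concentration_molar(a280, eps, path_cm=1.0):
    """Beer-Lambert: A = eps * c * l  =>  c = A / (eps * l)."""
    return a280 / (eps * path_cm)
```

    With the predicted coefficient in hand, routine concentration checks reduce to a single A280 reading and a division.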

  12. First-Principles Lattice Dynamics Method for Strongly Anharmonic Crystals

    NASA Astrophysics Data System (ADS)

    Tadano, Terumasa; Tsuneyuki, Shinji

    2018-04-01

    We review our recent development of a first-principles lattice dynamics method that can treat anharmonic effects nonperturbatively. The method is based on the self-consistent phonon theory, and temperature-dependent phonon frequencies can be calculated efficiently by incorporating recent numerical techniques to estimate anharmonic force constants. The validity of our approach is demonstrated through applications to cubic strontium titanate, where overall good agreement with experimental data is obtained for phonon frequencies and lattice thermal conductivity. We also show the feasibility of highly accurate calculations based on a hybrid exchange-correlation functional within the present framework. Our method provides a new way of studying lattice dynamics in severely anharmonic materials where the standard harmonic approximation and the perturbative approach break down.

  13. HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.

    PubMed

    Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar

    2017-01-01

    DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve this problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile based features to the DNA-binding protein prediction problem. We applied Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
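
    One common way to realize monogram and bigram features from an L × K profile matrix (a generic sketch of the idea, not necessarily HMMBinder's exact definition) is to average single-position values and consecutive-position products:

```python
def monogram(profile):
    """Average of each profile column over the L positions (K values)."""
    L, K = len(profile), len(profile[0])
    return [sum(row[k] for row in profile) / L for k in range(K)]

def bigram(profile):
    """Averaged products of consecutive position vectors (K*K values)."""
    L, K = len(profile), len(profile[0])
    return [sum(profile[i][m] * profile[i + 1][n] for i in range(L - 1))
            / (L - 1)
            for m in range(K) for n in range(K)]
```

    Both feature vectors have a fixed length (K and K², respectively) regardless of sequence length, which is what makes them usable as SVM inputs.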

  14. Influence analysis in quantitative trait loci detection.

    PubMed

    Dou, Xiaoling; Kuriki, Satoshi; Maeno, Akiteru; Takada, Toyoyuki; Shiroishi, Toshihiko

    2014-07-01

    This paper presents systematic methods for the detection of influential individuals that affect the log odds (LOD) score curve. We derive general formulas of influence functions for profile likelihoods and introduce them into two standard quantitative trait locus detection methods: the interval mapping method and single marker analysis. Besides influence analysis on specific LOD scores, we also develop influence analysis methods on the shape of the LOD score curves. A simulation-based method is proposed to assess the significance of the influence of individuals. These methods are shown to be useful in the influence analysis of a real dataset of an experimental population from an F2 mouse cross. By receiver operating characteristic analysis, we confirm that the proposed methods show better performance than existing diagnostics. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
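
    The basic case-deletion idea behind such influence diagnostics can be sketched with a simple stand-in score: a squared genotype-phenotype correlation rather than a true LOD, with hypothetical helper names (the paper's influence functions are analytic, not brute-force refits):

```python
import math

def corr(xs, ys):
    """Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def marker_stat(genotypes, trait):
    """Squared correlation: a stand-in for a LOD-type marker score."""
    r = corr(genotypes, trait)
    return r * r

def influences(genotypes, trait):
    """Case-deletion influence: change in the score when individual i
    is dropped and the statistic is recomputed."""
    full = marker_stat(genotypes, trait)
    return [full - marker_stat(genotypes[:i] + genotypes[i + 1:],
                               trait[:i] + trait[i + 1:])
            for i in range(len(genotypes))]
```

    An individual whose deletion changes the score far more than the others' is flagged as influential, which is the behavior the paper's diagnostics formalize for whole LOD curves.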

  15. Development and single-laboratory validation of an HPLC method for the determination of cyclamate sweetener in foodstuffs.

    PubMed

    Scotter, M J; Castle, L; Roberts, D P T; Macarthur, R; Brereton, P A; Hasnip, S K; Katz, N

    2009-05-01

    A method for the determination of cyclamate has been developed and single-laboratory validated for a range of foodstuffs including carbonated and fruit-juice drinks, fruit preserves, spreads, and dairy desserts. The method uses the peroxide oxidation of cyclamate to cyclohexylamine, followed by derivatization with trinitrobenzenesulfonic acid and analysis by modified reversed-phase high-performance liquid chromatography with ultraviolet detection (HPLC-UV). Cycloheptylamine is used as an internal standard. The limits of detection were in the range 1-20 mg kg(-1), and the analysis was linear up to 1300 mg kg(-1) cyclamic acid in foods and up to 67 mg l(-1) in beverages. Analytical recovery was between 82% and 123%, and results were recovery corrected. Precision was within experimentally predicted levels for all of the matrices tested, and Horrat values for the combined standard uncertainty associated with the measurement of cyclamate were between 0.4 (water-based drinks) and 1.7 (spreads). The method was used successfully to test three soft drink samples for homogeneity before analytical performance assessment. The method is recommended for use in monitoring compliance and for formal testing by collaborative trial.

  16. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods.

    PubMed

    Olsen, Morten Tange; Bérubé, Martine; Robbins, Jooke; Palsbøll, Per J

    2012-09-06

    Telomeres, the protective caps of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common approaches to telomere length estimation, but it has recently been criticized for being error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of the experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and the choice between singleplex and multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst-performing assays. Our results suggest that seemingly well-performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps. We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments.
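
    Relative telomere length from qPCR is commonly computed Pfaffl-style from the telomere (T) and single-copy gene (S) Cq values together with the per-assay amplification efficiencies E, which is exactly where the 40% efficiency differences above bite. A minimal sketch (one common formulation, not necessarily this study's pipeline):

```python
def relative_ts(e_t, e_s, cq_t_ref, cq_t_sample, cq_s_ref, cq_s_sample):
    """Efficiency-corrected relative telomere length (Pfaffl-style):
    E_T**(Cq_ref - Cq_sample) for the telomere assay divided by the
    same quantity for the single-copy (S) reference assay."""
    return (e_t ** (cq_t_ref - cq_t_sample)
            / e_s ** (cq_s_ref - cq_s_sample))
```

    With both efficiencies at the ideal 2.0, a sample whose telomere Cq comes one cycle earlier than the reference, at identical single-copy Cq, has twice the relative telomere signal; miscalibrated efficiencies distort this exponent directly.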

  17. Empirical evaluation of humpback whale telomere length estimates; quality control and factors causing variability in the singleplex and multiplex qPCR methods

    PubMed Central

    2012-01-01

    Background Telomeres, the protective caps of chromosomes, have emerged as powerful markers of biological age and life history in model and non-model species. The qPCR method is one of the most common approaches to telomere length estimation, but it has recently been criticized for being error-prone and yielding unreliable results. This critique coincides with an increasing awareness of the potentials and limitations of the qPCR technique in general and the proposal of a general set of guidelines (MIQE) for standardization of the experimental, analytical, and reporting steps of qPCR. In order to evaluate the utility of the qPCR method for telomere length estimation in non-model species, we carried out four different qPCR assays directed at humpback whale telomeres, and subsequently performed a rigorous quality control to evaluate the performance of each assay. Results Performance differed substantially among assays and only one assay was found useful for telomere length estimation in humpback whales. The most notable factors causing these inter-assay differences were primer design and the choice between singleplex and multiplex assays. Inferred amplification efficiencies differed by up to 40% depending on assay and quantification method; however, this variation only affected telomere length estimates in the worst-performing assays. Conclusion Our results suggest that seemingly well-performing qPCR assays may contain biases that will only be detected by extensive quality control. Moreover, we show that the qPCR method for telomere length estimation can be highly precise and accurate, and thus suitable for telomere measurement in non-model species, if effort is devoted to optimization at all experimental and analytical steps.
We conclude by highlighting a set of quality controls which may serve for further standardization of the qPCR method for telomere length estimation, and discuss some of the factors that may cause variation in qPCR experiments. PMID:22954451

  18. Integrated Experimental and Numerical Research on the Aerodynamics of Unsteady Moving Aircraft

    DTIC Science & Technology

    2007-06-01

    blended wing body configuration were tested in different modes of oscillatory motion (roll, pitch and yaw), as well as delta wing geometries like the X-31...airplane configurations (e.g. wide body, green aircraft, blended wing body) the approach up to now using semi-empirical methods as standard...cross section wing. In order to evaluate the influence of individual components of the tested airplane configuration, such as winglets, vertical or

  19. CJEP will offer open science badges.

    PubMed

    Pexman, Penny M

    2017-03-01

    This editorial announces the decision of the Canadian Journal of Experimental Psychology (CJEP) to offer Open Science Framework (OSF) Badges. The Centre for Open Science provides tools to facilitate open science practices. These include the OSF badges. The badges acknowledge papers that meet standards for openness of data, methods, or research process. They are now described in the CJEP Submission Guidelines, and are provided in the editorial. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Time Course of Recovery Following Nitrous Oxide/Oxygen Administration1

    PubMed Central

    Herwig, Larry D.; Milam, Stephen B.; Jones, Daniel L.

    1984-01-01

    The time course of recovery following a brief exposure to 50% N2O in O2 was assessed using a standard psychomotor test, a subjective ranking of experimental pain, and somatosensory evoked potential recordings. Results of this study suggest that recovery from a brief N2O exposure may be prolonged and conventional methods of assessing recovery from CNS active drugs like N2O may be inadequate. PMID:6591845

  1. Performance evaluation of a mobile satellite system modem using an ALE method

    NASA Technical Reports Server (NTRS)

    Ohsawa, Tomoki; Iwasaki, Motoya

    1990-01-01

    Experimental performance of a newly designed demodulation concept is presented. The concept applies an Adaptive Line Enhancer (ALE) to the carrier recovery circuit, which makes the pull-in time significantly shorter under noisy, large-carrier-offset conditions. The new demodulation concept was implemented as an INMARSAT Standard-C modem and evaluated. In the performance evaluation, a pull-in time of 50 symbols was confirmed under a 4 dB Eb/No condition.
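
    An ALE is essentially a delayed-input LMS predictor: the adaptive filter can only forecast the correlated (narrowband carrier) part of its input, so the predictor output suppresses wideband noise ahead of carrier recovery. A small illustrative sketch (tap count, delay and step size are arbitrary choices, not the modem's design values):

```python
def ale(signal, taps=8, delay=1, mu=0.01):
    """Adaptive line enhancer: an LMS filter predicts the current
    sample from delayed past samples; only the correlated (narrowband)
    component is predictable, so the output rejects wideband noise."""
    w = [0.0] * taps
    out = []
    for n in range(len(signal)):
        x = [signal[n - delay - k] if n - delay - k >= 0 else 0.0
             for k in range(taps)]
        y = sum(wi * xi for wi, xi in zip(w, x))   # predictor output
        e = signal[n] - y                          # prediction error
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, x)]
        out.append(y)
    return out
```

    Fed a sinusoidal carrier, the prediction error shrinks as the weights converge, leaving a cleaned tone at the output.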

  2. Technical aspects and recommendations for single-cell qPCR.

    PubMed

    Ståhlberg, Anders; Kubista, Mikael

    2018-02-01

    Single cells are basic physiological and biological units that can function individually as well as in groups in tissues and organs. It is essential to identify, characterize and profile single cells at the molecular level to be able to distinguish different kinds, to understand their functions and to determine how they interact with each other. During the last decade several technologies for single-cell profiling have been developed and used in various applications, revealing many novel findings. Quantitative PCR (qPCR) is one of the most developed methods for single-cell profiling and can be used to interrogate several analytes, including DNA, RNA and protein. Single-cell qPCR has the potential to become routine methodology, but the technique is still challenging, as it involves several experimental steps and only small numbers of molecules are handled. Here, we discuss technical aspects and provide recommendations for single-cell qPCR analysis. The workflow includes experimental design, sample preparation, single-cell collection, direct lysis, reverse transcription, preamplification, qPCR and data analysis. Detailed reporting and sharing of experimental details and data will promote further development and make validation studies possible. Efforts aiming to standardize single-cell qPCR open up means to move single-cell analysis from specialized research settings to standard research laboratories. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Optimized Extraction of Polysaccharides from Grateloupia livida (Harv.) Yamada and Biological Activities.

    PubMed

    Ye, Danyan; Jiang, Zebin; Zheng, Fuchun; Wang, Hongmei; Zhang, Yanmei; Gao, Fenfei; Chen, Peihong; Chen, Yicun; Shi, Ganggang

    2015-09-16

    Polysaccharides from Grateloupia livida (Harv.) Yamada (GL) were extracted by a heating circumfluence method. Single-factor experiments were performed for three parameters, extraction time (X₁), extraction temperature (X₂) and the ratio of water to raw material (X₃), to establish their test ranges. Based on these preliminary results, a Box-Behnken response surface design was applied to optimize the polysaccharide extraction conditions, and the experimental data were fitted to a second-order polynomial equation. The optimal conditions were an extraction time of 5 h, an extraction temperature of 100 °C and a water-to-raw-material ratio of 70 mL/g. Under these conditions, the experimental yield was 39.22% ± 0.09%, which matched the predicted value (39.25%) well, with a coefficient of determination (R²) of 0.9774. GL polysaccharides showed scavenging activity toward DPPH and hydroxyl radicals in vitro; the scavenging rates for both radicals peaked at a GL concentration of 20 mg/mL. However, the positive standard, ascorbic acid (VC), possessed stronger antioxidant activity than the GL polysaccharides. Furthermore, the anticancer activity of GL polysaccharides against HepG2 cell proliferation increased dose- and time-dependently, although the positive standard, 5-fluorouracil (5-FU), showed more significant anticancer activity in this study. Overall, GL polysaccharides may have potential applications in the medical and food industries.
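
    The second-order polynomial fit at the heart of such response-surface optimization can be illustrated in one factor: fit a quadratic exactly through three (level, response) points and take its stationary point. A minimal sketch with illustrative values, not the paper's data:

```python
def quad_fit(p0, p1, p2):
    """Coefficients (b0, b1, b2) of y = b0 + b1*x + b2*x**2 passing
    exactly through three (level, response) points."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    d1 = (y1 - y0) / (x1 - x0)                     # divided differences
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    return y0 - d1 * x0 + d2 * x0 * x1, d1 - d2 * (x0 + x1), d2

def optimum(b1, b2):
    """Stationary point of the fitted quadratic: x* = -b1 / (2*b2)."""
    return -b1 / (2.0 * b2)
```

    A full Box-Behnken model does the same thing in three factors at once, adding cross-terms (X₁X₂, etc.) and solving the resulting least-squares system, but the optimum is still read off from the fitted surface's stationary point.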

  4. Using experimental data to test and improve SUSY theories

    NASA Astrophysics Data System (ADS)

    Wang, Ting

    There are several pieces of evidence that our world is described by a supersymmetric extension of the Standard Model. In this thesis, I assume this is the case and study how to use experimental data to test and improve supersymmetric standard models. Several experimental signatures and their implications are covered in this thesis: the measured branching ratio of b → sγ is used to put constraints on SUSY models; the measured time-dependent CP asymmetry in the B → φKS process is used to test unification scale models; the excess of positrons from cosmic rays helps us to test the properties of the Lightest Supersymmetric Particle and the Cold Dark Matter production mechanisms; the LEP Higgs search results are used to classify SUSY models; SUSY signatures at the Tevatron are used to distinguish different unification scale models; and, by considering the mu problem, SUSY theories are improved. Due to the large unknown parameter space, all of the above inputs should be used to partially reconstruct the soft Lagrangian, which is the central part of the model. Combining the results from these analyses, a significant amount of knowledge about the underlying theory has been learned. More data will become available in the next several years, and the methods and results in this thesis will be useful for dealing with them.

  5. Frequency standards requirements of the NASA deep space network to support outer planet missions

    NASA Technical Reports Server (NTRS)

    Fliegel, H. F.; Chao, C. C.

    1974-01-01

    Navigation of Mariner spacecraft to Jupiter and beyond will require greater accuracy of position determination than heretofore obtained if the full experimental capabilities of this type of spacecraft are to be utilized. Advanced navigational techniques that will be available by 1977 include Very Long Baseline Interferometry (VLBI), three-way Doppler tracking (sometimes called quasi-VLBI), and two-way Doppler tracking. It is shown that the VLBI and quasi-VLBI methods depend on the same basic concept and impose nearly the same requirements on the stability of the frequency standards at the tracking stations. It is also shown how realistic modelling of spacecraft navigational errors prevents overspecifying the frequency stability requirements.

  6. The Master level optics laboratory at the Institute of Optics

    NASA Astrophysics Data System (ADS)

    Adamson, Per

    2017-08-01

    The master-level optics laboratory is a biannual, intensive laboratory course in the fields of geometrical, physical and modern optics. The course is intended for master's-level students, though Ph.D. advisors often recommend it to their advisees as well. Students are required to complete five standard laboratory experiments and an independent project during a semester. The goals of the laboratory experiments are for the students to gain hands-on experience setting up optical laboratory equipment, to collect and analyze data, and to communicate key results. The experimental methods, analysis, and results of the standard experiments are submitted in a journal-style report, while an oral presentation is given for the independent project.

  7. Progress toward a new measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Grammer, Kyle

    2015-10-01

    Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the neutron lifetimes measured in cold neutron beam experiments and in ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with an uncertainty of 1 s. The technical improvements, recent apparatus tests, and the path towards the new measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.

  8. Progress toward a new measurement of the neutron lifetime

    NASA Astrophysics Data System (ADS)

    Grammer, Kyle

    2015-04-01

    Free neutron decay is the simplest nuclear beta decay. A precise value for the neutron lifetime is valuable for standard model consistency tests and Big Bang Nucleosynthesis models. There is a disagreement between the neutron lifetimes measured in cold neutron beam experiments and in ultracold neutron storage experiments. A new measurement of the neutron lifetime using the beam method is planned at the National Institute of Standards and Technology Center for Neutron Research. Experimental improvements should result in a measurement of the neutron lifetime with an uncertainty of 1 s. The technical improvements and the path towards the new measurement will be discussed. This work is supported by the DOE Office of Science, NIST, and NSF.

  9. “Retention Projection” Enables Reliable Use of Shared Gas Chromatographic Retention Data Across Labs, Instruments, and Methods

    PubMed Central

    Barnes, Brian B.; Wilson, Michael B.; Carr, Peter W.; Vitha, Mark F.; Broeckling, Corey D.; Heuberger, Adam L.; Prenni, Jessica; Janis, Gregory C.; Corcoran, Henry; Snow, Nicholas H.; Chopra, Shilpi; Dhandapani, Ramkumar; Tawfall, Amanda; Sumner, Lloyd W.; Boswell, Paul G.

    2014-01-01

    Gas chromatography-mass spectrometry (GC-MS) is a primary tool used to identify compounds in complex samples. Both mass spectra and GC retention times are matched to those of standards, but it is often impractical to have standards on hand for every compound of interest, so we must rely on shared databases of MS data and GC retention information. Unfortunately, retention databases (e.g. linear retention index libraries) are experimentally restrictive, notoriously unreliable, and strongly instrument dependent, relegating GC retention information to a minor, often negligible role in compound identification despite its potential power. A new methodology called “retention projection” has great potential to overcome the limitations of shared chromatographic databases. In this work, we tested the reliability of the methodology in five independent laboratories. We found that even when each lab ran nominally the same method, the methodology was 3-fold more accurate than retention indexing because it properly accounted for unintentional differences between the GC-MS systems. When the labs used different methods of their own choosing, retention projections were 4- to 165-fold more accurate. More importantly, the distribution of error in the retention projections was predictable across different methods and labs, thus enabling automatic calculation of retention time tolerance windows. Tolerance windows at 99% confidence were generally narrower than those widely used even when physical standards are on hand to measure their retention. With its high accuracy and reliability, the new retention projection methodology makes GC retention a reliable, precise tool for compound identification, even when standards are not available to the user. PMID:24205931
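    For contrast with retention projection, the conventional temperature-programmed linear retention index (the van den Dool-Kratz scheme underlying the retention-index libraries the abstract benchmarks against) can be computed by bracketing the analyte between n-alkanes. The alkane retention times below are hypothetical.

```python
def linear_retention_index(t_x, alkane_rts):
    """Van den Dool-Kratz linear retention index.

    t_x: analyte retention time.
    alkane_rts: dict mapping n-alkane carbon number -> retention time,
    measured under the same temperature program as the analyte.
    """
    carbons = sorted(alkane_rts)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = alkane_rts[n], alkane_rts[n1]
        if t_n <= t_x <= t_n1:
            # Linear interpolation between the bracketing alkanes.
            return 100 * (n + (t_x - t_n) / (t_n1 - t_n))
    raise ValueError("analyte retention time outside the alkane bracket")

# Hypothetical retention times (min) for C10-C12 n-alkanes:
print(round(linear_retention_index(7.8, {10: 6.0, 11: 7.2, 12: 8.4}), 1))  # → 1150.0
```

The instrument dependence criticized in the abstract arises because these interpolated values shift whenever the actual column temperature profile or hold-up time deviates from the nominal method, which is exactly what retention projection corrects for.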

  10. Analysis of exposure to electromagnetic fields in a healthcare environment: simulation and experimental study.

    PubMed

    de Miguel-Bilbao, Silvia; Martín, Miguel Angel; Del Pozo, Alejandro; Febles, Victor; Hernández, José A; de Aldecoa, José C Fernández; Ramos, Victoria

    2013-11-01

    Recent advances in wireless technologies have led to an increase in the wireless instrumentation present in healthcare centers. This paper presents an analytical method for characterizing electric field (E-field) exposure within these environments. The E-field levels of the different wireless communication systems have been measured on two floors of the Canary University Hospital Consortium (CUHC). The electromagnetic (EM) conditions detected in the experimental measurements have been estimated using the software EFC-400-Telecommunications (Narda Safety Test Solutions, Sandwiesenstrasse 7, 72793 Pfullingen, Germany). The experimental and simulated results are represented as 2D contour maps and compared with the recommended safety and exposure thresholds. The maximum value obtained is much lower than the 3 V/m established in the International Electrotechnical Commission standard for electromedical devices. Results show a high correlation in terms of the E-field cumulative distribution function (CDF) between the experimental and simulation results. In general, the CDFs of each pair of experimental and simulated samples follow a lognormal distribution with the same mean.

  11. Simulation of one-sided heating of boiler unit membrane-type water walls

    NASA Astrophysics Data System (ADS)

    Kurepin, M. P.; Serbinovskiy, M. Yu.

    2017-03-01

    This study presents the results of finite-element simulation of the temperature field and the stress-strain state of membrane-type gastight water walls of boiler units. Analytical and standard methods for calculating one-sided heating of fin-tube water walls by a radiative heat flux are analyzed. Methods and software for calculating the input data for the finite-element simulation, including the thermoelastic moments in welded panels that result from one-sided heating, are proposed. The method and software modules are used for water-wall simulation in ANSYS. The results of finite-element simulation of the temperature field, stress field, deformations and displacements of a membrane-type panel for the boiler furnace water wall are presented, together with the panel tube temperatures, stresses and deformations calculated by the known methods. The numerical simulations are compared with known experimental results on heating, and on bending by given moments, of membrane-type water walls, and the numerical results agree closely with the experimental data: the relative temperature difference does not exceed 1%, and the relative difference in the fin mutual turning angle caused by one-sided radiative heating does not exceed 8.5% for non-displaced fins and 7% for fins with displacement. The corresponding differences between the theoretical results and the finite-element simulation do not exceed 3% and 7.1%, respectively. The proposed method and software modules for simulating the temperature field and stress-strain state of water walls are thus verified, and their feasibility for practical design is demonstrated.

  12. Nuclear instrumentation in VENUS-F

    NASA Astrophysics Data System (ADS)

    Wagemans, J.; Borms, L.; Kochetkov, A.; Krása, A.; Van Grieken, C.; Vittiglio, G.

    2018-01-01

    VENUS-F is a fast zero power reactor with 30 wt% U fuel and Pb/Bi as a coolant simulator. Depending on the experimental configuration, various neutron spectra (fast, epithermal, and thermal islands) are present. This paper gives a review of the nuclear instrumentation that is applied for reactor control and in a large variety of physics experiments. Activation foils and fission chambers are used to measure spatial neutron flux profiles, spectrum indices, reactivity effects (with positive period and compensation method or the MSM method) and kinetic parameters (with the Rossi-alpha method). Fission chamber calibrations are performed in the standard irradiation fields of the BR1 reactor (prompt fission neutron spectrum and Maxwellian thermal neutron spectrum).

  13. Joint Research on Scatterometry and AFM Wafer Metrology

    NASA Astrophysics Data System (ADS)

    Bodermann, Bernd; Buhr, Egbert; Danzebrink, Hans-Ulrich; Bär, Markus; Scholze, Frank; Krumrey, Michael; Wurm, Matthias; Klapetek, Petr; Hansen, Poul-Erik; Korpelainen, Virpi; van Veghel, Marijn; Yacoot, Andrew; Siitonen, Samuli; El Gawhary, Omar; Burger, Sven; Saastamoinen, Toni

    2011-11-01

    Supported by the European Commission and EURAMET, a consortium of 10 participants from national metrology institutes, universities and companies has started a joint research project aimed at overcoming current challenges in optical scatterometry for traceable linewidth metrology. Both experimental and modelling methods will be enhanced, and the different methods will be compared with each other and with specially adapted atomic force microscopy (AFM) and scanning electron microscopy (SEM) measurement systems in measurement comparisons. Additionally, novel methods for sophisticated data analysis will be developed and investigated to significantly reduce the measurement uncertainties in critical dimension (CD) metrology. One final goal is the realisation of a wafer-based reference standard material for the calibration of scatterometers.

  14. Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere.

    PubMed

    Vidovic, Luka; Majaron, Boris

    2014-02-01

    Diffuse reflectance spectra (DRS) of biological samples are commonly measured using an integrating sphere (IS). To account for the incident light spectrum, measurement begins by placing a highly reflective white standard against the IS sample opening and collecting the reflected light. After replacing the white standard with the test sample of interest, DRS of the latter is determined as the ratio of the two values at each involved wavelength. However, such a substitution may alter the fluence rate inside the IS. This leads to distortion of measured DRS, which is known as single-beam substitution error (SBSE). Barring the use of more complex experimental setups, the literature states that only approximate corrections of the SBSE are possible, e.g., by using look-up tables generated with calibrated low-reflectivity standards. We present a practical method for elimination of SBSE when using IS equipped with an additional reference port. Two additional measurements performed at this port enable a rigorous elimination of SBSE. Our experimental characterization of SBSE is replicated by theoretical derivation. This offers an alternative possibility of computational removal of SBSE based on advance characterization of a specific DRS setup. The influence of SBSE on quantitative analysis of DRS is illustrated in one application example.

  15. Geobiochemistry of metabolism: Standard state thermodynamic properties of the citric acid cycle

    NASA Astrophysics Data System (ADS)

    Canovas, Peter A.; Shock, Everett L.

    2016-12-01

    Integrating microbial metabolism into geochemical modeling allows assessments of energy and mass transfer between the geosphere and the microbial biosphere. Energy and power supplies and demands can be assessed from analytical geochemical data given thermodynamic data for compounds involved in catabolism and anabolism. Results are reported here from a critique of the available standard state thermodynamic data for organic acids and acid anions involved in the citric acid cycle (also known as the tricarboxylic acid cycle or the Krebs cycle). The development of methods for estimating standard state data unavailable from experiments is described, together with methods to predict corresponding values at elevated temperatures and pressures using the revised Helgeson-Kirkham-Flowers (HKF) equation of state for aqueous species. Internal consistency is maintained with standard state thermodynamic data for organic and inorganic aqueous species commonly used in geochemical modeling efforts. Standard state data and revised-HKF parameters are used to predict equilibrium dissociation constants for the organic acids in the citric acid cycle, and to assess standard Gibbs energies of reactions for each step in the cycle at elevated temperatures and pressures. The results presented here can be used with analytical data from natural and experimental systems to assess the energy and power demands of microorganisms throughout the habitable ranges of pressure and temperature, and to assess the consequences of abiotic organic compound alteration processes at conditions of subsurface aquifers, sedimentary basins, hydrothermal systems, meteorite parent bodies, and ocean worlds throughout the solar system.
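    The final step named above, turning standard Gibbs energies into equilibrium (dissociation) constants, follows from the relation ΔG°ᵣ = −RT ln K, which holds at any temperature and pressure once ΔG°ᵣ at those conditions is known (e.g. from a revised-HKF extrapolation). The ΔG° value below is a hypothetical round number of the magnitude typical for a weak organic acid, not a value from the paper.

```python
import math

R = 8.314462618  # gas constant, J mol^-1 K^-1

def log10_K(delta_G_joules, T_kelvin):
    """Equilibrium constant (as log10 K) from the standard Gibbs energy
    of reaction at the same T and P, via Delta_G_r = -RT ln K."""
    return -delta_G_joules / (math.log(10.0) * R * T_kelvin)

# Hypothetical dissociation with Delta_G = +27.1 kJ/mol at 25 C;
# pKa = -log10 K comes out near the range typical of carboxylic acids.
print(round(-log10_K(27_100.0, 298.15), 2))  # → 4.75
```

Evaluating the same expression with ΔG°ᵣ(T, P) from the equation of state gives the elevated-temperature and -pressure dissociation constants the abstract describes.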

  16. Generating partially correlated noise--a comparison of methods.

    PubMed

    Hartmann, William M; Cho, Yun Jin

    2011-07-01

    There are three standard methods for generating two channels of partially correlated noise: the two-generator method, the three-generator method, and the symmetric-generator method. These methods allow an experimenter to specify a target cross correlation between the two channels, but actual generated noises show statistical variability around the target value. Numerical experiments were done to compare the variability for those methods as a function of the number of degrees of freedom. The results of the experiments quantify the stimulus uncertainty in diverse binaural psychoacoustical experiments: incoherence detection, perceived auditory source width, envelopment, noise localization/lateralization, and the masking level difference. The numerical experiments found that when the elemental generators have unequal powers, the different methods all have similar variability. When the powers are constrained to be equal, the symmetric-generator method has much smaller variability than the other two. © 2011 Acoustical Society of America
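    The two-generator method named above can be sketched as follows: one channel is a Gaussian noise, the other mixes that noise with an independent one so the ensemble cross-correlation equals the target ρ, while the sample correlation of any finite realization scatters around it. The target value and sample sizes below are arbitrary illustrations, not the paper's conditions.

```python
import numpy as np

def two_generator(rho, n, rng):
    """Two channels of partially correlated Gaussian noise (two-generator
    method): y1 = x1, y2 = rho*x1 + sqrt(1 - rho^2)*x2, with x1, x2
    independent, so the ensemble correlation of (y1, y2) is rho."""
    x1 = rng.standard_normal(n)
    x2 = rng.standard_normal(n)
    y1 = x1
    y2 = rho * x1 + np.sqrt(1.0 - rho**2) * x2
    return y1, y2

rng = np.random.default_rng(0)
# Empirical scatter of the sample correlation around the target 0.6:
rhos = [np.corrcoef(*two_generator(0.6, 2048, rng))[0, 1] for _ in range(500)]
print(round(float(np.mean(rhos)), 2), round(float(np.std(rhos)), 3))
```

The standard deviation printed here is the kind of stimulus uncertainty the numerical experiments in the paper quantify; it shrinks as the number of degrees of freedom grows.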

  17. Rigorous evaluation of chemical measurement uncertainty: liquid chromatographic analysis methods using detector response factor calibration

    NASA Astrophysics Data System (ADS)

    Toman, Blaza; Nelson, Michael A.; Bedner, Mary

    2017-06-01

    Chemical measurement methods are designed to promote accurate knowledge of a measurand or system. As such, these methods often allow elicitation of latent sources of variability and correlation in experimental data. They typically implement measurement equations that support quantification of effects associated with calibration standards and other known or observed parametric variables. Additionally, multiple samples and calibrants are usually analyzed to assess the accuracy of the measurement procedure and repeatability by the analyst. Thus, a realistic assessment of uncertainty for most chemical measurement methods is not purely bottom-up (based on the measurement equation) or top-down (based on the experimental design), but inherently contains elements of both. Confidence in results must be rigorously evaluated for the sources of variability in all of the bottom-up and top-down elements. This type of analysis presents unique challenges due to various statistical correlations among the outputs of measurement equations. One approach is to use a Bayesian hierarchical (BH) model, which is intrinsically rigorous and thus a straightforward method for use with complex experimental designs, particularly when correlations among data are numerous and difficult to elucidate or explicitly quantify. In simpler cases, careful analysis using GUM Supplement 1 (MC) methods augmented with random-effects meta-analysis yields results similar to a full BH model analysis. In this article we describe both approaches to rigorous uncertainty evaluation, using as examples measurements of 25-hydroxyvitamin D3 in solution reference materials via liquid chromatography with UV absorbance detection (LC-UV) and liquid chromatography with mass spectrometric detection using isotope dilution (LC-IDMS).
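    The GUM Supplement 1 (Monte Carlo) approach mentioned above can be sketched for a minimal, hypothetical response-factor measurement equation: input quantities are sampled from their assigned distributions and pushed through the equation, and the output sample gives the estimate and its standard uncertainty. All input values and uncertainties below are invented, and none of the correlations or hierarchical effects the article treats are modelled.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000  # Monte Carlo trials

# Hypothetical measurement equation for a response-factor calibration:
#   c_sample = (A_sample / A_standard) * c_standard
A_std = rng.normal(1.000, 0.004, N)   # standard's detector response (a.u.)
A_smp = rng.normal(0.520, 0.003, N)   # sample's detector response (a.u.)
c_std = rng.normal(10.00, 0.02, N)    # standard's concentration (ug/g)

c_smp = (A_smp / A_std) * c_std       # propagated output distribution
print(round(float(c_smp.mean()), 2), round(float(c_smp.std()), 3))
```

The output mean is the measured concentration and the output standard deviation is its standard uncertainty; a coverage interval can be read directly from the empirical quantiles of `c_smp`, which is the main practical advantage of the MC route over first-order propagation.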

  18. Establishment of a bioassay for the toxicity evaluation and quality control of Aconitum herbs.

    PubMed

    Qin, Yi; Wang, Jia-bo; Zhao, Yan-ling; Shan, Li-mei; Li, Bao-cai; Fang, Fang; Jin, Cheng; Xiao, Xiao-he

    2012-01-15

    Currently, no bioassay is available for evaluating the toxicity of Aconitum herbs, which are well known for their lethal cardiotoxicity and neurotoxicity. In this study, we established a bioassay to evaluate the toxicity of Aconitum herbs. Test sample and standard solutions were administered to rats by intravenous infusion to determine their minimum lethal doses (MLD). Toxic potency was calculated by comparing the MLD. The experimental conditions of the method were optimized and standardized to ensure the precision and reliability of the bioassay. The application of the standardized bioassay was then tested by analyzing 18 samples of Aconitum herbs. Additionally, three major toxic alkaloids (aconitine, mesaconitine, and hypaconitine) in Aconitum herbs were analyzed using a liquid chromatographic method, which is the current method of choice for evaluating the toxicity of Aconitum herbs. We found that for all Aconitum herbs, the total toxicity of the extract was greater than the toxicity of the three alkaloids. Therefore, these three alkaloids failed to account for the total toxicity of Aconitum herbs. Compared with individual chemical analysis methods, the chief advantage of the bioassay is that it characterizes the total toxicity of Aconitum herbs. An incorrect toxicity evaluation caused by quantitative analysis of the three alkaloids might be effectively avoided by performing this bioassay. This study revealed that the bioassay is a powerful method for the safety assessment of Aconitum herbs. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. Kinematics of the Normal Knee during Dynamic Activities: A Synthesis of Data from Intracortical Pins and Biplane Imaging

    PubMed Central

    Gasparutto, Xavier; Moissenet, Florent; Lafon, Yoann

    2017-01-01

    Few studies have provided in vivo tibiofemoral kinematics of the normal knee during dynamic weight-bearing activities. Indeed, gold standard measurement methods (i.e., intracortical pins and biplane imaging) raise ethical and experimental issues. Moreover, the conventions used for the processing of the kinematics show large inconsistencies. This study aims at synthesising the tibiofemoral kinematics measured with gold standard measurement methods. Published kinematic data were transformed in the standard recommended by the International Society of Biomechanics (ISB), and a clustering method was applied to investigate whether the couplings between the degrees of freedom (DoFs) are consistent among the different activities and measurement methods. The synthesised couplings between the DoFs during knee flexion (from 4° of extension to −61° of flexion) included abduction (up to −10°); internal rotation (up to 15°); and medial (up to 10 mm), anterior (up to 25 mm), and proximal (up to 28 mm) displacements. These synthesised couplings appeared mainly partitioned into two clusters that featured all the dynamic weight-bearing activities and all the measurement methods. Thus, the effect of the dynamic activities on the couplings between the tibiofemoral DoFs appeared to be limited. The synthesised data might be used as a reference of normal in vivo knee kinematics for prosthetic and orthotic design and for knee biomechanical model development and validation. PMID:28487620

  20. A New Test Unit for Disintegration End-Point Determination of Orodispersible Films.

    PubMed

    Low, Ariana; Kok, Si Ling; Khong, Yuet Mei; Chan, Sui Yung; Gokhale, Rajeev

    2015-11-01

    No standard time or pharmacopoeia disintegration test method for orodispersible films (ODFs) exists. The USP disintegration test for tablets and capsules poses significant challenges for end-point determination when used for ODFs. We tested a newly developed disintegration test unit (DTU) against the USP disintegration test. The DTU is an accessory to the USP disintegration apparatus. It holds the ODF in a horizontal position, allowing a top view of the ODF during testing. A Gauge R&R study was conducted to apportion the total variability among operator, sample and experimental set-up. Precision was compared using commercial ODF products in different media, and the agreement between the two measurement methods was analysed. The DTU showed improved repeatability and reproducibility compared to the USP disintegration system, with tighter standard deviations regardless of operator or medium. There is good agreement between the two methods, with the USP disintegration test giving generally longer disintegration times, possibly due to difficulty in end-point determination. The DTU provided clear end-point determination and is suitable for quality control of ODFs during the product development stage or manufacturing. This may facilitate the development of a standardized methodology for disintegration time determination of ODFs. © 2015 Wiley Periodicals, Inc. and the American Pharmacists Association.

  1. Combination Studies of Oreganum Vulgare Extract Fractions and Volatile Oil along with Ciprofloxacin and Fluconazole against Common Fish Pathogens

    PubMed Central

    Bharti, Veni; Vasudeva, Neeru; Dhuhan, Joginder Singh

    2013-01-01

    Purpose: The study is aimed at finding new antibiotic therapy for aquaculture, owing to the potential of bacteria to develop resistance to existing therapies. The use of large quantities of synthetic antibiotics in aquaculture has the potential to be detrimental to fish health, to the environment and wildlife, and to human health. Methods: The antimicrobial potential of the volatile oil and of fractions of the chloroform extract of Oreganum vulgare was evaluated alone and in the presence of standard antimicrobials against common fish pathogens by disc-diffusion, agar well assay and a two-fold microdilution method with nanodrop spectrophotometric readout. Results: The best results were obtained with the volatile oil, followed by the phenolic fraction, in the disc-diffusion, agar well and microdilution (minimum inhibitory concentration) assays. The interaction studies showed that the volatile oil and the phenolic fraction were able to inhibit the pathogens at very low concentrations compared to the standard drugs. The fractional inhibitory concentration index (FICI) was calculated, and the volatile oil and phenolic fraction were found to be synergistic against Pseudomonas fluorescens and Candida albicans. Conclusion: The experimental data suggest the use of the volatile oil and phenolic fraction in combination with standard antimicrobials to maintain healthy aquaculture with fewer adverse effects than synthetic antibiotic therapy. PMID:24312842
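    The FICI named above is computed from checkerboard MICs as the sum of each agent's MIC in combination divided by its MIC alone, with the commonly used breakpoints of synergy at ≤ 0.5 and antagonism at > 4.0. The MIC values below are hypothetical, not the study's measurements.

```python
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    """Fractional inhibitory concentration index for a two-agent checkerboard."""
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpretation(index):
    # Common breakpoints: synergy <= 0.5, antagonism > 4.0, indifference between.
    if index <= 0.5:
        return "synergy"
    if index > 4.0:
        return "antagonism"
    return "indifference"

# Hypothetical MICs (mg/mL): each agent's MIC drops 4-fold in combination.
idx = fici(1.0, 2.0, 0.25, 0.5)
print(idx, interpretation(idx))  # → 0.5 synergy
```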

  2. Screening of Bauhinia purpurea Linn. for analgesic and anti-inflammatory activities

    PubMed Central

    Shreedhara, C.S.; Vaidya, V.P.; Vagdevi, H.M.; Latha, K.P.; Muralikrishna, K.S.; Krupanidhi, A.M.

    2009-01-01

    Objectives: Ethanol extract of the stem of Bauhinia purpurea Linn. was evaluated for analgesic and anti-inflammatory activities in animal models. Materials and Methods: Albino Wistar rats and mice were used as the experimental animals. Analgesic activity (determined by Eddy's hot plate method and the acetic acid writhing method) and anti-inflammatory activity (determined by carrageenan-induced paw edema, measured with a plethysmometer in albino rats) were assessed following intraperitoneal administration of the ethanol extract of Bauhinia purpurea Linn. (BP) at dose levels of 50 mg/kg and 100 mg/kg. Results: The analgesic and anti-inflammatory activities of the ethanol extract of BP were significant (P < 0.001). The maximum analgesic effect was observed at 120 min at the dose of 100 mg/kg (i.p.) and was comparable to that of the standard analgin (150 mg/kg); the edema inhibition was 46.4% and 77% for 50 mg/kg and 100 mg/kg (i.p.), respectively. Anti-inflammatory activity was compared with the standard diclofenac sodium (5 mg/kg). Conclusion: The ethanol extract of Bauhinia purpurea showed significant analgesic and anti-inflammatory activities at the dose of 100 mg/kg, comparable with the corresponding standard drugs. The activity was attributed to the phytoconstituents present in the tested extract. PMID:20336222

  3. Surgical hand preparation with chlorhexidine soap or povidone iodine: new methods to increase immediate and residual effectiveness, and provide a safe alternative to alcohol solutions.

    PubMed

    Herruzo, R; Vizcaino, M J; Yela, R

    2018-04-01

    Surgical use of 4% chlorhexidine soap (CHX-4) and 10% povidone iodine (PVP-I-10) does not meet the standards defined by EN 12791. To investigate the possibility of increasing the immediate and residual effects of these antiseptics. Over three consecutive weeks, n-propanol, standard CHX-4 and PVP-I-10 were tested in two experimental groups of volunteers. The new method for applying the antiseptic substances involved standard hand rub and rinse of CHX-4 or PVP-I-10, followed by application of an aqueous solution based on 5% chlorhexidine or PVP-I-10 with no further rinsing of the hands prior to donning gloves. Samples were taken to assess immediate and residual effects, analysing the logarithmic reduction of colony-forming units. At t=0 h, n-propanol was superior in bactericidal effect to standard CHX-4 (P<0.05), but the new chlorhexidine protocol was superior to both standard CHX-4 (P<0.01) and n-propanol (P<0.05); the same effect was observed at t=3 h (residual effect). At t=0 h, n-propanol was significantly superior to standard PVP-I-10, but the new PVP-I-10 protocol was superior, although not significantly, to n-propanol. There was no significant residual effect at t=3 h. The new protocol for chlorhexidine application permits surgical hand preparation with chlorhexidine, as a safe alternative to alcohol solutions, because it meets the standards defined by EN 12791. Copyright © 2017 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  4. Impact of animal waste application on runoff water quality in field experimental plots.

    PubMed

    Hill, Dagne D; Owens, William E; Tchounwou, Paul B

    2005-08-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health is a major concern when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste have been evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms. Fecal coliforms isolated from these samples were identified to the species level. Chemical analysis was performed following standard test protocols. An analysis of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels was performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers present after the simulated rainfall events were above 200/100 ml of sample water. It can be concluded that: 1) non-point source pollution has a significant effect on bacterial and nutrients levels in runoff water and in water resources; 2) land application of animal waste for soil fertilization makes a significant contribution to water pollution; 3) the use of tilling can significantly reduce the amount of nutrients available in runoff water.

  5. Impact of Animal Waste Application on Runoff Water Quality in Field Experimental Plots

    PubMed Central

    Hill, Dagne D.; Owens, William E.; Tchounwou, Paul B.

    2005-01-01

    Animal waste from dairy and poultry operations is an economical and commonly used fertilizer in the state of Louisiana. The application of animal waste to pasture lands not only is a source of fertilizer, but also allows for a convenient method of waste disposal. The disposal of animal wastes on land is a potential nonpoint source of water degradation. Water degradation and human health is a major concern when considering the disposal of large quantities of animal waste. The objective of this research was to determine the effect of animal waste application on biological (fecal coliform, Enterobacter spp. and Escherichia coli) and physical/chemical (temperature, pH, nitrate nitrogen, ammonia nitrogen, phosphate, copper, zinc, and sulfate) characteristics of runoff water in experimental plots. The effects of the application of animal waste have been evaluated by utilizing experimental plots and simulated rainfall events. Samples of runoff water were collected and analyzed for fecal coliforms. Fecal coliforms isolated from these samples were identified to the species level. Chemical analysis was performed following standard test protocols. An analysis of temperature, ammonia nitrogen, nitrate nitrogen, iron, copper, phosphate, potassium, sulfate, zinc and bacterial levels was performed following standard test protocols as presented in Standard Methods for the Examination of Water and Wastewater [1]. In the experimental plots, less time was required in the tilled broiler litter plots for the measured chemicals to decrease below the initial pre-treatment levels. A decrease of over 50% was noted between the first and second rainfall events for sulfate levels. This decrease was seen after only four simulated rainfall events in tilled broiler litter plots whereas broiler litter plots required eight simulated rainfall events to show this same type of reduction. A reverse trend was seen in the broiler litter plots and the tilled broiler plots for potassium. Bacteria numbers present after the simulated rainfall events were above 200/100 ml of sample water. It can be concluded that: 1) non-point source pollution has a significant effect on bacterial and nutrients levels in runoff water and in water resources; 2) land application of animal waste for soil fertilization makes a significant contribution to water pollution; 3) the use of tilling can significantly reduce the amount of nutrients available in runoff water. PMID:16705834

  6. The Effect of Self-Care Education on Emotional Intelligence of Iranian Nursing Students: A Quasi-experimental Study.

    PubMed

    Goudarzian, Amir Hossein; Nesami, Masoumeh Bagheri; Sedghi, Parisa; Gholami, Mahsan; Faraji, Maryam; Hatkehlouei, Mahdi Babaei

    2018-01-20

    This study aimed to determine the effect of self-care training on the emotional intelligence of nursing students. This quasi-experimental study was conducted on nursing students of Mazandaran University of Medical Sciences in 2016. The subjects (60 students), recruited by random sampling, were divided into experimental and control groups; self-care behaviors were then taught to the experimental group's students in 12 sessions using a checklist, while the control group received no training. Emotional intelligence was measured using Bradberry and Greaves' standard questionnaire before and after the intervention. Emotional intelligence scores of students in the experimental group showed a positive and significant change from before (75.33 ± 7.23) to after (125.70 ± 7.79) training (P < 0.001). A t test also showed a significant difference between the control (78.73 ± 6.54) and experimental (125.70 ± 7.79) groups after training (P < 0.001). It is recommended that special programs be organized to improve the emotional intelligence of students and thereby the likelihood of their success in life.
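The between-group comparison reported above can be reproduced from the summary statistics alone. A minimal sketch computing Welch's t statistic from the reported means and standard deviations, assuming (the abstract does not state this) that the 60 subjects were split evenly, n = 30 per group:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from group means, standard deviations, and sizes."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)  # standard error of the difference
    return (m1 - m2) / se

# Post-training scores from the abstract; n = 30 per group is an assumption.
t = welch_t(125.70, 7.79, 30, 78.73, 6.54, 30)  # very large t, hence P < 0.001
```

A t statistic this large (about 25) corresponds to a vanishingly small P value, consistent with the reported P < 0.001.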

  7. [Comparison of surface light scattering of acrylic intraocular lenses made by lathe-cutting and cast-molding methods--long-term observation and experimental study].

    PubMed

    Nishihara, Hitoshi; Ayaki, Masahiko; Watanabe, Tomiko; Ohnishi, Takeo; Kageyama, Toshiyuki; Yaguchi, Shigeo

    2004-03-01

    To compare the long-term clinical and experimental results of soft acrylic intraocular lenses (IOLs) manufactured by the lathe-cut (LC) method and by the cast-molding (CM) method. This was a retrospective study of 20 patients (22 eyes) examined in a 5- and 7-year follow-up study. Sixteen eyes were implanted with polyacrylic IOLs manufactured by the LC method and 6 eyes with polyacrylic IOLs manufactured by the CM method. Postoperative measurements included best corrected visual acuity, contrast sensitivity, biomicroscopic examination, and Scheimpflug slit-lamp images to evaluate surface light scattering. Scanning electron microscopy and three-dimensional surface analysis were also conducted. At 7 years, the mean visual acuity was 1.08 +/- 0.24 (mean +/- standard deviation) in the LC group and 1.22 +/- 0.27 in the CM group. Surface light scatter was 12.0 +/- 4.0 computer compatible tape (CCT) units in the LC group and 37.4 +/- 5.4 CCT units in the CM group. Mean surface roughness was 0.70 +/- 0.07 nm in the LC group and 6.16 +/- 0.97 nm in the CM group. Acrylic IOLs manufactured by the LC method are more stable in long-term use.

  8. Experimental and Finite Element Modeling of Near-Threshold Fatigue Crack Growth for the K-Decreasing Test Method

    NASA Technical Reports Server (NTRS)

    Smith, Stephen W.; Seshadri, Banavara R.; Newman, John A.

    2015-01-01

    The experimental methods to determine near-threshold fatigue crack growth rate data are prescribed in ASTM standard E647. To produce near-threshold data at a constant stress ratio (R), the applied stress-intensity factor (K) is decreased as the crack grows based on a specified K-gradient. Consequently, as the fatigue crack growth rate threshold is approached and the crack tip opening displacement decreases, remote crack wake contact may occur due to the plastically deformed crack wake surfaces, shielding the growing crack tip and resulting in a reduced crack tip driving force and non-representative crack growth rate data. If such data are used to life a component (that is, to predict its fatigue life), the evaluation could yield highly non-conservative predictions. Although this anomalous behavior has been shown to be affected by K-gradient, starting K level, residual stresses, environmentally assisted cracking, specimen geometry, and material type, the specifications within the standard to avoid this effect are limited to a maximum fatigue crack growth rate and a suggestion for the K-gradient value. This paper provides parallel experimental and computational simulations of the K-decreasing method for two materials (an aluminum alloy, AA 2024-T3, and a titanium alloy, Ti 6-2-2-2-2) to aid in establishing a clear understanding of appropriate testing requirements. These simulations investigate the effects of K-gradient, the maximum applied stress-intensity factor, and material type. A material-independent term is developed to guide the selection of appropriate test conditions for most engineering alloys. With such a term, near-threshold fatigue crack growth rate tests can be performed at accelerated rates, and near-threshold data can be acquired in days instead of weeks, without having to establish testing criteria through trial and error, for most engineering materials, even those produced in relatively small product forms.
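The K-decreasing schedule referred to above is, in the usual ASTM E647 formulation, an exponential decay of K with crack length at a constant normalized K-gradient C = (1/K)(dK/da). A minimal sketch with illustrative values (the numbers are not from the paper):

```python
import math

def applied_k(k0, c, a, a0):
    """Stress-intensity factor under an exponential K-decreasing schedule.

    k0    : starting stress-intensity factor (MPa*sqrt(m))
    c     : normalized K-gradient (1/K)(dK/da), negative for K-decreasing (1/mm)
    a, a0 : current and initial crack lengths (mm)
    """
    return k0 * math.exp(c * (a - a0))

# Example: the often-cited gradient of C = -0.08 /mm, after 5 mm of crack growth.
k = applied_k(k0=10.0, c=-0.08, a=15.0, a0=10.0)
```

The more negative C is, the faster K drops per unit of crack extension, which is precisely the parameter the standard only loosely constrains.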

  9. An Interlaboratory Evaluation of Drift Tube Ion Mobility–Mass Spectrometry Collision Cross Section Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stow, Sarah M.; Causon, Tim J.; Zheng, Xueyun

    Collision cross section (CCS) measurements resulting from ion mobility-mass spectrometry (IM-MS) experiments provide a promising orthogonal dimension of structural information in MS-based analytical separations. As with any molecular identifier, interlaboratory standardization must precede broad-range integration into analytical workflows. In this study, we present a reference drift tube ion mobility mass spectrometer (DTIM-MS) where improvements in the measurement accuracy of experimental parameters influencing IM separations provide standardized drift tube, nitrogen CCS values (DTCCSN2) for over 120 unique ion species with the lowest measurement uncertainty to date. The reproducibility of these DTCCSN2 values is evaluated across three additional laboratories on a commercially available DTIM-MS instrument. The traditional stepped field CCS method performs with a relative standard deviation (RSD) of 0.29% for all ion species across the three additional laboratories. The calibrated single field CCS method, which is compatible with a wide range of chromatographic inlet systems, performs with an average absolute bias of 0.54% relative to the standardized stepped field DTCCSN2 values on the reference system. The low RSD and biases observed in this interlaboratory study illustrate the potential of DTIM-MS for providing a molecular identifier for a broad range of discovery-based analyses.
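The stepped field method mentioned above determines mobility by measuring drift time at several drift voltages and fitting the line t_d = L^2/(K V) + t0, whose slope against 1/V is L^2/K. A simplified sketch of that fitting step (it omits the pressure/temperature correction to reduced mobility and the Mason-Schamp conversion to CCS that a real workflow includes; the function name and numbers are illustrative):

```python
def stepped_field_mobility(drift_times, voltages, length):
    """Extract mobility K (m^2/(V s)) from a stepped-field experiment by a
    least-squares fit of t_d = L**2/(K*V) + t0, a line in x = 1/V whose
    slope is L**2/K (the intercept t0 absorbs time spent outside the tube)."""
    xs = [1.0 / v for v in voltages]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(drift_times) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, drift_times))
             / sum((x - mx) ** 2 for x in xs))
    return length ** 2 / slope

# Synthetic check: generate drift times for K = 1e-3 m^2/(V s), L = 0.8 m,
# t0 = 0.1 ms, then recover K from the fit.
voltages = [500.0, 750.0, 1000.0, 1250.0, 1500.0]
times = [0.8 ** 2 / (1.0e-3 * v) + 1.0e-4 for v in voltages]
k_fit = stepped_field_mobility(times, voltages, 0.8)
```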

  10. One-calibrant kinetic calibration for on-site water sampling with solid-phase microextraction.

    PubMed

    Ouyang, Gangfeng; Cui, Shufen; Qin, Zhipei; Pawliszyn, Janusz

    2009-07-15

    The existing solid-phase microextraction (SPME) kinetic calibration technique, which uses the desorption of preloaded standards to calibrate the extraction of the analytes, requires that the physicochemical properties of the standard be similar to those of the analyte, which limits the application of the technique. In this study, a new method, termed the one-calibrant kinetic calibration technique, which can use the desorption of a single standard to calibrate all extracted analytes, was proposed. The theoretical considerations were validated by passive water sampling in the laboratory and rapid water sampling in the field. To mimic environmental variability, such as temperature, turbulence, and analyte concentration, the flow-through system for generating standard aqueous polycyclic aromatic hydrocarbon (PAH) solutions was modified. The experimental results of passive sampling in the flow-through system showed that the effect of the environmental variables was successfully compensated by the kinetic calibration technique, and that all extracted analytes could be calibrated through the desorption of a single calibrant. On-site water sampling with rotated SPME fibers also demonstrated the feasibility of the new technique for rapid on-site sampling of hydrophobic organic pollutants in water. This technique will accelerate the application of the kinetic calibration method and will also be useful for other microextraction techniques.
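The kinetic calibration relation underlying the technique is the symmetry between calibrant desorption and analyte uptake: the fraction of the preloaded standard that has desorbed equals the fraction of the analyte's equilibrium amount that has been extracted, n/n_e = 1 - q/q0. A minimal sketch of the resulting concentration estimate (the parameter values are illustrative, not from the paper):

```python
def kinetic_calibration(n_extracted, q_remaining, q0, k_fs, v_f):
    """One-calibrant kinetic calibration sketch.

    Assumes the symmetry n/n_e = 1 - q/q0 between analyte uptake and
    calibrant desorption during the same exposure.

    n_extracted      : analyte amount on the fiber after sampling (ng)
    q_remaining, q0  : calibrant left on the fiber / preloaded amount (ng)
    k_fs             : fiber/sample distribution constant of the analyte
    v_f              : fiber coating volume (mL)
    Returns the estimated water concentration (ng/mL).
    """
    equilibration_fraction = 1.0 - q_remaining / q0  # = n/n_e by symmetry
    n_equilibrium = n_extracted / equilibration_fraction
    return n_equilibrium / (k_fs * v_f)  # C = n_e / (K_fs * V_f)

c = kinetic_calibration(n_extracted=2.0, q_remaining=6.0, q0=10.0,
                        k_fs=5000.0, v_f=1.0e-4)
```

Because the calibrant's desorption tracks the same mass-transfer conditions (temperature, turbulence) as the analyte's uptake, environmental variability cancels out of the estimate, which is the point the abstract makes.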

  11. A Novel Multi-Camera Calibration Method based on Flat Refractive Geometry

    NASA Astrophysics Data System (ADS)

    Huang, S.; Feng, M. C.; Zheng, T. X.; Li, F.; Wang, J. Q.; Xiao, L. F.

    2018-03-01

    Multi-camera calibration plays an important role in many fields. In this paper, we present a novel multi-camera calibration method based on flat refractive geometry. All cameras acquire calibration images of a transparent glass calibration board (TGCB) at the same time. The use of a TGCB introduces refraction, which generates calibration error; the theory of flat refractive geometry is employed to eliminate this error. Moreover, the bundle adjustment method is used to minimize the reprojection error and obtain optimized calibration results. Finally, four-camera calibration results on real data show that the mean value and standard deviation of the reprojection error of our method are 4.3411e-05 and 0.4553 pixel, respectively. The experimental results show that the proposed method is accurate and reliable.
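The reprojection-error statistics reported above are simply the mean and standard deviation of the per-point pixel distances between observed image coordinates and the coordinates reprojected through the calibrated camera model. A minimal sketch with made-up points:

```python
import math

def reprojection_stats(observed, projected):
    """Mean and (population) standard deviation of per-point
    reprojection errors, in pixels."""
    errs = [math.hypot(u - up, v - vp)
            for (u, v), (up, vp) in zip(observed, projected)]
    mean = sum(errs) / len(errs)
    var = sum((e - mean) ** 2 for e in errs) / len(errs)
    return mean, math.sqrt(var)

# Hypothetical detected corners vs. their model reprojections.
obs = [(100.0, 200.0), (150.0, 250.0), (300.0, 120.0)]
proj = [(100.3, 200.4), (150.0, 250.5), (299.6, 120.3)]
mean_err, std_err = reprojection_stats(obs, proj)
```

Bundle adjustment minimizes the sum of squared values of exactly these per-point errors over all cameras and board poses simultaneously.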

  12. Acoustic Parametric Array for Identifying Standoff Targets

    NASA Astrophysics Data System (ADS)

    Hinders, M. K.; Rudd, K. E.

    2010-02-01

    An integrated simulation method for investigating nonlinear sound beams and 3D acoustic scattering from any combination of complicated objects is presented. A standard finite-difference simulation method is used to model pulsed nonlinear sound propagation from a source to a scattering target via the KZK equation. Then, a parallel 3D acoustic simulation method based on the finite integration technique is used to model the acoustic wave interaction with the target. Any combination of objects and material layers can be placed into the 3D simulation space to study the resulting interaction. Several example simulations are presented to demonstrate the simulation method and 3D visualization techniques. The combined simulation method is validated by comparing experimental and simulation data, and a demonstration of how it assisted in the development of a nonlinear acoustic concealed weapons detector is also presented.

  13. Effect of Simulation on Undergraduate Nursing Students' Knowledge of Nursing Ethics Principles.

    PubMed

    Donnelly, Mary Broderick; Horsley, Trisha Leann; Adams, William H; Gallagher, Peggy; Zibricky, C Dawn

    2017-12-01

    Background: Undergraduate nursing education standards include acquisition of knowledge of ethics principles, and the prevalence of health-care ethical dilemmas mandates that nursing students study ethics. However, little research has been published to support best practices for teaching and learning ethics principles. Purpose: This study sought to determine whether participation in an ethics consultation simulation increased nursing students' knowledge of nursing ethics principles compared to students who were taught ethics principles in the traditional didactic format. Methods: This quasi-experimental study utilized a pre-test/post-test design with randomized assignment of students at three universities into control and experimental groups. Results: Nursing students' knowledge of nursing ethics principles significantly improved from pre-test to post-test (p = .002); however, there was no significant difference between the experimental and control groups' knowledge scores (p = .13). Conclusion: Further research into the use of simulation to teach ethics principles is indicated.

  14. Experimental validation of 2D uncertainty quantification for DIC.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true in high-speed applications, where actual test images are often less than ideal. Work on the mathematical underpinnings of DIC uncertainty quantification has previously been completed and published; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.

  15. Experimental validation of 2D uncertainty quantification for digital image correlation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reu, Phillip L.

    Because digital image correlation (DIC) has become such an important and standard tool in the toolbox of experimental mechanicists, a complete uncertainty quantification of the method is needed. It should be remembered that each DIC setup and series of images will have a unique uncertainty based on the calibration quality and the image and speckle quality of the analyzed images. Any pretest work done with a calibrated DIC stereo-rig to quantify the errors using known shapes and translations, while useful, does not necessarily reveal the uncertainty of a later test. This is particularly true in high-speed applications, where actual test images are often less than ideal. Work on the mathematical underpinnings of DIC uncertainty quantification has previously been completed and published; this paper presents the corresponding experimental work used to check the validity of the uncertainty equations.

  16. Theoretical study of some experimentally relevant states of dysprosium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dzuba, V. A.; Flambaum, V. V.

    2010-05-15

    The configuration interaction method is used to calculate transition amplitudes and other properties of the low-lying states of dysprosium that are used in cooling and in the study of the time variation of the fine structure constant and the violation of fundamental symmetries. The branching ratio for the cooling state to decay to states other than the ground state is found to be smaller than 10^-4. The matrix element of the weak interaction between the degenerate states at E = 19797.96 cm^-1 is about 4 Hz, which is consistent with the experimental limit |H_W| = |2.3 ± 2.9(stat.) ± 0.7(syst.)| Hz [A. T. Nguyen, D. Budker, D. DeMille, and M. Zolotorev, Phys. Rev. A 56, 3453 (1997)] and points to the feasibility of its experimental measurement. Applications include the search for physics beyond the standard model using the parity nonconservation (PNC) isotopic chain approach.

  17. [Blended-learning in psychosomatics and psychotherapy - Increasing the satisfaction and knowledge of students with a web-based e-learning tool].

    PubMed

    Ferber, Julia; Schneider, Gudrun; Havlik, Linda; Heuft, Gereon; Friederichs, Hendrik; Schrewe, Franz-Bernhard; Schulz-Steinel, Andrea; Burgmer, Markus

    2014-01-01

    To improve the synergy of established teaching methods, the Department of Psychosomatics and Psychotherapy, University Hospital Münster, developed a web-based e-learning tool using video clips of standardized patients. The effect of this blended-learning approach was evaluated. A multiple-choice test was administered to a naive cohort (without the e-learning tool) and an experimental cohort (with the tool) of medical students to test the groups' expertise in psychosomatics. In addition, participants' satisfaction with the new tool was evaluated (numeric rating scale of 0-10). The experimental cohort was more satisfied with the curriculum and more interested in psychosomatics. Furthermore, the experimental cohort scored significantly better on the multiple-choice test. As a blended-learning approach, the new tool proved to be an important addition to the classical curriculum, improving students' satisfaction and knowledge in psychosomatics.

  18. Experimental demonstration of selective quantum process tomography on an NMR quantum information processor

    NASA Astrophysics Data System (ADS)

    Gaikwad, Akshay; Rehal, Diksha; Singh, Amandeep; Arvind, Dorai, Kavita

    2018-02-01

    We present the NMR implementation of a scheme for selective and efficient quantum process tomography without ancilla. We generalize this scheme such that it can be implemented efficiently using only a set of measurements involving product operators. The method allows us to estimate any element of the quantum process matrix to a desired precision, provided a set of quantum states can be prepared efficiently. Our modified technique requires fewer experimental resources as compared to the standard implementation of selective and efficient quantum process tomography, as it exploits the special nature of NMR measurements to allow us to compute specific elements of the process matrix by a restrictive set of subsystem measurements. To demonstrate the efficacy of our scheme, we experimentally tomograph the processes corresponding to "no operation," a controlled-NOT (CNOT), and a controlled-Hadamard gate on a two-qubit NMR quantum information processor, with high fidelities.

  19. Highly sensitive in-line microfluidic sensor based on microfiber-assisted Mach-Zehnder interferometer for glucose sensing

    NASA Astrophysics Data System (ADS)

    Xie, Nanjie; Zhang, Hao; Liu, Bo; Wu, Jixuan; Song, Binbin; Han, Tingting

    2017-11-01

    A highly sensitive microfluidic sensor based on a microfiber-assisted Mach-Zehnder interferometer (MAMZI) is proposed and experimentally demonstrated for the detection of low-concentration glucose solutions. A segment of microfiber tapered from standard single-mode fiber (SMF) is spliced between two SMFs with a pre-designed lateral offset to constitute the miniaturized MAMZI probe. The transmission spectral response to environmental refractive index variation has been experimentally investigated for glucose concentration ranges of 300 mg dL-1 to 3000 mg dL-1 and 0 to 270 mg dL-1, with a glucose concentration detection limit of 3 mg dL-1; the experimentally observed transmission spectral responses are in accordance with our theoretical simulation results. Owing to its high sensitivity, non-enzymatic operation, ease of fabrication, and compact size, the proposed MAMZI glucose sensor is anticipated to be employed in biomedical applications.
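A detection limit like the 3 mg dL-1 quoted above is commonly estimated as the smallest resolvable spectral shift divided by the sensor's concentration sensitivity (the slope of wavelength shift versus concentration). A sketch with illustrative numbers that are not taken from the paper:

```python
def detection_limit(sensitivity_nm_per_mgdl, resolution_nm):
    """LOD = smallest resolvable wavelength shift / concentration sensitivity.

    sensitivity_nm_per_mgdl : slope of the dip-wavelength shift vs. glucose
                              concentration calibration line (nm per mg/dL)
    resolution_nm           : spectral resolution of the interrogator (nm)
    Returns the limit of detection in mg/dL.
    """
    return resolution_nm / sensitivity_nm_per_mgdl

# Hypothetical values: a 0.02 nm spectrometer resolution and a sensitivity of
# 6.7e-3 nm per (mg/dL) yield an LOD of roughly 3 mg/dL.
lod = detection_limit(sensitivity_nm_per_mgdl=6.7e-3, resolution_nm=0.02)
```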

  20. Standard deviation and standard error of the mean.

    PubMed

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinction between the SD and SEM in the medical literature. Because the processes of calculating the SD and SEM involve different statistical inferences, each has its own meaning. SD is the dispersion of data in a normal distribution; in other words, SD indicates how accurately the mean represents the sample data. The meaning of SEM, however, involves statistical inference based on the sampling distribution: SEM is the SD of the theoretical distribution of sample means (the sampling distribution). While either SD or SEM can be used to describe data and statistical results, one should be aware of the appropriate use of each. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
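The two quantities discussed above differ only by a factor of the square root of the sample size: SEM = SD / sqrt(n). A minimal sketch:

```python
import math

def sd_and_sem(values):
    """Sample standard deviation (n-1 denominator) and the estimated
    standard error of the mean."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))
    sem = sd / math.sqrt(n)  # shrinks as n grows, unlike SD
    return sd, sem

sd, sem = sd_and_sem([4.0, 5.0, 6.0, 7.0, 8.0])
```

This is why SD describes the spread of the data themselves, while SEM describes the precision of the mean as an estimate: collecting more data leaves SD roughly unchanged but makes SEM arbitrarily small.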
