Science.gov

Sample records for 10th grade benchmark

  1. Predicting 10th Grade FCAT Success. Research Brief. Volume 0401

    ERIC Educational Resources Information Center

    Froman, Terry; Bayne, Joseph

    2004-01-01

    Florida law requires that students achieve a passing score on the Grade 10 Florida Comprehensive Assessment Test (FCAT) to qualify for a standard high school diploma (Section 1008.22(3)(c)5, Florida Statutes). Students who were administered the Grade 10 FCAT for the first time during the 2002 administrations or later must earn a developmental…

  2. Indiana's Academic Standards: 10th Grade English/Language Arts.

    ERIC Educational Resources Information Center

    Indiana State Dept. of Education, Indianapolis.

    This booklet of academic standards spells out what students should know and be able to do in Grade 10 English/Language Arts. The booklet gives examples to help students understand what is required to meet the standards and provides a list of 10 things parents can do to help their child get a good education. It outlines the following seven…

  3. Changes in Math Proficiency between 8th and 10th Grades. Statistics in Brief.

    ERIC Educational Resources Information Center

    Rock, Don; And Others

    Between 8th and 10th grades, many students are asked to make curriculum-related decisions that may ultimately influence their achievement in core academic subjects such as mathematics. While past achievement often limits the level of courses available to a student, aspirations for postsecondary education ultimately determine the level of…

  4. Classroom Achievement Goal Structure, School Engagement, and Substance Use among 10th Grade Students in Norway

    ERIC Educational Resources Information Center

    Diseth, Åge; Samdal, Oddrun

    2015-01-01

    The present study was aimed at investigating the relationships between students' perceived classroom achievement goals, school engagement and substance use in terms of smoking and drinking, and at investigating gender differences regarding these issues in a sample of 1,239 Norwegian 10th grade students. A multivariate analysis showed that…

  5. Predicting 3rd Grade and 10th Grade FCAT Success for 2006-07. Research Brief. Volume 0601

    ERIC Educational Resources Information Center

    Froman, Terry; Rubiera, Vilma

    2006-01-01

    For the past few years the Florida School Code has set the Florida Comprehensive Assessment Test (FCAT) performance requirements for promotion of 3rd graders and graduation for 10th graders. Grade 3 students who do not score at level 2 or higher on the FCAT SSS Reading must be retained unless exempted for special circumstances. Grade 10 students…

  6. Tobacco use among 10th grade students in Istanbul and related variables.

    PubMed

    Evren, Cuneyt; Evren, Bilge; Bozkurt, Muge

    2014-04-01

    The aim of this study was to determine the prevalence of cigarette smoking and hookah use among 10th grade students in Istanbul, Turkey, and to compare sociodemographic, psychological and behavioral variables according to frequency of tobacco use. A cross-sectional online self-report survey was conducted in 45 schools from the 15 districts of Istanbul, Turkey. The questionnaire included sections about demographic data, family characteristics, school life, psychological symptoms and use of substances including tobacco, hookah, alcohol, marijuana, volatiles, heroin, cocaine, non-prescribed legal tranquillizers (benzodiazepines, alprazolam, etc.) and illegal tranquillizers (flunitrazepam). The analyses were based on 4,957 subjects. Lifetime use (trying at least once) was 45.4% for hookah and 24.4% for cigarettes. The risk of hookah and cigarette use was significantly higher in male students than in female students. Frequency of tobacco use was related to various sociodemographic, psychological and behavioral variables. The data also show that using tobacco and alcohol increases the risk of using all other substances, and that these effects are interrelated. The data suggest a link between tobacco use and substance use as well as psychological, behavioral and social factors. There is also a strong association between tobacco use and suicidal behavior, as well as self-mutilative, impulsive, hyperactive, delinquent and aggressive behavioral problems. Clarifying these relationships may be relevant to the prevention and management of tobacco use and of related problems, such as substance use, impulsivity, hyperactivity, delinquency, aggression, self-mutilation and suicidal behavior, among 10th grade students in Istanbul.

  7. The Effect of Case-Based Instruction on 10th Grade Students' Understanding of Gas Concepts

    ERIC Educational Resources Information Center

    Yalçinkaya, Eylem; Boz, Yezdan

    2015-01-01

    The main purpose of the present study was to investigate the effect of case-based instruction on remedying 10th grade students' alternative conceptions related to gas concepts. 128 tenth grade students from two high schools participated in this study. In each school, one of the classes was randomly assigned as the experimental group and the…

  8. Influence of V-Diagrams on 10th Grade Turkish Students' Achievement in the Subject of Mechanical Waves

    ERIC Educational Resources Information Center

    Tekes, Hanife; Gonen, Selahattin

    2012-01-01

    The purpose of the present study was to examine how the use of V-diagrams, one of the learning techniques used in laboratory work, influenced students' achievement in experiments conducted for the 10th grade lesson unit on "waves". In the study, a quasi-experimental design with a pretest and posttest control group was used. The study was…

  9. Investigating the Effects of a DNA Fingerprinting Workshop on 10th Grade Students' Self Efficacy and Attitudes toward Science.

    ERIC Educational Resources Information Center

    Sonmez, Duygu; Simcox, Amanda

    The purpose of this study was to investigate the effects of a DNA Fingerprinting Workshop on 10th grade students' self-efficacy and attitudes toward science. The content of the workshop was based on the high school science curriculum and included multimedia instruction, a laboratory experiment and the participation of undergraduate students as mentors. N=93…

  10. School Climate and the Relationship to Student Learning of Hispanic 10th Grade Students in Arizona Schools

    ERIC Educational Resources Information Center

    Nava Delgado, Mauricio

    2011-01-01

    This study provided an analysis of Hispanic 10th grade students' academic achievement in the areas of mathematics, reading and writing as measured by Arizona's Instrument to Measure Standards. The study is based on data from 163 school districts and 25,103 (95%) students in the state of Arizona as published by the Arizona Department of…

  11. Examining General and Specific Factors in the Dimensionality of Oral Language and Reading in 4th-10th Grades

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Koon, Sharon; Petscher, Yaacov; Mitchell, Alison; Truckenmiller, Adrea

    2015-01-01

    The objective of this study was to explore dimensions of oral language and reading and their influence on reading comprehension in a relatively understudied population--adolescent readers in 4th through 10th grades. The current study employed latent variable modeling of decoding fluency, vocabulary, syntax, and reading comprehension so as to…

  12. Progression in Complexity: Contextualizing Sustainable Marine Resources Management in a 10th Grade Classroom

    NASA Astrophysics Data System (ADS)

    Bravo-Torija, Beatriz; Jiménez-Aleixandre, María-Pilar

    2012-01-01

    Sustainable management of marine resources raises great challenges. Working with this socio-scientific issue in the classroom requires students to apply complex models about energy flow and trophic pyramids in order to understand that food chains represent transfer of energy, to construct meanings for sustainable resources management through discourse, and to connect them to actions and decisions in a real-life context. In this paper we examine the process of elaboration of plans for resources management in a marine ecosystem by 10th grade students (15-16 years old) in the context of solving an authentic task. A complete class (N = 14) worked in a sequence about ecosystems. Working in small groups, the students made models of energy flow and trophic pyramids, and used them to solve the problem of feeding a small community for a long time. Data collection included videotaping and audiotaping of all of the sessions, and collecting the students' written productions. The research objective is to examine the process of designing a plan for sustainable resources management in terms of the discursive moves of the students across stages in contextualizing practices, or different degrees of complexity (Jiménez-Aleixandre & Reigosa, International Journal of Science Education, 14(1): 51-61, 2006), understood as transformations from theoretical statements to decisions about the plan. The analysis of students' discursive moves shows how the groups progressed through stages of connecting different models, between them and with the context, in order to solve the task. The challenges related to taking this sustainability issue to the classroom are discussed.

  13. Human papillomavirus vaccine uptake, knowledge and attitude among 10th grade students in Berlin, Germany, 2010

    PubMed Central

    Stöcker, Petra; Dehnert, Manuel; Schuster, Melanie; Wichmann, Ole; Deleré, Yvonne

    2013-01-01

    Purpose: Since March 2007, the Standing Committee on Vaccination (STIKO) recommends HPV vaccination for all 12–17 y-old females in Germany. In the absence of an immunization register, we aimed at assessing HPV-vaccination coverage and knowledge among students in Berlin, the largest city in Germany, to identify factors influencing HPV-vaccine uptake. Results: Between September and December 2010, 442 students completed the questionnaire (mean age 15.1; range 14–19). In total 281/442 (63.6%) students specified HPV correctly as a sexually transmitted infection. Of 238 participating girls, 161 (67.6%) provided their vaccination records. Among these, 66 (41.0%) had received the recommended three HPV-vaccine doses. Reasons for being HPV-unvaccinated were reported by 65 girls: Dissuasion from parents (40.2%), dissuasion from their physician (18.5%), and concerns about side-effects (30.8%) (multiple choices possible). The odds of being vaccinated increased with age (Odds Ratio (OR) 2.19, 95% Confidence Interval (CI) 1.16, 4.15) and decreased with negative attitude toward vaccinations (OR = 0.33, 95%CI 0.13, 0.84). Methods: Self-administered questionnaires were distributed to 10th grade school students in 14 participating schools in Berlin to assess socio-demographic characteristics, knowledge, and statements on vaccinations. Vaccination records were reviewed. Multivariable statistical methods were applied to identify independent predictors for HPV-vaccine uptake among female participants. Conclusions: HPV-vaccine uptake was low among school girls in Berlin. Both, physicians and parents were influential regarding their HPV-vaccination decision even though personal perceptions played an important role as well. School programs could be beneficial to improve knowledge related to HPV and vaccines, and to offer low-barrier access to HPV vaccination. PMID:22995838
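
    The odds ratios and confidence intervals above come from a multivariable logistic regression of vaccine uptake on predictors such as age and attitude toward vaccinations. The following is a minimal sketch of that kind of analysis, not the study's own code: the data frame, column names, and simulated values are hypothetical stand-ins.

```python
# Minimal sketch (hypothetical data, not the study's dataset): a multivariable
# logistic regression of HPV-vaccine uptake on age and attitude, reported as
# odds ratios with 95% confidence intervals, as in the abstract above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 161  # girls who provided vaccination records
df = pd.DataFrame({
    "age": rng.integers(14, 20, n),
    "negative_attitude": rng.integers(0, 2, n),  # 1 = negative toward vaccines
})
logit_p = -6.0 + 0.4 * df["age"] - 1.0 * df["negative_attitude"]
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("vaccinated ~ age + negative_attitude", data=df).fit(disp=0)
odds_ratios = np.exp(fit.params).rename("OR")
conf_int = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))
```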

  14. The Earlier the Better? Taking the AP® in 10th Grade. Research Report No. 2012-10

    ERIC Educational Resources Information Center

    Rodriguez, Awilda; McKillip, Mary E. M.; Niu, Sunny X.

    2013-01-01

    In this report, the authors examine the impact of scoring a 1 or 2 on an AP® Exam in 10th grade on later AP Exam participation and performance. As access to AP courses increases within and across schools, a growing number of students are taking AP courses and exams in the earlier grades of high school. Using a matched sample of AP and no-AP…

  15. Changes in Educational Expectations between 10th and 12th Grades across Cohorts

    ERIC Educational Resources Information Center

    Park, Sueuk; Wells, Ryan; Bills, David

    2015-01-01

    The mean levels of educational expectations of American high school students have increased over the past generation; individual educational expectations change as students mature. Using the National Education Longitudinal Study and the Education Longitudinal Study, we examined simultaneously the changes in individuals' expectations from 10th to…

  16. Comparing Overexcitabilities of Gifted and Non-Gifted 10th Grade Students in Turkey

    ERIC Educational Resources Information Center

    Yakmaci-Guzel, Buket; Akarsu, Fusun

    2006-01-01

    The study compares overexcitability scores of Turkish 10th graders who are grouped in terms of their intellectual abilities, motivation, creativity and leadership as well as gender. 711 students who were administered the Raven Advanced Progressive Matrices Test (APM) were divided into three intellectual ability categories. From this pool, 105 subjects…

  17. Evaluation of the 10th Grade Computerized Mathematics Curriculum from the Perspective of the Teachers and Educational Supervisors in the Southern Region in Jordan

    ERIC Educational Resources Information Center

    Al-Tarawneh, Sabri Hassan; Al-Qadi, Haitham Mamdouh

    2016-01-01

    This study aimed at evaluating the 10th grade computerized mathematics curriculum from the perspective of the teachers and supervisors in the southern region in Jordan. The study population consisted of all the teachers who teach the 10th grade in the southern region, with a total of 309 teachers and 20 supervisors. The sample consisted of…

  18. Growth: How Much is Too Much? Teacher's Guide. Science Module (9th-10th Grade Biology).

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This is a teacher's guide for a learning module designed to integrate environmental education into ninth- and tenth-grade biology classes. This module and a companion social studies module were pilot tested in Gwinnett County, Georgia in 1975-76. The module is divided into four parts. Part one provides a broad overview of unit content and…

  19. Water: How Good is Good Enough? Teacher's Guide. Science Module (9th-10th Grade Chemistry).

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This is a teacher's guide for a module designed to integrate environmental education into ninth- and tenth-grade chemistry classes. The module, pilot tested in Gwinnett County, Georgia in classes of students, many of whom had learning disabilities, emphasizes activity learning and considerable review. The module is divided into four parts. Part…

  20. Orange County Academic Decathlon for 9th and 10th Grade Students. Handbook.

    ERIC Educational Resources Information Center

    Orange County Academic Decathalon Association, CA.

    Orange County (California) students in grades 9 and 10 compete in an annually held series of 10 competitive events measuring academic strengths. These events include tests in grammar and literature, fine arts, mathematics, science, social science, study skills, and a super quiz--a team event held before a large audience. In addition, there are…

  1. Predicting 3rd Grade and 10th Grade FCAT Success for 2007-08. Research Brief. Volume 0702

    ERIC Educational Resources Information Center

    Froman, Terry; Rubiera, Vilma

    2008-01-01

    For the past few years the Florida School Code has set the Florida Comprehensive Assessment Test (FCAT) performance requirements for promotion of 3rd graders and graduation for 10th graders. Grade 3 students who do not score at level 2 or higher on the FCAT SSS Reading must be retained unless exempted for special circumstances. Grade 10 students…

  2. Examining General and Specific Factors in the Dimensionality of Oral Language and Reading in 4th–10th Grades

    PubMed Central

    Foorman, Barbara R.; Koon, Sharon; Petscher, Yaacov; Mitchell, Alison; Truckenmiller, Adrea

    2015-01-01

    The objective of this study was to explore dimensions of oral language and reading and their influence on reading comprehension in a relatively understudied population—adolescent readers in 4th through 10th grades. The current study employed latent variable modeling of decoding fluency, vocabulary, syntax, and reading comprehension so as to represent these constructs with minimal error and to examine whether residual variance unaccounted for by oral language can be captured by specific factors of syntax and vocabulary. A 1-, 3-, 4-, and bifactor model were tested with 1,792 students in 18 schools in 2 large urban districts in the Southeast. Students were individually administered measures of expressive and receptive vocabulary, syntax, and decoding fluency in mid-year. At the end of the year students took the state reading test as well as a group-administered, norm-referenced test of reading comprehension. The bifactor model fit the data best in all 7 grades and explained 72% to 99% of the variance in reading comprehension. The specific factors of syntax and vocabulary explained significant unique variance in reading comprehension in 1 grade each. The decoding fluency factor was significantly correlated with the reading comprehension and oral language factors in all grades, but, in the presence of the oral language factor, was not significantly associated with the reading comprehension factor. Results support a bifactor model of lexical knowledge rather than the 3-factor model of the Simple View of Reading, with the vast amount of variance in reading comprehension explained by a general oral language factor. PMID:26346839
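
    The bifactor result above (a general oral language factor explaining most of the variance in reading comprehension, with only small specific factors) can be illustrated with a toy simulation. This is only an illustrative sketch under assumed loadings, not the study's data or latent variable model.

```python
# Toy illustration (not the study's data or model): scores driven by a general
# oral-language factor plus small specific vocabulary/syntax factors, showing
# how the general factor alone can explain most comprehension variance.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1792
general = rng.normal(size=n)        # general oral language factor
spec_vocab = rng.normal(size=n)     # specific vocabulary factor
spec_syntax = rng.normal(size=n)    # specific syntax factor

vocabulary = 0.8 * general + 0.3 * spec_vocab + rng.normal(scale=0.4, size=n)
syntax = 0.8 * general + 0.3 * spec_syntax + rng.normal(scale=0.4, size=n)
comprehension = 0.9 * general + 0.1 * spec_vocab + rng.normal(scale=0.3, size=n)

# The indicator scores are strongly correlated through the general factor.
print(f"corr(vocabulary, syntax) = {np.corrcoef(vocabulary, syntax)[0, 1]:.2f}")

# Regressing comprehension on the general factor alone recovers most of its
# variance, mirroring the bifactor finding (72%-99% explained) reported above.
fit = sm.OLS(comprehension, sm.add_constant(general)).fit()
print(f"R^2 from the general factor alone: {fit.rsquared:.2f}")
```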

  3. Energy-drink consumption and its relationship with substance use and sensation seeking among 10th grade students in Istanbul.

    PubMed

    Evren, Cuneyt; Evren, Bilge

    2015-06-01

    The aim of this study was to determine the prevalence and correlates of energy-drink (ED) consumption among 10th grade students in Istanbul, Turkey. A cross-sectional online self-report survey was conducted in 45 schools from the 15 districts of Istanbul. The questionnaire included sections about demographic data, self-destructive behavior and use of substances including tobacco, alcohol and drugs. The Psychological Screening Test for Adolescents (PSTA) was also used. The analyses were based on 4,957 subjects. The rate of those who reported ED consumption at least once within the last year was 62.0% (n=3072), whereas the rate of those who reported ED consumption at least once a month was 31.1%. There were consistent, statistically significant associations of gender, lifetime substance use (tobacco, alcohol and drug use), measures of sensation seeking, psychological problems (depression, anxiety, anger, impulsivity) and self-destructive behavior (self-harming behavior and suicidal thoughts) with ED consumption. In logistic regression models, male gender, sensation seeking, and lifetime tobacco, alcohol and drug use predicted all frequencies of ED consumption. In addition to these predictors, anger and self-harming behavior also predicted ED consumption at least once a month. There were no interactions among the associations of lifetime tobacco, alcohol and drug use with ED consumption. The findings suggest that the ED consumption of male students is related to three clusters of substances (tobacco, alcohol and drugs) through sensation seeking, and that these relationships do not interact with each other. PMID:26006774
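
    The abstract reports logistic regression models and a check for interactions among lifetime tobacco, alcohol and drug use. A hedged sketch of how such an interaction check might look, using hypothetical simulated survey columns rather than the actual PSTA data:

```python
# Hedged sketch (simulated columns, not the survey data): testing whether
# lifetime tobacco and alcohol use interact when predicting monthly
# energy-drink consumption, via a product term in a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 4957
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "tobacco": rng.integers(0, 2, n),   # lifetime tobacco use
    "alcohol": rng.integers(0, 2, n),   # lifetime alcohol use
})
lp = -1.5 + 0.3 * df["male"] + 0.8 * df["tobacco"] + 0.7 * df["alcohol"]
df["ed_monthly"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

inter = smf.logit("ed_monthly ~ male + tobacco * alcohol", data=df).fit(disp=0)
# A non-significant tobacco:alcohol coefficient is consistent with the
# "no interaction" finding reported in the abstract.
print(inter.summary2().tables[1].loc["tobacco:alcohol"])
```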

  4. Investigating the intrinsic and extrinsic work values of 10th grade students in science-oriented charter schools

    NASA Astrophysics Data System (ADS)

    Ozer, Ozgur

    The purpose of this study was to investigate to what extent gender, achievement level, and income level predict the intrinsic and extrinsic work values of 10th grade students. The study explored whether group differences were good predictors of scores in work values. The research was a descriptive, cross-sectional study conducted on 131 10th graders who attended science-oriented charter schools. Students took Super's Work Values Instrument, a Likert-type test that links to 15 work values, which can be categorized as intrinsic and extrinsic values (Super, 1970). Multiple regression analysis was employed as the main analysis followed by ANCOVA. Multiple regression analysis results indicated that there is evidence that 8.9% of the variance in intrinsic work values and 10.2% of the variance in extrinsic work values can be explained by the independent variables ( p < .05). Achievement Level and Income Level may help predict intrinsic work value scores; Achievement Level may also help predict extrinsic work values. Achievement Level was the covariate in ANCOVA. Results indicated that males (M = .174) in this sample have a higher mean of extrinsic work values than that of females (M = -.279). However, there was no statistically significant difference between the intrinsic work values by gender. One possible interpretation of this might be school choice; students in these science-oriented charter schools may have higher intrinsic work values regardless of gender. Results indicated that there was no statistically significant difference among the means of extrinsic work values by income level (p < .05). However, free lunch students (M = .268) have a higher mean of intrinsic work values than that of paid lunch students ( M = -.279). A possible interpretation of this might be that lower income students benefit greatly from the intrinsic work values in overcoming obstacles. Further research is needed in each of these areas. The study produced statistically significant results
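
    As a rough illustration of the ANCOVA step described above (comparing work values by gender with achievement level as the covariate), here is a minimal sketch on hypothetical data; the variable names and effect sizes are assumptions, not the study's instrument or results.

```python
# Minimal sketch (hypothetical data, assumed variable names): an ANCOVA-style
# comparison of extrinsic work values by gender, adjusting for achievement
# level as the covariate, analogous to the follow-up analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 131
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "achievement": rng.normal(size=n),
})
df["extrinsic"] = 0.4 * df["male"] + 0.2 * df["achievement"] + rng.normal(size=n)

fit = smf.ols("extrinsic ~ C(male) + achievement", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # gender effect adjusted for achievement
print(f"Model R^2: {fit.rsquared:.3f}")
```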

  5. Perceptions of 9th and 10th Grade Students on How Their Environment, Cognition, and Behavior Motivate Them in Algebra and Geometry Courses

    ERIC Educational Resources Information Center

    Harootunian, Alen

    2012-01-01

    In this study, relationships were examined between students' perception of their cognition, behavior, environment, and motivation. The purpose of the research study was to explore the extent to which 9th and 10th grade students' perception of environment, cognition, and behavior can predict their motivation in Algebra and Geometry…

  6. Gender and Ethnic Differences in Smoking, Drinking and Illicit Drug Use among American 8th, 10th and 12th Grade Students, 1976-2000.

    ERIC Educational Resources Information Center

    Wallace, John M., Jr.; Bachman, Jerald G.; O'Malley, Patrick M.; Schulenberg, John E.; Cooper, Shauna M.; Johnston, Lloyd D.

    2003-01-01

    This paper examines ethnic differences in licit and illicit drug use among American 8th, 10th, and 12th grade students, with a particular focus on girls. Across ethnic groups, drug use is highest among Native American girls and lowest among black and Asian American girls. Trend data suggest that girls' and boys' drug use patterns are converging.…

  7. The Basic Program of Vocational Agriculture in Louisiana. Ag I and Ag II (9th and 10th Grades). Volume I. Bulletin 1690-I.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This document is the first volume of a state curriculum guide on vocational agriculture for use in the 9th and 10th grades in Louisiana. Three instructional areas are profiled in this volume: orientation to vocational agriculture, agricultural leadership, and soil science. The three units of the orientation area cover introducing beginning…

  8. The Association of Employment and Physical Activity among Black and White 10th and 12th Grade Students in the United States

    PubMed Central

    2010-01-01

    Background Evidence of an association between employment and physical activity (PA) in youth has been mixed, with studies suggesting both positive and negative associations. We examined the association between employment and PA among U.S. high school students as measured by self-reported overall PA, vigorous exercise, and participation in school athletic teams. Methods We conducted a secondary analysis, applying weighted linear regression to a sample of black and white 10th grade students (n=12,073) and 12th grade students (n=5,500) drawn from the nationally representative cross-sectional 2004 Monitoring the Future Study. Results Overall, 36.5% of 10th and 74.6% of 12th grade students were employed. In multivariable analyses, 10th graders working >10 hours a week reported less overall PA and exercise, and those working >20 hours a week reported less participation in team sports. Among 12th graders, any level of employment was associated with lower rates of team sports; those working >10 hours a week reported less overall PA; and those working >20 hours reported less exercise. Conclusions Employment at and above 10 hours per week is negatively associated with PA. Increasing work intensity may shed light on the decline of PA as adolescents grow older and merits further attention in research. PMID:20231752
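
    A minimal sketch of the weighted linear regression approach described in the Methods, using made-up variables and sampling weights rather than the Monitoring the Future data:

```python
# Sketch only (made-up variables and weights, not Monitoring the Future data):
# a survey-weighted linear regression of self-reported physical activity on
# weekly work hours, analogous to the weighted analysis described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 12073
df = pd.DataFrame({
    "work_hours": rng.choice([0, 5, 15, 25], size=n),
    "weight": rng.uniform(0.5, 2.0, n),  # survey sampling weights
})
df["pa_score"] = 5.0 - 0.03 * df["work_hours"] + rng.normal(size=n)

X = sm.add_constant(df[["work_hours"]])
wls = sm.WLS(df["pa_score"], X, weights=df["weight"]).fit()
print(wls.params)  # a negative work_hours coefficient mirrors the finding
```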

  9. Water: How Good is Good Enough? Student Book. Science Module (9th-10th Grade Chemistry). Revised Edition.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This learning module is designed to integrate environmental education into ninth- and tenth-grade chemistry classes. This module and a companion social studies module were pilot tested in Gwinnett County, Georgia in classes of students, many of whom had learning disabilities. It emphasizes activity learning. The module is divided into four parts.…

  10. A Typology of Chemistry Classroom Environments: Exploring the Relationships between 10th Grade Students' Perceptions, Attitudes and Gender

    ERIC Educational Resources Information Center

    Giallousi, M.; Gialamas, V.; Pavlatou, E. A.

    2013-01-01

    The present study was the first in Greece in which educational effectiveness theory constituted a knowledge base for investigating the impact of the chemistry classroom environment on 10th grade students' enjoyment of class. An interpretive heuristic schema was developed and utilised in order to incorporate two factors of teacher behaviour at…

  11. Growth: How Much is Too Much? Student Book. Science Module (9th-10th Grade Biology). Revised Edition.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This learning module is designed to integrate environmental education into ninth- and tenth-grade biology classes. This module and a companion social studies module were pilot tested in Gwinnett County, Georgia in 1975-76. The module is divided into four parts. Part one provides a broad overview of unit content and proposes questions to…

  12. It takes a village: the effects of 10th grade college-going expectations of students, parents, and teachers four years later.

    PubMed

    Gregory, Anne; Huang, Francis

    2013-09-01

    Adolescents are surrounded by people who have expectations about their college-going potential. Yet, few studies have examined the link between these multiple sources of college-going expectations and the actual status of students in postsecondary education years later. The study draws on data collected in the 2002-2006 Educational Longitudinal Study and employs an underutilized statistical technique (cross-classified multilevel modeling) to account for teacher reports on overlapping groups of students (typical of high school research). Results showed that positive expectations of students, parents, English, and mathematics teachers in the 10th grade each uniquely predicted postsecondary status 4 years later. As a group, the four sources of expectations explained greater variance in postsecondary education than student characteristics such as socioeconomic status and academic performance. This suggests positive expectations are additive and promotive for students regardless of their risk status. Teacher expectations were also found to be protective for low income students. Implications for future expectancy research and equity-focused interventions are discussed.
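
    The cross-classified multilevel modeling mentioned above can be approximated with crossed variance components. The sketch below uses hypothetical data and a continuous outcome for simplicity (the study itself modeled postsecondary status); the column names and effects are assumptions, not the Educational Longitudinal Study variables.

```python
# Hedged sketch (hypothetical data): crossed random effects for teachers and
# schools via variance components, in the spirit of the cross-classified
# multilevel modeling mentioned above. A continuous outcome is used here for
# simplicity; the study itself modeled postsecondary status.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 600
df = pd.DataFrame({
    "expectations": rng.normal(size=n),  # composite of the expectation sources
    "ses": rng.normal(size=n),
    "teacher": rng.integers(0, 30, n).astype(str),
    "school": rng.integers(0, 12, n).astype(str),
})
df["outcome"] = 0.5 * df["expectations"] + 0.3 * df["ses"] + rng.normal(size=n)
df["one"] = 1  # single top-level group, so teacher and school effects cross

vc = {"teacher": "0 + C(teacher)", "school": "0 + C(school)"}
model = smf.mixedlm("outcome ~ expectations + ses", data=df,
                    groups="one", vc_formula=vc, re_formula="0")
print(model.fit().summary())
```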

  13. Implementation and Evaluation of Web-Based Learning Activities on Bonding and the Structure of Matter for 10th Grade Chemistry

    NASA Astrophysics Data System (ADS)

    Frailich, Marcel

    This study deals with the development, implementation, and evaluation of web-based activities associated with the topic of chemical bonding, as taught in 10th grade chemistry. A website was developed entitled "Chemistry and the Chemical Industry in the Service of Mankind"; its URL is: http://stwww.weizmann.ac.il/g-chem/learnchem (Kesner, Frailich, & Hofstein, 2003). The main goal of this study was to assess the educational effectiveness of website activities dealing with the chemical bonding concept. These activities include visualization tools, as well as topics relevant to daily life and industrial applications. The study investigated the effectiveness of a web-based learning environment regarding the understanding of chemical bonding concepts, students' perceptions of the classroom learning environment, their attitudes regarding the relevance of learning chemistry to everyday life, and their interest in chemistry studies. As mentioned before, in the present study we focused on activities (from the website) that deal with the chemical bonding concept. The reasons for the decision to focus on this topic are the following: (1) chemical bonding is a key concept taught in 10th grade high school chemistry, and it provides the basis for many other chemistry topics taught later; and (2) chemical bonding is difficult for students to grasp using existing tools (e.g., static models in books, ball-and-stick models), which are insufficient to demonstrate the abstract nature of the phenomena associated with this topic. The four activities developed for this study are (1) models of the atomic structure, (2) metals -- structure and properties, (3) ionic substances in everyday life and in industry, and (4) molecular substances -- structure, properties, and uses. The study combined quantitative and qualitative research. The quantitative tools of the study included: A Semantic Differential questionnaire and a Chemistry Classroom Web-Based Learning Environment

  14. Power Conversion and Transmission Systems: A 9th and/or 10th Grade Industrial Education Curriculum Designed To Fulfill the Kansas State Department of Vocational Education's Level 2 Course Requirements.

    ERIC Educational Resources Information Center

    Dean, Harvey R., Ed.

    The document is a guide to a 9th and 10th grade industrial education course investigating the total system of power--how man controls, converts, transmits, and uses energy; the rationale is that if one is to learn of the total system of industry, the subsystem of power must be investigated. The guide provides a "body of knowledge" chart…

  15. Mountain Dew[R] or Mountain Don't?: A Pilot Investigation of Caffeine Use Parameters and Relations to Depression and Anxiety Symptoms in 5th- and 10th-Grade Students

    ERIC Educational Resources Information Center

    Luebbe, Aaron M.; Bell, Debora J.

    2009-01-01

    Background: Caffeine, the only licit psychoactive drug available to minors, may have a harmful impact on students' health and adjustment, yet little is known about its use or effects on students, especially from a developmental perspective. Caffeine use in 5th- and 10th-grade students was examined in a cross-sectional design, and relations and…

  16. Effect of cooperative learning strategies on student verbal interactions and achievement during conceptual change instruction in 10th grade general science

    NASA Astrophysics Data System (ADS)

    Lonning, Robert A.

    This study evaluated the effects of cooperative learning on students' verbal interaction patterns and achievement in a conceptual change instructional model in secondary science. Current conceptual change instructional models recognize the importance of student-student verbal interactions, but lack specific strategies to encourage these interactions. Cooperative learning may provide the necessary strategies. Two sections of low-ability 10th-grade students were designated the experimental and control groups. Students in both sections received identical content instruction on the particle model of matter using conceptual change teaching strategies. Students worked in teacher-assigned small groups on in-class assignments. The experimental section used cooperative learning strategies involving instruction in collaborative skills and group evaluation of assignments. The control section received no collaborative skills training and students were evaluated individually on group work. Gains in achievement were assessed using pre- and posttreatment administrations of an investigator-designed short-answer essay test. The assessment strategies used in this study represent an attempt to measure conceptual change. Achievement was related to students' ability to correctly use appropriate scientific explanations of events and phenomena and to discard use of naive conceptions. Verbal interaction patterns of students working in groups were recorded on videotape and analyzed using an investigator-designed verbal interaction scheme. The targeted verbalizations used in the interaction scheme were derived from the social learning theories of Piaget and Vygotsky. It was found that students using cooperative learning strategies showed greater achievement gains as defined above and made greater use of specific verbal patterns believed to be related to increased learning. The results of the study demonstrated that cooperative learning strategies enhance conceptual change instruction.

  17. Trends in Substance Use among 6th- to 10th-Grade Students from 1998 to 2010: Findings from a National Probability Study

    ERIC Educational Resources Information Center

    Brooks-Russell, Ashley; Farhat, Tilda; Haynie, Denise; Simons-Morton, Bruce

    2014-01-01

    Of the handful of national studies tracking trends in adolescent substance use in the United States, only the Health Behavior in School-Aged Children (HBSC) study collects data from 6th through 10th graders. The purpose of this study was to examine trends from 1998 to 2010 (four time points) in the prevalence of tobacco, alcohol, and marijuana use…

  18. REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING

    SciTech Connect

    2012-12-20

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.

  19. The Basic Program of Vocational Agriculture in Louisiana. Ag I and Ag II (9th and 10th Grades). Volume III. Bulletin 1690-III.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This curriculum guide, the third volume of the series, outlines the basic program of vocational agriculture for Louisiana students in the ninth and tenth grades. Covered in the five units on plant science are growth processes of plants, cultural practices for plants, insects affecting plants, seed and plant selection, and diseases that affect…

  1. The Impact of Internet Virtual Physics Laboratory Instruction on the Achievement in Physics, Science Process Skills and Computer Attitudes of 10th-Grade Students

    ERIC Educational Resources Information Center

    Yang, Kun-Yuan; Heh, Jia-Sheng

    2007-01-01

    The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction with traditional laboratory instruction in physics academic achievement, performance of science process skills, and computer attitudes of tenth grade students. One-hundred and fifty students from four classes at one private…

  2. Interobserver agreement for Polyomavirus nephropathy grading in renal allografts using the working proposal from the 10th Banff Conference on Allograft Pathology.

    PubMed

    Sar, Aylin; Worawichawong, Suchin; Benediktsson, Hallgrimur; Zhang, Jianguo; Yilmaz, Serdar; Trpkov, Kiril

    2011-12-01

    A classification schema for grading Polyomavirus nephropathy was proposed at the 2009 Banff allograft meeting. The schema included 3 stages of Polyomavirus nephropathy: early (stage A), florid (stage B), and late sclerosing (stage C). Grading categories for histologic viral load levels were also proposed. To examine the applicability and the interobserver agreement of the proposed Polyomavirus nephropathy grading schema, we evaluated 24 renal allograft biopsies with confirmed Polyomavirus nephropathy by histology and SV40. Four renal pathologists independently scored the Polyomavirus nephropathy stage (A, B, or C), without knowledge of the clinical history. Viral load was scored as a percent of tubules exhibiting viral replication, using either a 3-tier viral load score (1: ≤1%; 2: >1%-10%; 3: >10%) or a 4-tier score (1: ≤1%; 2: >1%-≤5%; 3: >5%-15%; 4: >15%). The κ score for the Polyomavirus nephropathy stage was 0.47 (95% confidence interval, 0.35-0.60; P < .001). There was a substantial agreement using both the 3-tier and the 4-tier scoring for the viral load (Kendall concordance coefficients, 0.72 and 0.76, respectively; P < .001 for both). A better complete agreement was found using the 3-tier viral load score. In this first attempt to evaluate the interobserver reproducibility of the proposed Polyomavirus nephropathy classifying schema, we demonstrated moderate κ agreement in assessing the Polyomavirus nephropathy stage and a substantial agreement in scoring the viral load level. The proposed grading schema can be applied in routine allograft biopsy practice for grading the Polyomavirus nephropathy stage and the viral load level. PMID:21733554
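
    The agreement statistics reported above (a multi-rater kappa for the categorical PVN stage and Kendall's concordance for the ordinal viral-load score) can be computed as sketched below on simulated ratings; this is illustrative only and, for the kappa, uses Fleiss' multi-rater formulation rather than the paper's exact procedure.

```python
# Sketch only (simulated ratings, not the study's scores): multi-rater
# agreement statistics of the kind reported above -- Fleiss' kappa for the
# categorical PVN stage (A/B/C) and Kendall's W for the ordinal viral load.
import numpy as np
from scipy.stats import rankdata
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(6)
n_biopsies, n_raters = 24, 4

# Categorical stage assigned by each rater (0 = A, 1 = B, 2 = C).
stages = rng.integers(0, 3, size=(n_biopsies, n_raters))
table, _ = aggregate_raters(stages)  # biopsies x categories count table
print("Fleiss' kappa for stage:", fleiss_kappa(table))

# Ordinal 3-tier viral-load scores; Kendall's W from rank sums
# (no tie correction, for brevity).
loads = rng.integers(1, 4, size=(n_biopsies, n_raters))
ranks = np.column_stack([rankdata(loads[:, j]) for j in range(n_raters)])
rank_sums = ranks.sum(axis=1)
s = ((rank_sums - rank_sums.mean()) ** 2).sum()
w = 12 * s / (n_raters ** 2 * (n_biopsies ** 3 - n_biopsies))
print("Kendall's W for viral load:", w)
```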

  3. Trends in Bullying, Physical Fighting, and Weapon Carrying Among 6th- Through 10th-Grade Students From 1998 to 2010: Findings From a National Study

    PubMed Central

    Brooks-Russell, Ashley; Wang, Jing; Iannotti, Ronald J.

    2014-01-01

    Objectives. We examined trends from 1998 to 2010 in bullying, bullying victimization, physical fighting, and weapon carrying and variations by gender, grade level, and race/ethnicity among US adolescents. Methods. The Health Behavior in School-Aged Children surveys of nationally representative samples of students in grades 6 through 10 were completed in 1998 (n = 15 686), 2002 (n = 14 818), 2006 (n = 9229), and 2010 (n = 10 926). We assessed frequency of bullying behaviors, physical fighting, and weapon carrying as well as weapon type and subtypes of bullying. We conducted logistic regression analyses, accounting for the complex sampling design, to identify trends and variations by demographic factors. Results. Bullying perpetration, bullying victimization, and physical fighting declined from 1998 to 2010. Weapon carrying increased for White students only. Declines in bullying perpetration and victimization were greater for boys than for girls. Declines in bullying perpetration and physical fighting were greater for middle-school students than for high-school students. Conclusions. Declines in most violent behaviors are encouraging; however, lack of decline in weapon carrying merits further attention. PMID:24825213

  4. Rates of Substance Use of American Indian Students in 8th, 10th, and 12th Grades Living on or Near Reservations: Update, 2009–2012

    PubMed Central

    Harness, Susan D.; Swaim, Randall C.; Beauvais, Fred

    2014-01-01

    Objectives Understanding the similarities and differences between substance use rates for American Indian (AI) young people and young people nationally can better inform prevention and treatment efforts. We compared substance use rates for a large sample of AI students living on or near reservations for the years 2009–2012 with national prevalence rates from Monitoring the Future (MTF). Methods We identified and sampled schools on or near AI reservations by region; 1,399 students in sampled schools were administered the American Drug and Alcohol Survey. We computed lifetime, annual, and last-month prevalence measures by grade and compared them with MTF results for the same time period. Results Prevalence rates for AI students were significantly higher than national rates for nearly all substances, especially for 8th graders. Rates of marijuana use were very high, with lifetime use higher than 50% for all grade groups. Other findings of interest included higher binge drinking rates and OxyContin® use for AI students. Conclusions The results from this study demonstrate that adolescent substance use is still a major problem among reservation-based AI adolescent students, especially 8th graders, where prevalence rates were sometimes dramatically higher than MTF rates. Given the high rates of substance use-related problems on reservations, such as academic failure, delinquency, violent criminal behavior, suicidality, and alcohol-related mortality, the costs to members of this population and to society will continue to be much too high until a comprehensive understanding of the root causes of substance use are established. PMID:24587550

  5. Writing Instructional Intervention Supplement (Benchmarks, Informal Assessments, Strategies), Grades 4-8.

    ERIC Educational Resources Information Center

    Mississippi State Dept. of Education, Jackson.

    This Writing Instructional Intervention Supplement from the Mississippi Department of Education contains benchmarks, informal assessments, and suggested teaching strategies for the fourth through eighth grades. Benchmarks outline what students should know and be able to do to meet mandated competencies. Informal and observational assessments…

  6. Teachers' Perceptions of the Effectiveness of Benchmark Assessment Data to Predict Student Math Grades

    ERIC Educational Resources Information Center

    Lewis, Lawanna M.

    2010-01-01

    The purpose of this correlational quantitative study was to examine the extent to which teachers perceive the use of benchmark assessment data as effective; the extent to which the time spent teaching mathematics is associated with students' mathematics grades; and the extent to which the results of math benchmark assessment influence teachers'…

  7. Columbine's 10th Anniversary Finds Lessons Learned

    ERIC Educational Resources Information Center

    Trump, Kenneth S.

    2009-01-01

    When school administrators hear that the 10th anniversary of the Columbine High School attack is approaching, most shake their heads in disbelief. They are amazed that 10 years have passed since this watershed event, which changed the landscape of K-12 school safety. In this article, the author reflects on the lessons learned from the Columbine…

  8. The Relationship between Mid-Year Benchmark and End-of-Grade Assessments: 2010-11. D&A Report No. 11.24

    ERIC Educational Resources Information Center

    McMillen, Brad

    2012-01-01

    This analysis was conducted to examine the relationship between Wake County Public Schools students' performance on mid-year benchmark assessments and End-of-Grade (EOG) tests in both reading and math in grades 3-5 in school year 2010-11. Strong positive correlations were found between the mid-year benchmark assessment and EOG test scores for each…
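
    A minimal sketch of the kind of correlation check described above, on made-up benchmark and EOG scores rather than the district's data:

```python
# Minimal sketch (made-up scores): the kind of correlation check described
# above between mid-year benchmark scores and end-of-grade test scores.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
benchmark = rng.normal(size=500)
eog = 0.8 * benchmark + rng.normal(scale=0.5, size=500)  # correlated by design
r, p = pearsonr(benchmark, eog)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```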

  9. Racial/Ethnic Differences in the Relationship Between Parental Education and Substance Use Among U.S. 8th-, 10th-, and 12th-Grade Students: Findings From the Monitoring the Future Project*

    PubMed Central

    Bachman, Jerald G.; O'Malley, Patrick M.; Johnston, Lloyd D.; Schulenberg, John E.; Wallace, John M.

    2011-01-01

    Objective: Secondary school students' rates of substance use vary significantly by race/ethnicity and by their parents' level of education (a proxy for socioeconomic status). The relationship between students' substance use and race/ethnicity is, however, potentially confounded because parental education also differs substantially by race/ethnicity. This report disentangles the confounding by examining White, African American, and Hispanic students separately, showing how parental education relates to cigarette smoking, heavy drinking, and illicit drug use. Method: Data are from the 1999-2008 Monitoring the Future nationally representative in-school surveys of more than 360,000 students in Grades 8, 10, and 12. Results: (a) High proportions of Hispanic students have parents with the lowest level of education, and the relatively low levels of substance use by these students complicate total sample data linking parental education and substance use. (b) There are clear interactions: Compared with White students, substance use rates among African American and Hispanic students are less strongly linked with parental education (and are lower overall). (c) Among White students, 8th and 10th graders show strong negative relations between parental education and substance use, whereas by 12th grade their heavy drinking and marijuana use are not correlated with parental education. Conclusions: Low parental education appears to be much more of a risk factor for White students than for Hispanic or African American students. Therefore, in studies of substance use epidemiology, findings based on predominantly White samples are not equally applicable to other racial/ethnic subgroups. Conversely, the large proportions of minority students in the lowest parental education category can mask or weaken findings that are clearer among White students alone. PMID:21388601

  10. Analyses of Weapons-Grade MOX VVER-1000 Neutronics Benchmarks: Pin-Cell Calculations with SCALE/SAS2H

    SciTech Connect

    Ellis, R.J.

    2001-01-11

    A series of unit pin-cell benchmark problems have been analyzed related to irradiation of mixed oxide fuel in VVER-1000s (water-water energetic reactors). One-dimensional, discrete-ordinates eigenvalue calculations of these benchmarks were performed at ORNL using the SAS2H control sequence module of the SCALE-4.3 computational code system, as part of the Fissile Materials Disposition Program (FMDP) of the US DOE. Calculations were also performed using the SCALE module CSAS to confirm the results. The 238 neutron energy group SCALE nuclear data library 238GROUPNDF5 (based on ENDF/B-V) was used for all calculations. The VVER-1000 pin-cell benchmark cases modeled with SAS2H included zero-burnup calculations for eight fuel material variants (from LEU UO{sub 2} to weapons-grade MOX) at five different reactor states, and three fuel depletion cases up to high burnup. Results of the SAS2H analyses of the VVER-1000 neutronics benchmarks are presented in this report. Good general agreement was obtained between the SAS2H results, the ORNL results using HELIOS-1.4 with ENDF/B-VI nuclear data, and the results from several Russian benchmark studies using the codes TVS-M, MCU-RFFI/A, and WIMS-ABBN. This SAS2H benchmark study is useful for the verification of HELIOS calculations, the HELIOS code being the principal computational tool at ORNL for physics studies of assembly design for weapons-grade plutonium disposition in Russian reactors.

  11. PREFACE: 10th Joint Conference on Chemistry

    NASA Astrophysics Data System (ADS)

    2016-02-01

    The 10th Joint Conference on Chemistry is an international conference organized by the chemistry departments of four universities in central Java, Indonesia: Sebelas Maret University, Diponegoro University, Semarang State University and Soedirman University. It was held in Solo, Indonesia, on September 8-9, 2015, with a total of 133 participants including the invited speakers. The conference emphasized multidisciplinary chemical issues and the impact of today's sustainable chemistry, covering the following topics: • Material innovation for sustainable goals • Development of renewable and sustainable energy based on chemistry • New drug design, experimental and theoretical methods • Green synthesis and characterization of materials (from molecules to functionalized materials) • Catalysis as core technology in industry • Natural product isolation and optimization

  12. 10th Annual Great Moonbuggy Race

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Students from across the United States and as far away as Puerto Rico came to Huntsville, Alabama, for the 10th annual Great Moonbuggy Race at the U.S. Space & Rocket Center. Sixty-eight teams, representing high schools and colleges from across the United States and Puerto Rico, raced human-powered vehicles over lunar-like terrain. Vehicles powered by two team members, one male and one female, raced one at a time over a half-mile obstacle course of simulated moonscape terrain. The competition was inspired by the development, some 30 years ago, of the Lunar Roving Vehicle (LRV), a program managed by the Marshall Space Flight Center. The LRV team had to design a compact, lightweight, all-terrain vehicle that could be transported to the Moon in the small Apollo spacecraft. The Great Moonbuggy Race challenges students to design and build a human-powered vehicle so they will learn how to deal with real-world engineering problems similar to those faced by the actual NASA LRV team. In this photograph, racers from C-1 High School in Lafayette County, Missouri, get ready to tackle the course. The team pedaled its way to victory over 29 other teams to take first place honors. It was the second year in a row that a team from the school placed first in the high school division. (NASA/MSFC)

  13. 10th Annual Great Moonbuggy Race

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Students from across the United States and as far away as Puerto Rico came to Huntsville, Alabama, for the 10th annual Great Moonbuggy Race at the U.S. Space & Rocket Center. Sixty-eight teams, representing high schools and colleges from across the United States and Puerto Rico, raced human-powered vehicles over lunar-like terrain. Vehicles powered by two team members, one male and one female, raced one at a time over a half-mile obstacle course of simulated moonscape terrain. The competition was inspired by the development, some 30 years ago, of the Lunar Roving Vehicle (LRV), a program managed by the Marshall Space Flight Center. The LRV team had to design a compact, lightweight, all-terrain vehicle that could be transported to the Moon in the small Apollo spacecraft. The Great Moonbuggy Race challenges students to design and build a human-powered vehicle so they will learn how to deal with real-world engineering problems similar to those faced by the actual NASA LRV team. In this photograph, Team No. 1 from North Dakota State University in Fargo conquers one of several obstacles on its way to victory. The team captured first place honors in the college level competition.

  14. PREFACE: 10th International LISA Symposium

    NASA Astrophysics Data System (ADS)

    Ciani, Giacomo; Conklin, John W.; Mueller, Guido

    2015-05-01

    large mission in Europe, and a potential comprehensive technology development program followed by a number one selection in the 2020 Decadal Survey in the U.S. The selection of L2 was combined with the selection of L3 and the newly formed eLISA consortium submitted an updated NGO concept under the name eLISA, or Evolved LISA, to the competition. It was widely believed that the launch date of 2028 for L2, would be seen by the selection committee as providing sufficient time to retire any remaining technological risks for LISA. However, the committee selected the 'Hot and Energetic Universe', an X-ray mission, as the science theme for L2 and the 'Gravitational Universe', the eLISA science theme, for L3. Although very disappointed, it was not a surprising decision. LPF did experience further delays just prior to and during the selection process, which may have influenced the decision. The strong technology program in the U.S. never materialized because WFIRST, the highest priority large mission in the 2010 Decadal following JWST, not only moved ahead but was also up-scoped significantly. The L3 selection, the WFIRST schedule, and the missing comprehensive technology development in the U.S. will make a launch of a GW mission in the 2020s very difficult. Although many in the LISA community, including ourselves, did not want to accept this harsh reality, this was the situation just prior to the 10th LISA symposium. However, despite all of this, the LISA team is now hopeful! In May of 2014 the LISA community gathered at the University of Florida in Gainesville to discuss progress in both the science and technology of LISA. The most notable plenary and contributed sessions included updates on the progress of LISA Pathfinder, which remains on track for launch in the second half of 2015(!), the science of LISA which ranges from super-massive black hole mergers and cosmology to the study of compact binaries within our own galaxy, and updates from other programs that share some of

  15. A Chemistry Course for High Ability 8th, 9th, and 10th Graders.

    ERIC Educational Resources Information Center

    Kilker, Richard, Jr.

    1985-01-01

    Describes a chemistry course designed, in cooperation with local public school districts, to intellectually challenge a group of 8th, 9th, and 10th grade students. Organic chemistry and biochemistry are integrated into the course (titled Chemistry and Everyday Life) to emphasize practical applications of chemistry. The course syllabus is included.…

  16. Self-Perception and Achievement of Black Urban 10th Graders.

    ERIC Educational Resources Information Center

    Reglin, Gary

    Explores the following five dimensions of self-perception held by black urban male 10th-grade students in North Carolina: (1) scholastic competence; (2) athletic competence; (3) physical appearance; (4) behavioral conduct; and (5) job competence. Investigates differences in these aspects of self-concept for 30 students scoring above and 30 scoring…

  17. Program To Increase Selected 9th and 10th Graders' Career Decision-Making Skills.

    ERIC Educational Resources Information Center

    Lee, Linda D.

    This study addresses some of the career decision challenges facing 9th- and 10th-grade students. The researcher discovered that many students possessed inadequate decision-making strategies, that counselors did not focus on career planning prior to and during registration, and that the school district lacked a comprehensive career guidance…

  18. EDITORIAL: 10th anniversary of attosecond pulses

    NASA Astrophysics Data System (ADS)

    Kienberger, Reinhard; Chang, Zenghu; Nam, Chang Hee

    2012-04-01

    times in atoms and molecules, such as the Auger decay time and autoionization lifetime, have been measured directly, as compared to the indirect spectroscopic measurements normally done using synchrotron light sources. The reconstruction of molecular orbital wave functions has been demonstrated by developing the molecular tomography method. Ultrafast phenomena in condensed matter and in nanostructures have also been tackled. The successful development of attosecond light sources has thus opened up a variety of new research activities in ultrafast optical science; this will continue and accelerate further in coming years, with intensive research investments and more groups joining the field of attosecond science. In this special issue celebrating the 10th year of attosecond pulse generation, 6 review articles and 16 regular articles are included. Although it does not cover all active research areas, we sincerely hope it gives a glimpse of the active research in attosecond science throughout the world.

  19. Ninth Rib Syndrome after 10th Rib Resection

    PubMed Central

    Yu, Hyun Jeong; Jeong, Yu Sub; Lee, Dong Hoon

    2016-01-01

    The 12th rib syndrome is a disease that causes pain between the upper abdomen and the lower chest. It is assumed that impingement on the nerves between the ribs causes pain in the lower chest, upper abdomen, and flank. A 74-year-old female patient visited a pain clinic complaining of pain in her back and left chest wall, rated 7 on the 0-10 Numeric Rating Scale (NRS). She had undergone lateral fixation at T12-L2 6 years earlier. After the operation, she had multiple osteoporotic compression fractures. When the spine was bent, the patient complained of a sharp pain in the left mid-axillary line and radiating pain toward the abdomen. On physical examination, the 10th rib was not felt, and imaging of the rib cage confirmed that the left 10th rib was severed. When pressure was applied from the legs toward the patient's 9th rib, pain was reproduced. Therefore, the patient was diagnosed with 9th rib syndrome, and ultrasound-guided 9th and 10th intercostal nerve blocks were performed around the tip of the severed 10th rib. In addition, local anesthetics with triamcinolone were administered into the muscles beneath the 9th rib at the point of greatest tenderness. The patient's pain was reduced to 2 points on the NRS. In this case, it is suspected that the patient had undergone a partial resection of the left 10th rib in the past, and that subsequent compression fractures at T8 and T9 led to deformation of the rib cage, causing the tip of the remaining 10th rib to impinge on the 9th intercostal nerve and produce pain. PMID:27413484

  1. Beyond Discipline: From Compliance to Community. 10th Anniversary Edition

    ERIC Educational Resources Information Center

    Kohn, Alfie

    2006-01-01

    In this 10th anniversary edition of an Association for Supervision and Curriculum Development (ASCD) best seller, the author reflects on his revolutionary ideas in the context of today's emphasis on school accountability and high-stakes testing. The author relates how his innovative approach--where teachers learn to work with students, rather than…

  2. Factors Related to Alcohol Use among 6th through 10th Graders: The Sarasota County Demonstration Project

    ERIC Educational Resources Information Center

    Eaton, Danice K.; Forthofer, Melinda S.; Zapata, Lauren B.; Brown, Kelli R. McCormack; Bryant, Carol A.; Reynolds, Sherri T.; McDermott, Robert J.

    2004-01-01

    Alcohol consumption by youth can produce negative health outcomes. This study identified correlates of lifetime alcohol use, recent alcohol use, and binge drinking among youth in sixth through 10th grade (n = 2,004) in Sarasota County, Fla. Results from a closed-ended, quantitative survey acknowledged a range of personal, social and environmental…

  3. How Many Letters Should Preschoolers in Public Programs Know? The Diagnostic Efficiency of Various Preschool Letter-Naming Benchmarks for Predicting First-Grade Literacy Achievement

    PubMed Central

    Piasta, Shayne B.; Petscher, Yaacov; Justice, Laura M.

    2015-01-01

    Review of current federal and state standards indicates little consensus or empirical justification regarding appropriate goals, often referred to as benchmarks, for preschool letter-name learning. The present study investigated the diagnostic efficiency of various letter-naming benchmarks using a longitudinal database of 371 children who attended publicly funded preschools. Children’s uppercase and lowercase letter-naming abilities were assessed at the end of preschool, and their literacy achievement on 3 standardized measures was assessed at the end of 1st grade. Diagnostic indices (sensitivity, specificity, and negative and positive predictive power) were generated to examine the extent to which attainment of various preschool letter-naming benchmarks was associated with later risk for literacy difficulties. Results indicated generally high negative predictive power for benchmarks requiring children to know 10 or more letter names by the end of preschool. Balancing across all diagnostic indices, optimal benchmarks of 18 uppercase and 15 lowercase letter names were identified. These findings are discussed in terms of educational implications, limitations, and future directions. PMID:26346643
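
    The diagnostic indices reported in this study all derive from a simple 2 x 2 table crossing benchmark attainment at the end of preschool with literacy difficulty at the end of grade 1. The short Python sketch below, using hypothetical counts rather than the study's data, shows how sensitivity, specificity, and negative and positive predictive power are computed from such a table.

        # Minimal sketch with hypothetical counts; not the study's actual data or tooling.
        def diagnostic_indices(true_pos, false_pos, false_neg, true_neg):
            """Indices for a benchmark used to flag risk of later literacy difficulty."""
            sensitivity = true_pos / (true_pos + false_neg)   # at-risk children correctly flagged
            specificity = true_neg / (true_neg + false_pos)   # not-at-risk children correctly cleared
            positive_pp = true_pos / (true_pos + false_pos)   # flagged children who do struggle later
            negative_pp = true_neg / (true_neg + false_neg)   # cleared children who do not struggle later
            return sensitivity, specificity, positive_pp, negative_pp

        # Hypothetical 2 x 2 table: rows = below / at-or-above an 18-letter benchmark at preschool exit,
        # columns = literacy difficulty / no difficulty at the end of grade 1.
        sens, spec, ppp, npp = diagnostic_indices(true_pos=40, false_pos=60, false_neg=10, true_neg=261)
        print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  PPP={ppp:.2f}  NPP={npp:.2f}")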

  4. [10th case of lobomycosis observed in French Guiana].

    PubMed

    Roche, J C; Monod, L

    1976-01-01

    Lobomycosis is a disease specific to the South American continent; it is rare but not exceptional, since 11 cases have already been observed in French Guiana. Apropos of the 10th case, the authors recall the circumstances of its discovery and the basic elements of the microscopic diagnosis. Current progress in in vivo culture techniques should allow better knowledge of the pathogenic agent in the future.

  5. meeting summary 10th AMS Symposium on Education.

    NASA Astrophysics Data System (ADS)

    Smith, D. R.; Hayes, M. C.; Ramamurthy, M. K.; Zeitler, J. W.; Murphy, K. A.; Croft, P. J.; Nese, J. M.; Friedman, H. A.; Robinson, H. W.; Thormeyer, C. D.; Ruscher, P. A.; Pandya, R. E.

    2001-12-01

    The American Meteorological Society held its 10th Symposium on Education in conjunction with the 82nd Annual Meeting in Albuquerque, New Mexico. The theme of 2001's symposium was enhancing public awareness of the atmospheric and oceanic environments. Thirty-six oral presentations and 38 poster presentations summarized a variety of educational programs or examined educational issues at both the precollege and university levels. There was a special session on increasing awareness of meteorology and oceanography through popular and informal educational activities, as well as a joint session with the 17th International Conference on Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology on using the World Wide Web to deliver information pertaining to the atmosphere, oceans, and coastal zone. Over 200 people representing a wide spectrum of the Society attended one or more of the sessions in this 2-day conference. The program for the 10th Symposium on Education can be viewed in the November 2000 issue of the Bulletin.

  6. Byzantine psychosomatic medicine (10th- 15th century).

    PubMed

    Eftychiadis, A C

    1999-01-01

    Original elements of psychosomatic medicine are examined by the most important Byzantine physicians and medico-philosophers of the 10th-15th centuries. These topics concern the psychosomatic unity of the human personality; psychosomatic disturbances, diseases and interactions; organic diseases that cause psychical disorders; psychical pathological reactions that result in somatic diseases; the psychology of the depths of the soul; the psychosomatic pathogenetic causes of psychiatric and neurological diseases and suicide; the influence of witchcraft on psychosomatic affections; and manic and demoniac patients. The psychosomatic treatment has a holistic preventive and curative character and includes sanitary and dietary measures, physiotherapy, curative bathing, strong purgation, pharmaceutical preparations proportional to the disease, religious disposition, and psychoanalysis and psychotherapy with dialogue and the contribution of the divine factor. Late Byzantine medical science contributed mainly to the progress of psychosomatic medicine and therapeutics. The woman physician Saint Hermione (1st-2nd cent.) is considered the protectress of psychosomatic medicine. PMID:11624574

  7. The Comparative Toxicogenomics Database's 10th year anniversary: update 2015

    PubMed Central

    Davis, Allan Peter; Grondin, Cynthia J.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; King, Benjamin L.; Wiegers, Thomas C.; Mattingly, Carolyn J.

    2015-01-01

    Ten years ago, the Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) was developed out of a need to formalize, harmonize and centralize the information on numerous genes and proteins responding to environmental toxic agents across diverse species. CTD's initial approach was to facilitate comparisons of nucleotide and protein sequences of toxicologically significant genes by curating these sequences and electronically annotating them with chemical terms from their associated references. Since then, however, CTD has vastly expanded its scope to robustly represent a triad of chemical–gene, chemical–disease and gene–disease interactions that are manually curated from the scientific literature by professional biocurators using controlled vocabularies, ontologies and structured notation. Today, CTD includes 24 million toxicogenomic connections relating chemicals/drugs, genes/proteins, diseases, taxa, phenotypes, Gene Ontology annotations, pathways and interaction modules. In this 10th year anniversary update, we outline the evolution of CTD, including our increased data content, new ‘Pathway View’ visualization tool, enhanced curation practices, pilot chemical–phenotype results and impending exposure data set. The prototype database originally described in our first report has transformed into a sophisticated resource used actively today to help scientists develop and test hypotheses about the etiologies of environmentally influenced diseases. PMID:25326323

  8. Supracostal Approach for PCNL: Is 10th and 11th Intercostal Space Safe According to Clavien Classification System?

    PubMed Central

    Kara, Cengiz; Değirmenci, Tansu; Kozacioglu, Zafer; Gunlusoy, Bulent; Koras, Omer; Minareci, Suleyman

    2014-01-01

    The purpose of this study was to evaluate the success and morbidity of percutaneous nephrolithotomy (PCNL) performed through the 10th and 11th intercostal spaces. Between March 2005 and February 2012, 612 patients underwent PCNL, 243 of whom had supracostal access. The interspace between the 11th and 12th ribs was used in 204 cases (group 1) and the interspace between the 10th and 11th ribs in 39 cases (group 2). PCNL was performed using the standard supracostal technique in all patients. The operative time, success rate, hospital stay, and complications according to the modified Clavien classification were compared between group 1 and group 2. The stone-free rate was 86.8% in group 1 and 84.6% in group 2 after one session of PCNL. Auxiliary procedures consisting of ureterorenoscopy (URS) and shock wave lithotripsy (SWL) were required in 5 and 7 patients, respectively, in group 1, and in 1 patient each in group 2. After the auxiliary procedures, stone-free rates increased to 92.6% in group 1 and 89.7% in group 2. A total of 74 (30.4%) complications were documented in the 2 groups according to the modified Clavien classification. Grade-I complications were recorded in 20 (8.2%), grade-II in 38 (15.6%), grade-IIIa in 13 (5.3%), and grade-IIIb in 2 (0.8%) patients; grade-IVa was recorded in 1 (0.4%) patient. There were no grade-IVb or grade-V complications. The overall complication rate was 30.9% in group 1 and 28.2% in group 2. Supracostal PCNL in selected cases is effective and safe, with acceptable complications. The modified Clavien system provides a standardized grading system for complications of PCNL. PMID:25437600

  9. PREFACE: ISEC 2005: The 10th International Superconductive Electronics Conference

    NASA Astrophysics Data System (ADS)

    Rogalla, Horst

    2006-05-01

    The 10th International Superconductive Electronics Conference took place in Noordwijkerhout in the Netherlands, 5-9 September 2005, not far from the birthplace of superconductivity in Leiden nearly 100 years ago. There have been many reasons to celebrate the 10th ISEC: not only was it the 20th anniversary, but also the achievements since the first conference in Tokyo in 1987 are tremendous. We have seen whole new groups of superconductive materials come into play, such as oxide superconductors with maximum Tc in excess of 100 K, carbon nanotubes, as well as the realization of new digital concepts from saturation logic to the ultra-fast RSFQ-logic. We have learned that superconductors not only show s-wave symmetries in the spatial arrangement of the order parameter, but also that d-wave dependence in oxide superconductors is now well accepted and can even be successfully applied to digital circuits. We are now used to operating SQUIDs in liquid nitrogen; fT sensitivity of SQUID magnetometers is not surprising anymore and can even be reached with oxide-superconductor based SQUIDs. Even frequency discriminating wide-band single photon detection with superconductive devices, and Josephson voltage standards with tens of thousands of junctions, nowadays belong to the daily life of advanced laboratories. ISEC has played a very important role in this development. The first conferences were held in 1987 and 1989 in Tokyo, and subsequently took place in Glasgow (UK), Boulder (USA), Nagoya (Japan), Berlin (Germany), Berkeley (USA), Osaka (Japan), Sydney (Australia), and in 2005 for the first time in the Netherlands. These conferences have provided platforms for the presentation of the research and development results of this community and for the vivid discussion of achievements and strategies for the further development of superconductive electronics. The 10th conference has played a very important role in this context. The results in laboratories show great potential and

  10. Benchmark characterization

    NASA Technical Reports Server (NTRS)

    Conte, Thomas M.; Hwu, Wen-Mei W.

    1991-01-01

    An abstract system of benchmark characteristics that makes it possible, in the beginning of the design stage, to design with benchmark performance in mind is presented. The benchmark characteristics for a set of commonly used benchmarks are then shown. The benchmark set used includes some benchmarks from the Systems Performance Evaluation Cooperative (SPEC). The SPEC programs are industry-standard applications that use specific inputs. Processor, memory-system, and operating-system characteristics are addressed.
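
    As a rough illustration of the idea of characterizing a benchmark abstractly rather than by a single timing number, the sketch below reduces a benchmark run to a few processor- and memory-oriented features that a designer could target early in the design stage. The trace format and feature names are invented for illustration; they are not the paper's methodology or the SPEC workloads.

        # Minimal sketch with an invented trace format; not the paper's actual characterization scheme.
        from collections import Counter

        def characterize(trace):
            """trace: list of (op_class, address_or_None) events from a hypothetical profiler."""
            mix = Counter(op for op, _ in trace)
            total = len(trace)
            touched = {addr for _, addr in trace if addr is not None}
            return {
                "alu_fraction": mix["alu"] / total,
                "memory_fraction": (mix["load"] + mix["store"]) / total,
                "branch_fraction": mix["branch"] / total,
                "working_set_words": len(touched),   # crude memory-system characteristic
            }

        toy_trace = [("alu", None), ("load", 0x100), ("alu", None), ("branch", None), ("store", 0x104)]
        print(characterize(toy_trace))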

  11. Interests of 5th through 10th Grade Students Regarding Environmental Protection Issues

    ERIC Educational Resources Information Center

    Erten, Sinan

    2015-01-01

    This study investigates the extent of interest among middle and high school students in environmental protection issues along with the sources of their interests and factors that impact their interests, namely people with whom they interact and courses that they take related to the environment, science and technology. In addition, it is confirmed…

  12. Attitudes towards Science Learning among 10th-Grade Students: A Qualitative Look

    ERIC Educational Resources Information Center

    Raved, Lena; Assaraf, Orit Ben Zvi

    2011-01-01

    The twenty-first century is characterized by multiple, frequent and remarkable scientific advancements, which have a major effect on the decisions that govern everyday life. It is therefore vital to give proper comprehensive scientific education to the population and provide it with the right tools for decision-making. This in turn requires that…

  13. Interests of 5th through 10th Grade Students toward Human Biology

    ERIC Educational Resources Information Center

    Erten, Sinan

    2008-01-01

    This study investigated the middle and high school students' interests towards the subjects of human biology, specifically, "Human Health and Nutrition" and "Human Body and Organs." The study also investigated sources of their interests and factors that impact their interests, namely people that they interact and courses that they take about…

  14. Food Services and Hospitality for 10th, 11th, and 12th Grades. Course Outline.

    ERIC Educational Resources Information Center

    Bucks County Technical School, Fairless Hills, PA.

    The outline describes the food services and hospitality course offered to senior high school students at the Bucks County Technical School. Specifically, the course seeks to provide students with a workable knowledge of food services and foster in them a sense of personal pride for quality workmanship. In addition to a statement of the philosophy…

  15. School Composition and Context Factors that Moderate and Predict 10th-Grade Science Proficiency

    ERIC Educational Resources Information Center

    Hogrebe, Mark C.; Tate, William F., IV

    2010-01-01

    Background: Performance in high school science is a critical indicator of science literacy and regional competitiveness. Factors that influence science proficiency have been studied using national databases, but these do not answer all questions about variable relationships at the state level. School context factors and opportunities to learn…

  16. Does STES-Oriented Science Education Promote 10th-Grade Students' Decision-Making Capability?

    ERIC Educational Resources Information Center

    Levy Nahum, Tami; Ben-Chaim, David; Azaiza, Ibtesam; Herskovitz, Orit; Zoller, Uri

    2010-01-01

    Today's society is continuously coping with sustainability-related complex issues in the Science-Technology-Environment-Society (STES) interfaces. In those contexts, the need and relevance of the development of students' higher-order cognitive skills (HOCS) such as question-asking, critical-thinking, problem-solving and decision-making…

  17. Using Diagrams versus Text for Spaced Restudy: Effects on Learning in 10th Grade Biology Classes

    ERIC Educational Resources Information Center

    Bergey, Bradley W.; Cromley, Jennifer G.; Kirchgessner, Mandy L.; Newcombe, Nora S.

    2015-01-01

    Background and Aim: Spaced restudy has been typically tested with written learning materials, but restudy with visual representations in actual classrooms is under-researched. We compared the effects of two spaced restudy interventions: A Diagram-Based Restudy (DBR) warm-up condition and a business-as-usual Text-Based Restudy (TBR) warm-up…

  18. General Shop Competencies in Vocational Agriculture for 9th and 10th Grade Classes.

    ERIC Educational Resources Information Center

    Novotny, Ronald; And Others

    The document presents unit plans which offer lists of experiences and competencies to be learned for general shop occupations in vocational agriculture. The units include: (1) arc welding, (2) oxy-acetylene welding, (3) flat concrete, (4) concrete block, (5) lumber patterns and wood building materials, (6) metal fasteners, (7) wood adhesives, (8)…

  19. Progression in Complexity: Contextualizing Sustainable Marine Resources Management in a 10th Grade Classroom

    ERIC Educational Resources Information Center

    Bravo-Torija, Beatriz; Jimenez-Aleixandre, Maria-Pilar

    2012-01-01

    Sustainable management of marine resources raises great challenges. Working with this socio-scientific issue in the classroom requires students to apply complex models about energy flow and trophic pyramids in order to understand that food chains represent transfer of energy, to construct meanings for sustainable resources management through…

  20. 1. Historic American Buildings Survey Joseph Hill, Photographer August 10th, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Historic American Buildings Survey Joseph Hill, Photographer August 10th, 1936 (Copied from small photo taken by survey members) OLD APARTMENT HOUSE - Jansonist Colony, Old Apartment House, Main Street, Bishop Hill, Henry County, IL

  1. 16. NORTHEAST CORNER VIEW OF 10TH AND 11TH FLOOR WINDOWS. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. NORTHEAST CORNER VIEW OF 10TH AND 11TH FLOOR WINDOWS. CORNER SHOWS THE DIAGONALLY FLUTED SPIRAL DESIGN OF THE RELIEF COLUMN. - Pacific Telephone & Telegraph Company Building, 1519 Franklin Street, Oakland, Alameda County, CA

  2. How Many Letters Should Preschoolers in Public Programs Know? The Diagnostic Efficiency of Various Preschool Letter-Naming Benchmarks for Predicting First-Grade Literacy Achievement

    ERIC Educational Resources Information Center

    Piasta, Shayne B.; Petscher, Yaacov; Justice, Laura M.

    2012-01-01

    Review of current federal and state standards indicates little consensus or empirical justification regarding appropriate goals, often referred to as benchmarks, for preschool letter-name learning. The present study investigated the diagnostic efficiency of various letter-naming benchmarks using a longitudinal database of 371 children who attended…

  3. Predictors of Student Performance in Grades 7 and 8 Mathematics: The Correlation between Benchmark Tests and Performance on the Texas Assessment of Knowledge and Skills (TAKS) Math Tests

    ERIC Educational Resources Information Center

    Allen, Timothy Dale

    2012-01-01

    School districts throughout Texas have used archived Texas Assessment of Knowledge and Skills (TAKS) tests as a benchmark to predict student performance on future TAKS tests without substantial quantitative evidence that these types of benchmark tests are valid predictors of student performance. The purpose of this quantitative correlational study…
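
    The validity question the study raises can be framed very simply: how strongly do district benchmark scores and TAKS scores move together? A minimal sketch is shown below, using hypothetical score lists rather than any district's data; a Pearson correlation near 1 would support using the benchmark as a predictor.

        # Minimal sketch with hypothetical scores; requires Python 3.10+ for statistics.correlation.
        from statistics import correlation

        benchmark_scores = [62, 71, 55, 80, 90, 67, 74, 58]   # hypothetical district benchmark results
        taks_scores      = [65, 70, 52, 84, 88, 70, 77, 61]   # hypothetical TAKS math scale scores

        r = correlation(benchmark_scores, taks_scores)
        print(f"Pearson r = {r:.2f}")   # values near 1 would support the benchmark as a predictor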

  4. Alberta K-12 ESL Proficiency Benchmarks

    ERIC Educational Resources Information Center

    Salmon, Kathy; Ettrich, Mike

    2012-01-01

    The Alberta K-12 ESL Proficiency Benchmarks are organized by division: kindergarten, grades 1-3, grades 4-6, grades 7-9, and grades 10-12. They are descriptors of language proficiency in listening, speaking, reading, and writing. The descriptors are arranged in a continuum of seven language competences across five proficiency levels. Several…

  5. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  6. State Education & Environment Roundtable (SEER) Seminar (10th, Annapolis, Maryland, December 3-7, 2000).

    ERIC Educational Resources Information Center

    Lieberman, Gerald A.; Hoody, Linda L.

    This document reports on the 10th seminar of the State Education and Environment Roundtable (SEER). It consists of brief overviews of the daily discussions and presentations that were made at the seminar. Topics discussed include measuring success through student assessment, the Bay Schools Project (BSP), and a co-sponsored educational forum with…

  7. Mental Retardation: Definition, Classification, and Systems of Supports. 10th Edition.

    ERIC Educational Resources Information Center

    Luckasson, Ruth; Borthwick-Duffy, Sharon; Buntinx, Wil H. E.; Coulter, David L.; Craig, Ellis M.; Reeve, Alya; Schalock, Robert L.; Snell, Martha E.; Spitalnik, Deborah M.; Spreat, Scott; Tasse, Marc J.

    This manual, the 10th edition of a regularly published definition and classification work on mental retardation, presents five key assumptions upon which the definition of mental retardation is based and a theoretical model of five essential dimensions that explain mental retardation and how to use the companion system. These dimensions include…

  8. County Data Book, 2000: Kentucky Kids Count. 10th Annual Edition.

    ERIC Educational Resources Information Center

    Albright, Danielle; Hall, Douglas; Mellick, Donna; Miller, Debra; Town, Jackie

    This 10th annual Kids Count data book reports on trends in the well-being of Kentucky's children. The statistical portrait is based on indicators in the areas of well being, child risk factors, and demography. The indicators are as follows: (1) healthy births, including birth weights and prenatal care; (2) maternal risk characteristics, including…

  9. Making a difference: education at the 10th International Conference on Zebrafish Development and Genetics.

    PubMed

    Hutson, Lara D; Liang, Jennifer O; Pickart, Michael A; Pierret, Chris; Tomasciewicz, Henry G

    2012-12-01

    Scientists, educators, and students met at the 10th International Conference on Zebrafish Development and Genetics during the 2-day Education Workshop, chaired by Dr. Jennifer Liang and supported in part by the Genetics Society of America. The goal of the workshop was to share expertise, to discuss the challenges faced when using zebrafish in the classroom, and to articulate goals for expanding the impact of zebrafish in education.

  10. Making a Difference: Education at the 10th International Conference on Zebrafish Development and Genetics

    PubMed Central

    Liang, Jennifer O.; Pickart, Michael A.; Pierret, Chris; Tomasciewicz, Henry G.

    2012-01-01

    Scientists, educators, and students met at the 10th International Conference on Zebrafish Development and Genetics during the 2-day Education Workshop, chaired by Dr. Jennifer Liang and supported in part by the Genetics Society of America. The goal of the workshop was to share expertise, to discuss the challenges faced when using zebrafish in the classroom, and to articulate goals for expanding the impact of zebrafish in education. PMID:23244686

  11. From the corner of N. 10th St. and W. O'Neill ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    From the corner of N. 10th St. and W. O'Neill Ave. Looking west. Housing # 157-162 are on the right, building 156 is straight ahead, and buildings 153, 152, 116, and 115 are to the left. The golf course is directly west of these buildings. - Fitzsimons General Hospital, Bounded by East Colfax to south, Peoria Street to west, Denver City/County & Adams County Line to north, & U.S. Route 255 to east, Aurora, Adams County, CO

  12. [Infanticide by throwing the child from the 10th floor of a building].

    PubMed

    Schröder, Ann Sophie; Görndt, Jennifer; Püschel, Klaus

    2009-01-01

    Childbirth after denial or concealment of pregnancy has an increased risk of mortality for both mother and child. Interdisciplinary cooperation between professional groups is needed to explore the psychological and criminological aspects of infanticide. The case of a primipara mother, who threw her mature and viable newborn from the 10th floor of a high-rise building shortly after a concealed pregnancy, is reported. Forensic medical issues, as well as the characteristics of the offence and the perpetrator, are described.

  13. 14. CLOSEUP VIEW OF THE 10TH AND 11TH FLOOR WINDOWS. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. CLOSE-UP VIEW OF THE 10TH AND 11TH FLOOR WINDOWS. WINDOWS HAVE WHITE TERRA COTTA SILLS, HEADS AND MULLIONS. ARCHES ARE OF TERRA COTTA INCLUDING ORNAMENTATION ABOVE THE 11TH FLOOR WINDOWS. CIRCULAR ORNAMENTATIONS BETWEEN ARCHES ARE TERRA COTTA PAINTED IN BRONZE COLOR. LOUVERS ON THE WINDOWS ARE NOT PART OF THE ORIGINAL DESIGN. THIS IS THE FRONT ELEVATION. - Pacific Telephone & Telegraph Company Building, 1519 Franklin Street, Oakland, Alameda County, CA

  14. Final Report 10th Conference on the Intersections of Particle and Nuclear Physics

    SciTech Connect

    Marshak, Marvin L.

    2013-11-03

    The 10th Conference on the Intersections of Particle and Nuclear Physics was held in La Jolla, California, on May 26 to May 31, 2009. The Conference Proceedings are published by the American Institute of Physics in Volume 1182 of the AIP Conference Proceedings (ISBN: 978-0-7354-0723-7). The Proceedings include papers from each of the Conference Presenters and a detailed schedule of talks at the Conference. The Table of Contents of the Conference Proceedings is available at http://scitation.aip.org/content/aip/proceeding/aipcp/1182. Support by the U.S. Department of Energy and by DOE Laboratories was essential to the success of the Conference.

  15. Characterisation of decorations on Iranian (10th-13th century) lustreware

    NASA Astrophysics Data System (ADS)

    Borgia, I.; Brunetti, B.; Giulivi, A.; Sgamellotti, A.; Shokouhi, F.; Oliaiy, P.; Rahighi, J.; Lamehi-Rachti, M.; Mellini, M.; Viti, C.

    It has been recently shown that lustre decoration of Medieval and Renaissance pottery consists of silver and copper nanoparticles, dispersed within the glassy matrix of the ceramic glaze. Lustre surfaces show peculiar optical effects, such as metallic reflection and iridescence. Here we report the findings of a study on lustred glazes of several shards belonging to Iranian pottery of the 10th and 13th centuries, decorated on both sides. Two different glazes, depending on the side of the sample, have been identified. Different lustre chromatic effects are characterised by the relative presence of silver- and copper-metal nanoparticles dispersed in the glassy matrix.

  16. From the corner of E. Mccloskey Ave. and N. 10th ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    From the corner of E. Mccloskey Ave. and N. 10th St., looking west with building 135 (gas station) on the left. Beyond it is building 119 and to the right of 119 is the gable end of the north side of 120. Beyond and perpendicular to building 120 are 118 and 117. - Fitzsimons General Hospital, Bounded by East Colfax to south, Peoria Street to west, Denver City/County & Adams County Line to north, & U.S. Route 255 to east, Aurora, Adams County, CO

  17. EDITORIAL: The 10th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2011)

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Woo

    2012-05-01

    Measurement and instrumentation have long played an important role in production engineering, through supporting both the traditional field of manufacturing and the new field of micro/nanotechnology. Papers published in this special feature were selected and updated from those presented at The 10th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2011) held at KAIST, Daejeon, South Korea, on 29 June-2 July 2011. ISMTII 2011 was organized by ICMI (The International Committee on Measurements and Instrumentation), Korean Society for Precision Engineering (KSPE), Japan Society for Precision Engineering (JSPE), Chinese Society for Measurement (CSM) and KAIST. The Symposium was also supported by the Korea BK21 Valufacture Institute of Mechanical Engineering at KAIST. A total of 225 papers, including four keynote papers, were presented at ISMTII 2011, covering a wide range of topics, including micro/nanometrology, precision measurement, online & in-process measurement, surface metrology, optical metrology & image processing, biomeasurement, sensor technology, intelligent measurement & instrumentation, uncertainty, traceability & calibration, and signal processing algorithms. The organizing members recommended publication of updated versions of some of the best ISMTII 2011 papers in this special feature of Measurement Science and Technology. As guest editor, I believe that this special feature presents the newest information on advances in measurement technology and intelligent instruments from basic research to applied systems for production engineering. I would like to thank all the authors for their great contributions to this special feature and the referees for their careful reviews of the papers. I would also like to express our thanks and appreciation to the publishing staff of MST for their dedicated efforts that have made this special feature possible.

  18. Under Construction: Benchmark Assessments and Common Core Math Implementation in Grades K-8. Formative Evaluation Cycle Report for the Math in Common Initiative, Volume 1

    ERIC Educational Resources Information Center

    Flaherty, John, Jr.; Sobolew-Shubin, Alexandria; Heredia, Alberto; Chen-Gaddini, Min; Klarin, Becca; Finkelstein, Neal D.

    2014-01-01

    Math in Common® (MiC) is a five-year initiative that supports a formal network of 10 California school districts as they implement the Common Core State Standards in mathematics (CCSS-M) across grades K-8. As the MiC initiative moves into its second year, one of the central activities that each of the districts is undergoing to support CCSS…

  19. Maternal Genetic Ancestry and Legacy of 10th Century AD Hungarians.

    PubMed

    Csősz, Aranka; Szécsényi-Nagy, Anna; Csákyová, Veronika; Langó, Péter; Bódis, Viktória; Köhler, Kitti; Tömöry, Gyöngyvér; Nagy, Melinda; Mende, Balázs Gusztáv

    2016-01-01

    The ancient Hungarians originated from the Ural region in today's central Russia and migrated across the Eastern European steppe, according to historical sources. The Hungarians conquered the Carpathian Basin in 895-907 AD and admixed with the indigenous communities. Here we present mitochondrial DNA results from three datasets: one from the Avar period (7th-9th centuries) of the Carpathian Basin (n = 31); one from the Hungarian conquest period (n = 76); and a completion of the published 10th-12th century Hungarian-Slavic contact zone dataset by four samples. We compare these mitochondrial DNA hypervariable segment sequences and haplogroup results with published ancient and modern Eurasian data. Whereas the analyzed Avars represent a certain group of Avar society that shows East and South European genetic characteristics, the Hungarian conquerors' maternal gene pool is a mixture of West Eurasian and Central and North Eurasian elements. Comprehensively analyzing the results, both the linguistically recorded Finno-Ugric roots and the historically documented Turkic and Central Asian influxes had possible genetic imprints in the conquerors' genetic composition. Our data point to a complex series of historical and population genetic events before the formation of the medieval population of the Carpathian Basin, and to maternal genetic continuity between 10th-12th century and modern Hungarians. PMID:27633963

  20. Drug testing at the 10th Asian Games and 24th Seoul Olympic Games.

    PubMed

    Park, J; Park, S; Lho, D; Choo, H P; Chung, B; Yoon, C; Min, H; Choi, M J

    1990-01-01

    Drug testing (doping test) procedures in the 1986 10th Asian Olympic Games and 1988 24th Seoul Olympic Games are reported. The International Olympic Committee Medical Commission (IOC-MC) conducted its first doping tests at the 1968 Olympics in Grenoble. With the guidance of the International Olympic Committee (IOC), the Olympic Council of Asia (OCA) introduced doping tests at the 1986 10th Asian Olympic Games in Seoul, Korea, September 21st to October 5th, 1986. 585 samples were tested at the Doping Control Center, Korea Advanced Institute of Science and Technology (DCC/KAIST), for stimulants, narcotics, anabolic steroids, and beta-blockers by gas chromatography/mass spectrometry, high pressure liquid chromatography, and fluorescence polarization immunoassay. These tests covered about 100 different drugs and another 400 as metabolites in addition to pharmacologically related substances. For the Seoul Olympic Games from September 17 to October 2, 1988, the IOC-MC with the DCC/KAIST conducted doping tests on 1601 samples for stimulants, narcotics, beta-blockers, diuretics, and anabolic steroids using GC, HPLC, GC/MSD, GC/MS, LC/MS, and TDx.

  1. Exploring the Relationship between Virtual Learning Environment Preference, Use, and Learning Outcomes in 10th Grade Earth Science Students

    ERIC Educational Resources Information Center

    Lin, Ming-Chao; Tutwiler, M. Shane; Chang, Chun-Yen

    2011-01-01

    This study investigated the relationship between the use of a three-dimensional Virtual Reality Learning Environment for Field Trip (3DVLE[subscript (ft)]) system and the achievement levels of senior high school earth science students. The 3DVLE[subscript (ft)] system was presented in two separate formats: Teacher Demonstrated Based and Student…

  2. Successes with Reversing the Negative Student Attitudes Developed in Typical Biology Classes for 8th and 10th Grade Students

    ERIC Educational Resources Information Center

    Hacieminoglu, Esme; Ali, Mohamed Moustafa; Oztas, Fulya; Yager, Robert E.

    2016-01-01

    The purpose of this study is to compare changes in students' attitudes about their study of biology in classes taught by five biology teachers who had experienced an Iowa Chautauqua workshop and by two non-Chautauqua teachers who had no experience with any professional development program. The results indicated that there are significant…

  3. The Insertion of Local Wisdom into Instructional Materials of Bahasa Indonesia for 10th Grade Students in Senior High School

    ERIC Educational Resources Information Center

    Anggraini, Purwati; Kusniarti, Tuti

    2015-01-01

    This study aimed at investigating Bahasa Indonesia textbooks with regard to local wisdom issues. The preliminary study was utilized as the basis for developing instructional materials of Bahasa Indonesia that are rich in character. Bahasa Indonesia instructional materials containing local wisdom not only equip students with broad…

  4. A Cross-Analysis of the Mathematics Teacher's Activity: An Example in a French 10th-Grade Class

    ERIC Educational Resources Information Center

    Robert, Aline; Rogalski, Janine

    2005-01-01

    The purpose of this paper is to contribute to the debate about how to tackle the issue of "the teacher in the teaching/learning process", and to propose a methodology for analysing the teacher's activity in the classroom, based on concepts used in the fields of the didactics of mathematics as well as in cognitive ergonomics. This methodology…

  5. A Learning Progression for Deepening Students' Understandings of Modern Genetics across the 5th-10th Grades

    ERIC Educational Resources Information Center

    Duncan, Ravit Golan; Rogat, Aaron D.; Yarden, Anat

    2009-01-01

    Over the past several decades, there has been a tremendous growth in our understanding of genetic phenomena and the intricate and complicated mechanisms that mediate genetic effects. Given the complexity of content in modern genetics and the inadequacy of current instructional methods and materials it seems that a more coherent and extensive…

  6. Space Commerce 1994 Forum: The 10th National Space Symposium. Proceedings report

    NASA Technical Reports Server (NTRS)

    Lipskin, Beth Ann (Editor); Patterson, Sara (Editor); Aragon, Larry (Editor); Brescia, David A. (Editor); Flannery, Jack (Editor); Mossey, Roberty (Editor); Regan, Christopher (Editor); Steeby, Kurt (Editor); Suhr, Stacy (Editor); Zimkas, Chuck (Editor)

    1994-01-01

    The theme of the 10th National Space Symposium was 'New Windows of Opportunity'. These proceedings cover the following: Business Trends in High Tech Commercialization; How to Succeed in Space Technology Business -- Making Dollars and Sense; Obstacles and Opportunities to Success in Technology Commercialization NASA's Commercial Technology Mission -- a New Way of Doing Business: Policy and Practices; Field Center Practices; Practices in Action -- A New Way: Implementation and Business Opportunities; Space Commerce Review; Windows of Opportunity; the International Space Station; Space Support Forum; Spacelift Update; Competitive Launch Capabilities; Supporting Life on Planet Earth; National Security Space Issues; NASA in the Balance; Earth and Space Observations -- Did We Have Cousins on Mars?; NASA: A New Vision for Science; and Space Technology Hall of Fame.

  7. Tuskegee Bioethics Center 10th anniversary presentation: "Commemorating 10 years: ethical perspectives on origin and destiny".

    PubMed

    Prograis, Lawrence J

    2010-08-01

    More than 70 years have passed since the beginning of the Public Health Service syphilis study in Tuskegee, Alabama, and it has been over a decade since President Bill Clinton formally apologized for it and held a ceremony for the Tuskegee study participants. The official launching of the Tuskegee University National Center for Bioethics in Research and Health Care took place two years after President Clinton's apology. How might we fittingly discuss the Center's 10th Anniversary and the topic 'Commemorating 10 Years: Ethical Perspectives on Origin and Destiny'? Over a decade ago, a series of writers, many of them African Americans, wrote a text entitled 'African-American Perspectives on Biomedical Ethics'; their text was partly responsible for a prolonged reflection by others to produce a subsequent work, 'African American Bioethics: Culture, Race and Identity'. What is the relationship between the discipline of bioethics and African American culture? This and related questions are explored in this commentary.

  8. 10th World IHEA and ECHE Joint Congress: health economics in the age of longevity.

    PubMed

    Jakovljevic, Mihajlo B; Getzen, Thomas E; Torbica, Aleksandra; Anegawa, Tomofumi

    2014-12-01

    The 10th consecutive World Health Economics conference was organized jointly by the International Health Economics Association and the European Conference on Health Economics and took place at Trinity College, Dublin, Ireland, in July 2014. It attracted broad participation from the global professional community devoted to health economics teaching, research and policy applications. It provided a forum for lively discussion of hot contemporary issues such as health expenditure projections, reimbursement regulations, health technology assessment, universal insurance coverage, demand and supply of hospital services, prosperity diseases, population aging and many others. The high-profile debate fostered by this meeting is likely to inspire further methodological advances worldwide and the spread of evidence-based policy practice from the OECD towards emerging markets.

  9. Collaborating to Move Research Forward: Proceedings of the 10th Annual Bladder Cancer Think Tank

    PubMed Central

    Kamat, Ashish M.; Agarwal, Piyush; Bivalacqua, Trinity; Chisolm, Stephanie; Daneshmand, Sia; Doroshow, James H.; Efstathiou, Jason A.; Galsky, Matthew; Iyer, Gopa; Kassouf, Wassim; Shah, Jay; Taylor, John; Williams, Stephen B.; Quale, Diane Zipursky; Rosenberg, Jonathan E.

    2016-01-01

    The 10th Annual Bladder Cancer Think Tank was hosted by the Bladder Cancer Advocacy Network and brought together a multidisciplinary group of clinicians, researchers, and industry representatives to advance bladder cancer research efforts. Think Tank expert panels, group discussions, and networking opportunities helped generate ideas and strengthen collaborations between researchers and physicians across disciplines and between institutions. Interactive panel discussions addressed a variety of timely issues: 1) data sharing, privacy and social media; 2) improving patient navigation through therapy; 3) promising developments in immunotherapy; and 4) moving bladder cancer research from bench to bedside. Lastly, early career researchers presented their bladder cancer studies and had opportunities to network with leading experts. PMID:27376139

  10. Sequenced Benchmarks for K-8 Science.

    ERIC Educational Resources Information Center

    Kendall, John S.; DeFrees, Keri L.; Richardson, Amy

    This document describes science benchmarks for grades K-8 in Earth and Space Science, Life Science, and Physical Science. Each subject area is divided into topics followed by a short content description and grade level information. Source documents for this paper included science content guides from California, Ohio, South Carolina, and South…

  11. Report on the 10th International Conference of the Asian Clinical Oncology Society (ACOS 2012).

    PubMed

    Kim, Yeul Hong; Yang, Han-Kwang; Kim, Tae Won; Lee, Jung Shin; Seong, Jinsil; Lee, Woo Yong; Ahn, Yong Chan; Lim, Ho Yeong; Won, Jong-Ho; Park, Kyong Hwa; Cho, Kyung Sam

    2013-04-01

    The 10th International Conference of the Asian Clinical Oncology Society (ACOS 2012), held in conjunction with the 38th Annual Meeting of the Korean Cancer Association, took place on June 13 to 15 (3 days), 2012, at the COEX Convention and Exhibition Center in Seoul, Korea. ACOS has a 20-year history starting from the first conference in Osaka, Japan, which was chaired by Prof. Tetsuo Taguchi, and the ACOS conferences have since been conducted in Asian countries every 2 years. Under the theme of "Work Together to Make a Difference for Cancer Therapy in Asia", the 10th ACOS was prepared to discuss various subjects through a high-quality academic program, exhibition, and social events. The ACOS 2012 Committee was composed of the ACOS Organizing Committee, Honorary Advisors, Local Advisors, and the ACOS 2012 Organizing Committee. The comprehensive academic program had a total of 92 sessions (3 Plenary Lectures, 1 Award Lecture, 1 Memorial Lecture, 9 Special Lectures, 15 Symposia, 1 Debate & Summary Session, 1 Case Conference, 19 Educational Lectures, 1 Research & Development Session, 18 Satellite Symposia, 9 Meet the Professors, 14 Oral Presentations), and a total of 292 presentations were delivered throughout the entire program. Among the Free Papers, 462 research papers (110 oral presentations and 352 poster presentations) were selected to be presented. This conference was the largest of all ACOS conferences in scale, with around 1,500 participants from 30 countries. Furthermore, despite strict new financial policies and requirements governing fundraising alongside global economic stagnation, a total of 14 companies participated as sponsors and an additional 35 companies purchased 76 exhibition booths. Lastly, the conference social events provided attendees with a variety of opportunities to experience and enjoy Korea's rich culture and traditions during the Opening Ceremony, Welcome Reception, Invitee Dinner, Banquet, and Closing Ceremony. Overall, ACOS 2012 reinforced and promoted

  12. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  13. Updating the ACT College Readiness Benchmarks. ACT Research Report Series 2013 (6)

    ERIC Educational Resources Information Center

    Allen, Jeff

    2013-01-01

    The ACT College Readiness Benchmarks are the ACT® College Readiness Assessment scores associated with a 50% chance of earning a B or higher grade in typical first-year credit-bearing college courses. The Benchmarks also correspond to an approximate 75% chance of earning a C or higher grade in these courses. There are four Benchmarks, corresponding…
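
    As a rough sketch of the logic behind such cut scores, the example below assumes a hypothetical logistic fit of the probability of earning a B or higher as a function of test score and reads the Benchmark off as the score at which that probability reaches 50%. The coefficients are invented for illustration and are not ACT's estimates or estimation procedure.

        # Minimal sketch with hypothetical logistic coefficients; not ACT's actual model.
        import math

        INTERCEPT, SLOPE = -6.0, 0.27   # hypothetical fit of P(course grade >= B | test score)

        def p_b_or_higher(score):
            return 1.0 / (1.0 + math.exp(-(INTERCEPT + SLOPE * score)))

        def benchmark_score(target_prob=0.5):
            # Invert the logistic: the score at which the predicted probability equals target_prob.
            return (math.log(target_prob / (1.0 - target_prob)) - INTERCEPT) / SLOPE

        cut = benchmark_score(0.5)
        print(f"benchmark ~= {cut:.1f}; P(B or higher) at that score = {p_b_or_higher(cut):.2f}")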

  14. Benchmarking and Performance Measurement.

    ERIC Educational Resources Information Center

    Town, J. Stephen

    This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…

  15. Lawrence Livermore plutonium button critical experiment benchmark

    SciTech Connect

    Trumble, E.F.; Justice, J.B.; Frost, R.L.

    1994-12-31

    The end of the Cold War and the subsequent weapons reductions have led to an increased need for the safe storage of large amounts of highly enriched plutonium. In support of code validation required to address this need, a set of critical experiments involving arrays of weapons-grade plutonium metal that were performed at the Lawrence Livermore National Laboratory (LLNL) in the late 1960s has been revisited. Although these experiments are well documented, discrepancies and omissions have been found in the earlier reports. Many of these have been resolved in the current work, and these data have been compiled into benchmark descriptions. In addition, a computational verification has been performed on the benchmarks using multiple computer codes. These benchmark descriptions are also being made available to the US Department of Energy (DOE)-sponsored Nuclear Criticality Safety Benchmark Evaluation Working Group for dissemination in the DOE Handbook on Evaluated Criticality Safety Benchmark Experiments.

  16. Report: Combustion Byproducts and Their Health Effects: Summary of the 10th International Congress

    PubMed Central

    Dellinger, Barry; D'Alessio, Antonio; D'Anna, Andrea; Ciajolo, Anna; Gullett, Brian; Henry, Heather; Keener, Mel; Lighty, JoAnn; Lomnicki, Slawomir; Lucas, Donald; Oberdörster, Günter; Pitea, Demetrio; Suk, William; Sarofim, Adel; Smith, Kirk R.; Stoeger, Tobias; Tolbert, Paige; Wyzga, Ron; Zimmermann, Ralf

    2008-01-01

    The 10th International Congress on Combustion Byproducts and their Health Effects was held in Ischia, Italy, from June 17–20, 2007. It was sponsored by the US NIEHS, NSF, Coalition for Responsible Waste Incineration (CRWI), and Electric Power Research Institute (EPRI). The congress focused on: the origin, characterization, and health impacts of combustion-generated fine and ultrafine particles; emissions of mercury and dioxins; and the development/application of novel analytical/diagnostic tools. The consensus of the discussion was that particle-associated organics, metals, and persistent free radicals (PFRs) produced by combustion sources are the likely source of the observed health impacts of airborne PM rather than simple physical irritation of the particles. Ultrafine particle-induced oxidative stress is a likely progenitor of the observed health impacts, but important biological and chemical details and possible catalytic cycles remain unresolved. Other key conclusions were: (1) In urban settings, 70% of airborne fine particles are a result of combustion emissions and 50% are due to primary emissions from combustion sources. (2) In addition to soot, combustion produces one, possibly two, classes of nanoparticles with mean diameters of ~10 nm and ~1 nm. (3) The most common metrics used to describe particle toxicity, viz. surface area, sulfate concentration, total carbon, and organic carbon, cannot fully explain observed health impacts. (4) Metals contained in combustion-generated ultrafine and fine particles mediate formation of toxic air pollutants such as PCDD/F and PFRs. (5) The combination of metal-containing nanoparticles, organic carbon compounds, and PFRs can lead to a cycle generating oxidative stress in exposed organisms. PMID:22476005

  17. 10th annual meeting of the Safety Pharmacology Society: an overview.

    PubMed

    Cavero, Icilio

    2011-03-01

    The 10th annual meeting of the Safety Pharmacology (SP) Society covered numerous topics of educational and practical research interest. Biopolymers - the theme of the keynote address - were presented as essential components of medical devices, diagnostic tools, biosensors, human tissue engineering and pharmaceutical formulations for optimized drug delivery. Toxicology and SP investigators - the topic of the Distinguished Service Award Lecture - were encouraged to collaborate in the development of SP technologies and protocols applicable to toxicology studies. Pharmaceutical companies, originally organizations bearing all risks for developing their portfolios, are increasingly moving towards fully integrated networks which outsource core activities (including SP studies) to large contract research organizations. Future nonclinical data are now expected to be of such high quality and predictability power that they may obviate the need for certain expensive and time-consuming clinical investigations. In this context, SP is called upon to extend its risk assessment purview to areas which currently are not systematically covered, such as drug-induced QRS interval prolongation, negative emotions and feelings (e.g., depression), and minor chronic cardiovascular and metabolic changes (e.g., as produced by drugs for type 2 diabetes) which can be responsible for delayed morbidity and mortality. The recently approved ICH S9 guidance relaxes the traditional regulatory SP package in order to accelerate the clinical access to anticancer drugs for patients with advanced malignancies. The novel FDA 'Animal Rule' guidance proposes that for clinical candidates with well-understood toxicities, marketing approval may be granted exclusively on efficacy data generated in animal studies as human clinical investigations for these types of drugs are either unfeasible or unethical. In conclusion, the core messages of this meeting are that SP should consistently operate according to the 'fit

  19. Risk Communication and Public Education in Edmonton, Alberta, Canada on the 10th Anniversary of the "Black Friday" Tornado

    ERIC Educational Resources Information Center

    Blanchard-Boehm, R. Denise; Cook, M. Jeffrey

    2004-01-01

    In July 1997, on the 10th anniversary of the great "Black Friday" Tornado, city officials of Edmonton, the print and broadcast media, agencies dealing in emergency management, and the national weather organisation recounted stories of the 1987 F5 tornado that struck Edmonton on a holiday weekend. The information campaign also presented…

  20. 3 CFR 8938 - Proclamation 8938 of March 1, 2013. 10th Anniversary of the United States Department of Homeland...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Proclamation 8938 of March 1, 2013 (Presidential Documents, Proclamations; Proc. 8938): 10th Anniversary of the United States Department of Homeland Security. "…BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the…"

  1. Proceedings of the International Conference on Mobile Learning 2014. (10th, Madrid, Spain, February 28-March 2, 2014)

    ERIC Educational Resources Information Center

    Sánchez, Inmaculada Arnedillo, Ed.; Isaías, Pedro, Ed.

    2014-01-01

    These proceedings contain the papers of the 10th International Conference on Mobile Learning 2014, which was organised by the International Association for Development of the Information Society, in Madrid, Spain, February 28-March 2, 2014. The Mobile Learning 2014 International Conference seeks to provide a forum for the presentation and…

  2. Students' Transition Experience in the 10th Year of Schooling: Perceptions That Contribute to Improving the Quality of Schools

    ERIC Educational Resources Information Center

    Torres, Ana Cristina; Mouraz, Ana

    2015-01-01

    The study followed students in their 10th year of schooling who entered a new secondary education school, in order to examine their perceptions of their previous schools' work and of its relationship with the difficulties they experienced in the transition. The analysis of 155 completed questionnaires of previous students of nine basic…

  3. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
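
    To illustrate the "pencil and paper" style of specification, the sketch below is written in the spirit of the NPB Embarrassingly Parallel (EP) kernel: generate pairs of uniform deviates, convert accepted pairs into Gaussian deviates, and tally them into annuli. It is an illustrative sketch only, not the official NPB specification; the mandated linear congruential generator, problem classes, and verification sums are omitted.

```python
import numpy as np

def ep_style_kernel(n_pairs: int, seed: int = 42):
    """EP-style kernel sketch: count Gaussian deviates per square annulus.

    Not the official NPB EP benchmark: it uses NumPy's default generator
    instead of the mandated linear congruential scheme and skips the
    verification sums.
    """
    rng = np.random.default_rng(seed)
    x = 2.0 * rng.random(n_pairs) - 1.0              # uniform deviates in (-1, 1)
    y = 2.0 * rng.random(n_pairs) - 1.0
    t = x * x + y * y
    accept = (t > 0.0) & (t <= 1.0)                  # Marsaglia polar acceptance
    factor = np.sqrt(-2.0 * np.log(t[accept]) / t[accept])
    gx, gy = x[accept] * factor, y[accept] * factor  # independent Gaussian deviates
    annulus = np.maximum(np.abs(gx), np.abs(gy)).astype(int)
    counts = np.bincount(annulus, minlength=10)[:10] # tally per annulus
    return gx.sum(), gy.sum(), counts

if __name__ == "__main__":
    print(ep_style_kernel(1_000_000))
```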

  4. [A modified retroperitoneal approach to the kidney in patients with a highly deformed thorax: obtaining a wide operative field through subperiosteal resection of the 10th, 11th and 12th ribs].

    PubMed

    Satoh, Yuji; Kanou, Takehiro; Takagi, Norito; Tokuda, Yuji; Uozumi, Jiro; Masaki, Zenjiro

    2005-07-01

    We herein report a technique which facilitates a retroperitoneal approach to the kidney in cases of a highly deformed thorax due to kyphoscoliosis. The operation consists of a lumbar oblique incision with removal of the 11th rib, combined with the additional removal of the 12th and 10th ribs. Resection of the upper two ribs was performed subperiosteally, leaving the periosteum of the deep side untouched. However, the deep-side periosteum of the 12th rib was incised caudal to the pleural margin in order to facilitate exposure of the diaphragm. The retroperitoneal space was entered through the tip of the 11th rib bed. The diaphragm was incised dorso-medially at a level 1 cm caudal to the lower margin of the pleura, to the extent necessary to enable the pleura, together with the cranial diaphragm, to be manoeuvred in an upward direction. Two cases with renal tuberculosis associated with high-grade kyphosis and one case with staghorn calculi accompanied by lordosis were operated on utilizing this technique. In the former two cases, the thoracic cage was in direct contact with the iliac bone and there was practically no space between the rib border and the iliac crest. This was also true of the third case, but the grade of deformity was not as extensive as in the former two cases. Removal of the 10th, 11th and 12th ribs could be achieved without injuring the pleura, and a satisfactorily large operating field could thus be developed, which enabled a simple nephrectomy to be performed without difficulty. The characteristic feature of the described approach is that resection of the 10th and 11th ribs simply facilitates manoeuvrability of the wound margin, without going through the rib bed. The technique could be advantageous in selected cases where there is a highly deformed thorax. PMID:16083038

  5. Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing.

    ERIC Educational Resources Information Center

    Popp, Sharon E. Osborn; Ryan, Joseph M.; Thompson, Marilyn S.; Behrens, John T.

    The purposes of this study were to investigate the role of benchmark writing samples in direct assessment of writing and to examine the consequences of differential benchmark selection with a common writing rubric. The influences of discourse and grade level were also examined within the context of differential benchmark selection. Raters scored…

  6. The Nature and Predictive Validity of a Benchmark Assessment Program in an American Indian School District

    ERIC Educational Resources Information Center

    Payne, Beverly J. R.

    2013-01-01

    This mixed methods study explored the nature of a benchmark assessment program and how well the benchmark assessments predicted End-of-Grade (EOG) and End-of-Course (EOC) test scores in an American Indian school district. Five major themes were identified and used to develop a Dimensions of Benchmark Assessment Program Effectiveness model:…

  7. Is the 10th and 11th intercostal space a safe approach for percutaneous nephrostomy and nephrolithotomy?

    PubMed

    Muzrakchi, Ahmed Al; Szmigielski, W; Omar, Ahmed J S; Younes, Nagy M

    2004-01-01

    The aim of this study was to determine the rate of complications in percutaneous nephrostomy (PCN) and nephrolithotomy (PCNL) performed through the 11th and 10th intercostal spaces using our monitoring technique and to discuss the safety of the procedure. Out of 398 PCNs and PCNLs carried out during a 3-year period, 56 patients had 57 such procedures performed using an intercostal approach. The 11th intercostal route was used in 42 and the 10th in 15 cases. One patient had two separate nephrostomies performed through the 10th and 11th intercostal spaces. The technique utilizes bi-planar fluoroscopy with a combination of a conventional angiographic machine to provide anterior-posterior fluoroscopy and a C-arm mobile fluoroscopy machine to give a lateral view, displayed on two separate monitors. None of the patients had clinically significant thoracic or abdominal complications. Two patients had minor chest complications. Only one developed changes (plate atelectasis, elevation of the hemi-diaphragm) directly related to the nephrostomy (2%). The second patient had bilateral plate atelectasis and unilateral congestive lung changes after PCNL. These changes were not necessarily related to the procedure but rather to general anesthesia during nephrolithotomy. The authors consider PCN or PCNL through the intercostal approach a safe procedure with a negligible complication rate, provided that it is performed under bi-planar fluoroscopy, which allows determination of the skin entry point just below the level of pleural reflection and provides three-dimensional monitoring of advancement of the puncturing needle toward the target entry point. PMID:15383855

  8. Is the 10th and 11th Intercostal Space a Safe Approach for Percutaneous Nephrostomy and Nephrolithotomy?

    SciTech Connect

    Muzrakchi, Ahmed Al; Szmigielski, W.; Omar, Ahmed J. S.; Younes, Nagy M.

    2004-09-15

    The aim of this study was to determine the rate of complications in percutaneous nephrostomy (PCN) and nephrolithotomy (PCNL) performed through the 11th and 10th intercostal spaces using our monitoring technique and to discuss the safety of the procedure. Out of 398 PCNs and PCNLs carried out during a 3-year period, 56 patients had 57 such procedures performed using an intercostal approach. The 11th intercostal route was used in 42 and the 10th in 15 cases. One patient had two separate nephrostomies performed through the 10th and 11th intercostal spaces. The technique utilizes bi-planar fluoroscopy with a combination of a conventional angiographic machine to provide anterior-posterior fluoroscopy and a C-arm mobile fluoroscopy machine to give a lateral view, displayed on two separate monitors. None of the patients had clinically significant thoracic or abdominal complications. Two patients had minor chest complications. Only one developed changes (plate atelectasis, elevation of the hemi-diaphragm) directly related to the nephrostomy (2%). The second patient had bilateral plate atelectasis and unilateral congestive lung changes after PCNL. These changes were not necessarily related to the procedure but rather to general anesthesia during nephrolithotomy. The authors consider PCN or PCNL through the intercostal approach a safe procedure with a negligible complication rate, provided that it is performed under bi-planar fluoroscopy, which allows determination of the skin entry point just below the level of pleural reflection and provides three-dimensional monitoring of advancement of the puncturing needle toward the target entry point.

  9. Fortified Settlements of the 9th and 10th Centuries ad in Central Europe: Structure, Function and Symbolism

    PubMed Central

    Herold, Hajnalka

    2012-01-01

    The structure, function(s) and symbolism of early medieval (9th–10th centuries AD) fortified settlements from central Europe, in particular today's Austria, Hungary, Czech Republic and Slovakia, are examined in this paper. It offers an overview of the current state of research together with new insights based on analysis of the site of Gars-Thunau in Lower Austria. Special emphasis is given to the position of the fortified sites in the landscape, to the elements of the built environment and their spatial organisation, as well as to graves within the fortified area. The region under study was situated on the SE border of the Carolingian (and later the Ottonian) Empire, with some of the discussed sites lying in the territory of the 'Great Moravian Empire' in the 9th and 10th centuries. These sites can therefore provide important comparative data for researchers working in other parts of the Carolingian Empire and neighbouring regions. PMID:23564981

  10. Benchmarking Tool Kit.

    ERIC Educational Resources Information Center

    Canadian Health Libraries Association.

    Nine Canadian health libraries participated in a pilot test of the Benchmarking Tool Kit between January and April, 1998. Although the Tool Kit was designed specifically for health libraries, the content and approach are useful to other types of libraries as well. Used to its full potential, benchmarking can provide a common measuring stick to…

  11. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  12. Predicting Long-Term College Success through Degree Completion Using ACT[R] Composite Score, ACT Benchmarks, and High School Grade Point Average. ACT Research Report Series, 2012 (5)

    ERIC Educational Resources Information Center

    Radunzel, Justine; Noble, Julie

    2012-01-01

    This study compared the effectiveness of ACT[R] Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours earned), degree completion, and cumulative grade point average (GPA) at 150% of normal time to degree…

  13. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  14. Stability and Change in Interests: A Longitudinal Study of Adolescents from Grades 8 through 12

    ERIC Educational Resources Information Center

    Tracey, Terence J. G.; Robbins, Steven B.; Hofsess, Christy D.

    2005-01-01

    The patterns of RIASEC interests and academic skills were assessed longitudinally from a large-scale national database at three time points: 8th grade, 10th grade, and 12th grade. Validation and cross-validation samples of 1000 males and 1000 females in each set were used to test the pattern of these scores over time relative to mean changes,…

  15. 10th European VLBI Network Symposium and EVN Users Meeting: VLBI and the new generation of radio arrays

    NASA Astrophysics Data System (ADS)

    Jodrell Bank Centre for Astrophysics and the University of Manchester, on behalf of the European VLBI Consortium, will host the 10th European VLBI Network Symposium and the EVN Users Meeting from September 20th - 24th, 2010, entitled "VLBI and the new generation of radio arrays". The Symposium will be held at the University of Manchester, UK. At this conference the latest scientific results and technical developments from VLBI and e-VLBI will be reported. The timing of this meeting coincides with the development of, and first results from, a number of new and upgraded radio facilities around the globe, such as e-MERLIN, LOFAR, EVLA, ALMA, and the SKA pathfinders ASKAP and MeerKAT. This meeting will incorporate some of the first results from these new instruments, in addition to the unique scientific and technical contribution of VLBI in this new era of radio astronomy.

  16. The Interpretations and Applications of Boethius's Introduction to the Arithmetic II,1 at the End of the 10th Century

    NASA Astrophysics Data System (ADS)

    Otisk, Marek

    This paper deals with comments and glosses on the first chapter of the second book of Boethius's Introduction to Arithmetic from the last quarter of the 10th century. Those texts were written by Gerbert of Aurillac (Scholium ad Boethii Arithmeticam Institutionem l. II, c. 1), Abbo of Fleury (commentary on the Calculus by Victorius of Aquitaine, the so-called De numero, mensura et pondere), Notker of Liège (De superparticularibus) and by an anonymous author (De arithmetica Boetii). The main aim of this paper is to show that Boethius's statements in this work about converting numerical sequences to equality can be interpreted in at least two different ways. The paper also discusses the application of this topic in the other liberal arts (such as astronomy, music, grammar, etc.) and in playing the game called rithmomachia, the medieval philosophers' game.

  17. Report on the 10th European Fusion Physics Workshop (Vaals, The Netherlands, 9-11 December 2002)

    NASA Astrophysics Data System (ADS)

    Campbell, D. J.; Borba, D.; Bucalossi, J.; Moreau, D.; Sauter, O.; Stober, J.; Vayakis, G.

    2003-06-01

    The 10th European Fusion Physics Workshop took place in December 2002 at Vaals in The Netherlands, hosted by the Trilateral Euregio Cluster (TEC: Associations EURATOM-ERM/KMS, FZJ and FOM), and sponsored by the European Commission and the Foundation SOFT. Within an overall theme of 'Operational limits in toroidal devices, with particular reference to steady-state operation', four topics of importance to the future development of magnetically confined fusion were discussed in detail. In addition, a review of the JET scientific and technical programme under EFDA and an assessment of ITER's measurement requirements and diagnostic development programme were presented. The main issues discussed and the areas identified as requiring further study are summarized here.

  18. Optical and microphysical properties of mineral dust and biomass burning aerosol observed over Warsaw on 10th July 2013

    NASA Astrophysics Data System (ADS)

    Janicka, Lucja; Stachlewska, Iwona; Veselovskii, Igor; Baars, Holger

    2016-04-01

    Biomass burning aerosol originating from Canadian forest fires was widely observed over Europe in July 2013. Favorable weather conditions caused a long-lasting westerly flow of smoke from Canada to Western and Central Europe. During this period, the PollyXT lidar of the University of Warsaw took wavelength-dependent measurements in Warsaw. On July 10th, a short event of simultaneous advection of Canadian smoke and Saharan dust was observed at different altitudes over Warsaw. The different origins of the two air masses were indicated by backward trajectories from the HYSPLIT model. Lidar measurements performed at several wavelengths (1064, 532, 355 nm), also using Raman and depolarization channels in the VIS and UV, allowed the physical differences between these two types of aerosol to be distinguished. The optical properties served as input for the retrieval of microphysical properties. Comparisons of the microphysical and optical properties of the observed biomass burning aerosol and mineral dust will be presented.

  19. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B. E.; Opresko, D. M.; Suter, G. W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, NOAEL-based toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk
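
    A minimal sketch of the tier-1 screening logic described above: measured media concentrations are compared to NOAEL-based benchmarks, and any contaminant that exceeds its benchmark is retained as a COPC. The benchmark and concentration values below are hypothetical placeholders, not numbers from the report.

```python
# Tier-1 ecological screening sketch. All values are hypothetical placeholders,
# not the NOAEL-based benchmarks published in the report.
noael_benchmarks_mg_per_l = {"cadmium": 0.005, "lead": 0.015, "zinc": 0.12}
measured_mg_per_l = {"cadmium": 0.002, "lead": 0.040, "zinc": 0.10}

def screen_copcs(measured, benchmarks):
    """Return contaminants of potential concern: measured concentration > benchmark."""
    return sorted(
        name for name, conc in measured.items()
        if name in benchmarks and conc > benchmarks[name]
    )

print(screen_copcs(measured_mg_per_l, noael_benchmarks_mg_per_l))  # ['lead']
```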

  20. KRITZ-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The KRITZ-2 experiment has been adopted by the OECD/NEA Task Force on Reactor-Based Plutonium Disposition for use as a benchmark exercise. The KRITZ-2 experiment consists of three different core configurations (one with near-weapons-grade MOX) with critical conditions at 20°C and 245°C. The KRITZ-2 experiment has been calculated with the MCU-REA code, which is a continuous-energy Monte Carlo code system developed at the Russian Research Center--Kurchatov Institute and used extensively in the Fissile Materials Disposition Program. The calculated results for k{sub eff} and fission rate distributions are compared with the experimental data and with the results of other codes. The results are in good agreement with the experimental values.
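
    As a rough illustration of how such code-to-experiment comparisons are usually summarized, the sketch below converts a calculated and a measured k-effective into a reactivity difference in pcm. The input values are hypothetical and are not KRITZ-2 results.

```python
def reactivity_difference_pcm(k_calc: float, k_exp: float) -> float:
    """Reactivity difference (1/k_exp - 1/k_calc) expressed in pcm."""
    return 1.0e5 * (k_calc - k_exp) / (k_calc * k_exp)

# Hypothetical values for illustration only (not KRITZ-2 data).
print(f"{reactivity_difference_pcm(k_calc=1.00123, k_exp=1.00000):+.0f} pcm")
```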

  1. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (Jezebel plutonium critical assembly), and its k-effective values have been compared with those of the KENO and MCNP codes.

  2. Benchmarking for strategic action.

    PubMed

    Jennings, K; Westfall, F

    1992-01-01

    By focusing on three key elements--customer expectations, competitor strengths and vulnerabilities, and organizational competencies--a company's benchmarking effort can be designed to drive the strategic planning process.

  3. Benchmark problems and solutions

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.

    1995-01-01

    The scientific committee, after careful consideration, adopted six categories of benchmark problems for the workshop. These problems do not cover all the important computational issues relevant to Computational Aeroacoustics (CAA). The deciding factor in limiting the number of categories to six was the amount of effort needed to solve these problems. For reference purposes, the benchmark problems are provided here. They are followed by the exact or approximate analytical solutions. At present, an exact solution for the Category 6 problem is not available.

  4. Determining SAT® Benchmarks for College Readiness. Research Notes. RN-30

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.

    2007-01-01

    The purpose of this research study was to determine benchmark scores on the SAT that predict a 65 percent probability or higher of getting a first-year college grade point average of either 2.7 or higher or 2.0 or higher, to use these benchmarks to describe the level of college readiness in the nation and in certain demographic subgroups, and to…
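
    The benchmark-setting approach described here (the lowest score at which the predicted probability of a first-year GPA of 2.7 or higher reaches 65 percent) can be sketched with a logistic regression. The data, model, and resulting cutoff below are synthetic and purely illustrative; they are not the College Board's analysis.

```python
# Illustrative benchmark determination via logistic regression on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
sat = rng.integers(400, 1601, size=5000)                  # synthetic composite scores
p_true = 1.0 / (1.0 + np.exp(-(sat - 1000) / 150.0))      # synthetic success probability
success = (rng.random(5000) < p_true).astype(int)         # indicator: FYGPA >= 2.7

model = LogisticRegression().fit(sat.reshape(-1, 1), success)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# Lowest score whose predicted probability of success is at least 0.65.
benchmark = (np.log(0.65 / 0.35) - b0) / b1
print(f"Illustrative benchmark score: {benchmark:.0f}")
```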

  5. The KMAT: Benchmarking Knowledge Management.

    ERIC Educational Resources Information Center

    de Jager, Martha

    Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…

  6. Differential Effects on Student Demographic Groups of Using ACT® College Readiness Assessment Composite Score, Act Benchmarks, and High School Grade Point Average for Predicting Long-Term College Success through Degree Completion. ACT Research Report Series, 2013 (5)

    ERIC Educational Resources Information Center

    Radunzel, Justine; Noble, Julie

    2013-01-01

    In this study, we evaluated the differential effects on racial/ethnic, family income, and gender groups of using ACT® College Readiness Assessment Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours…

  7. XAFS study of copper and silver nanoparticles in glazes of medieval middle-east lustreware (10th-13th century)

    NASA Astrophysics Data System (ADS)

    Padovani, S.; Puzzovio, D.; Sada, C.; Mazzoldi, P.; Borgia, I.; Sgamellotti, A.; Brunetti, B. G.; Cartechini, L.; D'Acapito, F.; Maurizio, C.; Shokoui, F.; Oliaiy, P.; Rahighi, J.; Lamehi-Rachti, M.; Pantos, E.

    2006-06-01

    It has recently been shown that the lustre decoration of medieval and Renaissance pottery consists of silver and copper nanoparticles dispersed in the glassy matrix of the ceramic glaze. Here the findings of an X-ray absorption fine structure (XAFS) study on lustred glazes of shards belonging to 10th and 13th century pottery from the National Museum of Iran are reported. Absorption spectra in the visible range have also been measured in order to investigate the relations between colour and glaze composition. Gold colour is mainly due to Ag nanoparticles, though Ag+, Cu+ and Cu2+ ions can also be dispersed within the glassy matrix, in different ratios. Red colour is mainly due to Cu nanoparticles, although some Ag nanoparticles, Ag+ and Cu+ ions can be present. The formation of metallic Cu and the absence of Cu2+ indicate a higher degree of copper reduction in red lustre. These findings are in substantial agreement with previous results on Italian Renaissance pottery. In spite of the large heterogeneity of cases, the presence of copper and silver ions in the glaze confirms that lustre formation is mediated by a copper- and silver-alkali ion exchange, followed by nucleation and growth of metal nanoparticles.

  8. Report on the 10th anniversary of international drug discovery science and technology conference, 8 - 10 november 2012, nanjing, china.

    PubMed

    Everett, Jeremy R

    2013-03-01

    The 10th Anniversary International Drug Discovery Science and Technology (IDDST) Conference was held in Nanjing, China, from 8 to 10 November 2012. The conference ran in parallel with the 2nd Annual Symposium of Drug Delivery Systems. Over 400 delegates from both conferences came together for the Opening Ceremony and Keynote Addresses but otherwise pursued separate paths in the huge facilities of the Nanjing International Expo Centre. The IDDST was arranged into 19 separate Chapters covering drug discovery biology, target validation, chemistry, rational drug design, pharmacology and toxicology, drug screening technology, 'omics' technologies, analytical, automation and enabling technologies, informatics, stem cells and regenerative medicine, bioprocessing, generics, biosimilars and biologicals, and seven disease areas: cancer, CNS, respiratory and inflammation, autoimmune, emerging infectious, bone and orphan diseases. There were also two sessions of a 'Bench to Bedside to Business' programme and a Chinese Scientist programme. In each period of the IDDST conference, up to seven sessions were running in parallel. This Meeting Highlight samples just a fraction of the content of this large meeting. The talks included here are linked by their use of new approaches to drug discovery. Many other excellent talks could have been highlighted, and the author has necessarily had to be selective.

  9. The 10th anniversary of the Junior Members and Affiliates of the European Academy of Allergy and Clinical Immunology.

    PubMed

    Skevaki, Chrysanthi L; Maggina, Paraskevi; Santos, Alexandra F; Rodrigues-Alves, Rodrigo; Antolin-Amerigo, Dario; Borrego, Luis Miguel; Bretschneider, Isabell; Butiene, Indre; Couto, Mariana; Fassio, Filippo; Gardner, James; Xatzipsalti, Maria; Hovhannisyan, Lilit; Hox, Valerie; Makrinioti, Heidi; O Neil, Serena E; Pala, Gianni; Rudenko, Michael; Santucci, Annalisa; Seys, Sven; Sokolowska, Milena; Whitaker, Paul; Heffler, Enrico

    2011-12-01

    This year marks the 10th anniversary of the European Academy of Allergy and Clinical Immunology (EAACI) Junior Members and Affiliates (JMAs). The aim of this review is to highlight the work and activities of EAACI JMAs. To this end, we have summarized all the initiatives taken by JMAs during the last 10 yr. EAACI JMAs are currently a group of over 2380 clinicians and scientists under the age of 35 yr who support the continuous education of the Academy's younger members. For the past decade, JMAs have enjoyed a steadily increasing number of benefits, such as free online access to the Academy's journals, the possibility to apply for Fellowships and the Mentorship Program, travel grants to attend scientific meetings, and many more. In addition, JMAs have been involved in task forces, cooperation schemes with other scientific bodies, the organization of JMA-focused sessions during EAACI meetings, and participation in the activities of EAACI communication platforms. EAACI JMA activities represent an ideal example of recruiting, training, and educating young scientists in order for them to thrive as future experts in their field. This model may serve as a prototype for other scientific communities, several of which have already adopted similar policies.

  10. 10th of April 1987 seismic swarm: Correlation with geochemical parameters in Campi Flegrei Caldera (southern Italy)

    NASA Astrophysics Data System (ADS)

    Tedesco, Dario; Bottiglieri, Luisa; Pece, Raimondo

    1988-07-01

    A close relationship between geophysical activity (seismicity and ground deformation) and chemical changes in volcanic reservoirs has been proposed several times for active volcanic areas. In the Campi Flegrei caldera, especially during the bradyseismic crisis which occurred between 1982 and 1984, this correlation was never clearly demonstrated because of the high rate of occurrence of earthquakes and the small number of gas samples. After at least two years of both geochemical and geophysical quiescence, a swarm of 50 earthquakes, with a maximum magnitude of 2.0 and felt in the area of the Solfatara crater, occurred on the 10th of April 1987. At about the same time (before and after), several geochemical parameters showed important changes in concentration. These include water vapour, nitrogen, hydrogen, methane and, to a lesser extent, hydrogen sulfide in fumarolic gases from the Bocca Grande fumarole in the Solfatara crater, and the radon content in water wells situated far from the swarm epicentral area. In our opinion, the processes causing the geochemical changes are linked to aseismic creeping mechanisms, which lead to an easier rise of fluids in the fumaroles (H2O, N2, H2 and CH4) and in the superficial water table (Rn). The subsequent seismicity could be related to consequent local stress accumulation in the gas reservoir rocks induced by creeping.

  11. Nebraska Vocational Agribusiness Curriculum for City Schools. Career Opportunities in Agribusiness. Basic Skill in Agribusiness. A Curriculum Guide. 10th Grade.

    ERIC Educational Resources Information Center

    Nebraska Univ., Lincoln. Dept. of Agricultural Education.

    Designed for use with high school sophomores, this agribusiness curriculum for city schools contains thirty-one units of instruction in the areas of career opportunities in agribusiness and vocational agribusiness skills. Among the units included are (1) Career Selection, (2) Parliamentary Procedure and Public Speaking, (3) Career Opportunities in…

  12. The Enlightenment Music Contract. 10th Grade Lesson. Schools of California Online Resources for Education (SCORE): Connecting California's Classrooms to the World.

    ERIC Educational Resources Information Center

    Kelly, Freda

    The "philosophes" of the Enlightenment Period were a group of free (different) thinkers who offered commentary on societal issues. Often, they were like one of today's social commentators suggesting reforms for the political system. Since the United States during the era of the Revolutionary War was seeking reform of what they considered English…

  13. The Effects of Game-Based Learning and Anticipation of a Test on the Learning Outcomes of 10th Grade Geology Students

    ERIC Educational Resources Information Center

    Chen, Chia-Li Debra; Yeh, Ting-Kuang; Chang, Chun-Yen

    2016-01-01

    This study examines whether a Role Play Game (RPG) with embedded geological contents and students' anticipation of an upcoming posttest significantly affect high school students' achievements in and attitudes toward geology. The participants of the study comprised 202 high school students, 103 males and 99 females. The students were…

  14. A Comparison Study of AVID and GEAR UP 10th-Grade Students in Two High Schools in the Rio Grande Valley of Texas

    ERIC Educational Resources Information Center

    Watt, Karen M.; Huerta, Jeffery; Lozano, Aliber

    2007-01-01

    This study examines 4 groups of high school students enrolled in 2 college preparatory programs, AVID and GEAR UP. Differences in student educational aspirations, expectations and anticipations, knowledge of college entrance requirements, knowledge of financial aid, and academic achievement in mathematics were examined. Adelman's (1999)…

  15. Carpenter, tractors and microbes for the development of logical-mathematical thinking - the way 10th graders and pre-service teachers solve thinking challenges

    NASA Astrophysics Data System (ADS)

    Gazit, Avikam

    2012-12-01

    The objective of this case study was to investigate the ability of 10th graders and pre-service teachers to solve logical-mathematical thinking challenges. The challenges do not require mathematical knowledge beyond that of primary school but rather an informed use of the problem representation. The percentage of correct answers given by the 10th graders was higher than that of the pre-service teachers. Unlike the 10th graders, some of whom used various strategies for representing the problem, most of the pre-service teachers' answers were based on a technical algorithm, without using control processes. The obvious conclusion drawn from the findings supports and recommends expanding and enhancing the development of logical-mathematical thinking, both in specific lessons and as an integral part of other lessons in pre-service frameworks.

  16. Surveys and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy

    2012-01-01

    Surveys and benchmarks continue to grow in importance for community colleges in response to several factors. One is the press for accountability, that is, for colleges to report the outcomes of their programs and services to demonstrate their quality and prudent use of resources, primarily to external constituents and governing boards at the state…

  17. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
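
    A minimal example of the kind of kernel timing such an adaptation performs: a successive over-relaxation (SOR) sweep, one of the classic SciMark kernels, timed in pure Python. This is a generic sketch for illustration, not the actual benchmark code described in the record.

```python
# Generic SciMark-style kernel timing sketch (not the benchmark code itself):
# time SOR sweeps over a 2-D grid in pure Python.
import time

def sor_sweeps(n=100, sweeps=50, omega=1.25):
    g = [[0.0] * n for _ in range(n)]
    g[n // 2][n // 2] = 1.0                              # point source
    for _ in range(sweeps):
        for i in range(1, n - 1):
            row, up, down = g[i], g[i - 1], g[i + 1]
            for j in range(1, n - 1):
                gauss = 0.25 * (up[j] + down[j] + row[j - 1] + row[j + 1])
                row[j] += omega * (gauss - row[j])       # over-relaxed update
    return g

start = time.perf_counter()
sor_sweeps()
print(f"SOR sweeps took {time.perf_counter() - start:.3f} s")
```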

  18. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  19. Monte Carlo Benchmark

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computatiional performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.

  20. Comparison of five benchmarks

    SciTech Connect

    Huss, J. E.; Pennline, J. A.

    1987-02-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  1. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  2. What Do 2nd and 10th Graders Have in Common? Worms and Technology: Using Technology to Collaborate across Boundaries

    ERIC Educational Resources Information Center

    Culver, Patti; Culbert, Angie; McEntyre, Judy; Clifton, Patrick; Herring, Donna F.; Notar, Charles E.

    2009-01-01

    The article is about the collaboration between two classrooms that enabled a second grade class to participate in a high school biology class. Through the use of modern video conferencing equipment, Mrs. Culbert, with the help of the Dalton State College Educational Technology Training Center (ETTC), set up a live, two-way video and audio feed of…

  3. Grade Span.

    ERIC Educational Resources Information Center

    Renchler, Ron

    2000-01-01

    This issue reviews grade span, or grade configuration. Catherine Paglin and Jennifer Fager's "Grade Configuration: Who Goes Where?" provides an overview of issues and concerns related to grade spans and supplies profiles of eight Northwest schools with varying grade spans. David F. Wihry, Theodore Coladarci, and Curtis Meadow's "Grade Span and…

  4. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  5. Carpenter, Tractors and Microbes for the Development of Logical-Mathematical Thinking--The Way 10th Graders and Pre-Service Teachers Solve Thinking Challenges

    ERIC Educational Resources Information Center

    Gazit, Avikam

    2012-01-01

    The objective of this case study was to investigate the ability of 10th graders and pre-service teachers to solve logical-mathematical thinking challenges. The challenges do not require mathematical knowledge beyond that of primary school but rather an informed use of the problem representation. The percentage of correct answers given by the 10th…

  6. Advances in Classification Research. Volume 10. Proceedings of the ASIS SIG/CR Classification Research Workshop (10th, Washington, DC, November 1-5, 1999). ASIST Monograph Series.

    ERIC Educational Resources Information Center

    Albrechtsen, Hanne, Ed.; Mai, Jens-Erik, Ed.

    This volume is a compilation of the papers presented at the 10th ASIS (American Society for Information Science) workshop on classification research. Major themes include the social and cultural informatics of classification and coding systems, subject access and indexing theory, genre analysis and the agency of documents in the ordering of…

  7. Creating Cultures of Peace: Pedagogical Thought and Practice. Selected Papers from the 10th Triennial World Conference (September 10-15, 2001, Madrid, Spain)

    ERIC Educational Resources Information Center

    Benton, Jean E., Ed.; Swami, Piyush, Ed.

    2007-01-01

    The 10th Triennial World Conference of the World Council for Curriculum and Instruction (WCCI) was held September 10-15, 2001 in Madrid, Spain. The theme of the conference was "Cultures of Peace." Thirty-four papers and presentations are divided into nine sections. Part I, Tributes to the Founders of WCCI, includes: (1) Tribute to Alice Miel…

  8. Social Studies. Grade 10--European Culture Studies. 1975 Reprint.

    ERIC Educational Resources Information Center

    New York State Education Dept., Albany. Bureau of Secondary Curriculum Development.

    This 10th grade syllabus examines Western traditions historically. The topical organization of the material ranges from Europe Today--to illustrate the themes underlying Europe's cultural development--back to The Ancient Western World for an historical sequence through The Middle Ages, The Age of Transition, Modern Movements in Intellectual,…

  9. Validity of the International Classification of Diseases 10th revision code for hospitalisation with hyponatraemia in elderly patients

    PubMed Central

    Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objective To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design Population-based retrospective validation study. Setting Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499). Main outcome measures Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. However, these codes underestimate the true incidence of hyponatraemia
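
    The validation measures reported above follow directly from a 2x2 comparison of the diagnostic code against the serum sodium reference standard. The sketch below computes sensitivity, specificity, PPV, and NPV from hypothetical cell counts; the counts are not the study's data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 table comparing the ICD-10
# code (index test) with the serum-sodium reference standard.
# The cell counts below are hypothetical, not taken from the study.
def validation_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

for name, value in validation_metrics(tp=500, fp=20, fn=6000, tn=58000).items():
    print(f"{name}: {value:.1%}")
```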

  10. 21st Century Curriculum: Does Auto-Grading Writing Actually Work?

    ERIC Educational Resources Information Center

    T.H.E. Journal, 2013

    2013-01-01

    The West Virginia Department of Education's auto grading initiative dates back to 2004--a time when school districts were making their first forays into automation. The Charleston-based WVDE had instituted a statewide writing assessment in 1984 for students in fourth, seventh, and 10th grades and was looking to expand that program without…

  11. Radiography benchmark 2014

    SciTech Connect

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-31

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.
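
    For orientation, the primary (unscattered) component behind a single plate follows the Beer-Lambert law, I = I0 * exp(-mu * t); the scattered contribution that the benchmark focuses on comes on top of this. The attenuation coefficient used below is an illustrative round value for iron near 100 keV, not a number taken from the benchmark.

```python
# Narrow-beam (primary radiation only) transmission through a single plate.
# mu = 2.9 per cm is an illustrative round value for iron near 100 keV,
# not a coefficient taken from the benchmark study.
import math

def primary_transmission(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of primary photons transmitted through the plate."""
    return math.exp(-mu_per_cm * thickness_cm)

print(f"{primary_transmission(mu_per_cm=2.9, thickness_cm=1.0):.3%}")
```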

  12. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  13. Sequoia Messaging Rate Benchmark

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
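
    The rank layout in this description can be made concrete with a small helper that computes the total rank count and each core rank's neighbor ranks. This is a sketch of the layout arithmetic only, not part of the benchmark code.

```python
# Sketch of the rank-layout arithmetic described above (not the benchmark code).
def rank_layout(num_cores: int, num_nbors: int):
    total_ranks = num_cores + num_cores * num_nbors
    neighbors = {
        core: list(range(num_cores + core * num_nbors,
                         num_cores + (core + 1) * num_nbors))
        for core in range(num_cores)
    }
    return total_ranks, neighbors

total, nbrs = rank_layout(num_cores=8, num_nbors=4)
print(total)      # 40 ranks in total
print(nbrs[0])    # neighbor ranks of core rank 0: [8, 9, 10, 11]
```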

  14. Radiography benchmark 2014

    NASA Astrophysics Data System (ADS)

    Jaenisch, G.-R.; Deresch, A.; Bellon, C.; Schumm, A.; Lucet-Sanchez, F.; Guerin, P.

    2015-03-01

    The purpose of the 2014 WFNDEC RT benchmark study was to compare predictions of various models of radiographic techniques, in particular those that predict the contribution of scattered radiation. All calculations were carried out for homogeneous materials and a mono-energetic X-ray point source in the energy range between 100 keV and 10 MeV. The calculations were to include the best physics approach available considering electron binding effects. Secondary effects like X-ray fluorescence and bremsstrahlung production were to be taken into account if possible. The problem to be considered had two parts. Part I examined the spectrum and the spatial distribution of radiation behind a single iron plate. Part II considered two equally sized plates, made of iron and aluminum respectively, evaluating only the spatial distribution. Here we present the results of the above benchmark study, comparing them to MCNP as the assumed reference model. The possible origins of the observed deviations are discussed.

  15. MPI Multicore Linktest Benchmark

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  16. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.
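
    For orientation, the default "Laplace type problem" is the kind of sparse linear system sketched below: a 2-D five-point Laplacian assembled with SciPy and handed to a Krylov solver. The sketch uses SciPy's conjugate gradient as a stand-in for the hypre/BoomerAMG solver and is not the AMG2013 driver or its unstructured test problems.

```python
# Sketch of a Laplace-type test system. Conjugate gradient is used here as a
# stand-in solver; AMG2013 itself uses the hypre/BoomerAMG algebraic multigrid.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 64                                             # n x n interior grid points
I = sp.identity(n, format="csr")
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = sp.kron(I, T) + sp.kron(T, I)                  # 2-D five-point Laplacian
b = np.ones(n * n)

x, info = cg(A, b)                                 # info == 0 means convergence
print("converged" if info == 0 else f"cg returned info = {info}")
print("residual norm:", np.linalg.norm(b - A @ x))
```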

  17. Benchmarking HIPAA compliance.

    PubMed

    Wagner, James R; Thoman, Deborah J; Anumalasetty, Karthikeyan; Hardre, Pat; Ross-Lazarov, Tsvetomir

    2002-01-01

    One of the nation's largest academic medical centers is benchmarking its operations using internally developed software to improve privacy/confidentiality of protected health information (PHI) and to enhance data security to comply with HIPAA regulations. It is also coordinating the development of a web-based interactive product that can help hospitals, physician practices, and managed care organizations measure their compliance with HIPAA regulations.

  18. Fifth Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for fifth grade students. Each content standard is explained and includes student learning expectations, fifth grade benchmarks, assessments, and…

  19. Fourth Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for fourth grade students. Each content standard is explained and includes student learning expectations, fourth grade benchmarks, assessments, and…

  20. Seventh Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for seventh grade students. Each content standard is explained and includes student learning expectations, seventh grade benchmarks, assessments, and…

  1. First Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for first grade students. Each content standard is explained and includes student learning expectations, first grade benchmarks, assessments, and…

  2. Sixth Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for sixth grade students. Each content standard is explained and includes student learning expectations, sixth grade benchmarks, assessments, and…

  3. Eighth Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for eighth grade students. Each content standard is explained and includes student learning expectations, eighth grade benchmarks, assessments, and…

  4. Second Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for second grade students. Each content standard is explained and includes student learning expectations, second grade benchmarks, assessments, and…

  5. Third Grade Level Science Sample Curriculum.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    This document presents a sample of the Arkansas science curriculum and identifies the content standards for physical science systems, life science systems, and Earth science/space science systems for third grade students. Each content standard is explained and includes student learning expectations, third grade benchmarks, assessments, and…

  6. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was on computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage…

  7. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institutional-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  8. Benchmark on outdoor scenes

    NASA Astrophysics Data System (ADS)

    Zhang, Hairong; Wang, Cheng; Chen, Yiping; Jia, Fukai; Li, Jonathan

    2016-03-01

    Depth super-resolution is becoming popular in computer vision, and most test data are based on indoor data sets with ground-truth measurements such as Middlebury. However, indoor data sets are mainly acquired with structured-light techniques under ideal conditions, which cannot represent the real world under natural light. Unlike indoor scenes, the uncontrolled outdoor environment is much more complicated and is rich in both visual and depth texture. For that reason, we develop a more challenging and meaningful outdoor benchmark for depth super-resolution using a state-of-the-art active laser scanning system.

  9. Benchmarking emerging logic devices

    NASA Astrophysics Data System (ADS)

    Nikonov, Dmitri

    2014-03-01

    As complementary metal-oxide-semiconductor field-effect transistors (CMOS FET) are being scaled to ever smaller sizes by the semiconductor industry, the demand is growing for emerging logic devices to supplement CMOS in various special functions. Research directions and concepts for such devices are reviewed; they include tunneling, graphene-based, and spintronic devices, among others. The methodology to estimate the future performance of emerging (beyond-CMOS) devices and of simple logic circuits based on them is explained. Results of benchmarking are used to identify the more promising concepts and to map pathways for improvement of beyond-CMOS computing.

  10. Algebraic Multigrid Benchmark

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large library of linear solvers being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace-type problem on an unstructured domain with various jumps and an anisotropy in one part.
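
    AMG2013 itself is a C/MPI code built on hypre, so the following Python sketch is only an illustration of the kind of problem its driver builds: it assembles a Laplace-type (2-D Poisson) system and solves it with algebraic multigrid using the pyamg library. The use of pyamg, the grid size, and the right-hand side are assumptions made for the sketch, not part of the benchmark.

    ```python
    # Illustrative sketch only: AMG2013 is a C/MPI benchmark built on hypre's
    # BoomerAMG. Here the pyamg library stands in to show the same idea of
    # solving a Laplace-type system with algebraic multigrid.
    import numpy as np
    import pyamg

    A = pyamg.gallery.poisson((200, 200), format='csr')  # 2-D 5-point Laplacian
    b = np.random.rand(A.shape[0])                        # arbitrary right-hand side

    ml = pyamg.ruge_stuben_solver(A)   # build a classical (Ruge-Stuben) AMG hierarchy
    residuals = []
    x = ml.solve(b, tol=1e-8, residuals=residuals)

    print(ml)                          # summary of the multigrid hierarchy
    factor = (residuals[-1] / residuals[0]) ** (1.0 / max(len(residuals) - 1, 1))
    print(f"average residual reduction per cycle: {factor:.3f}")
    ```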

  11. 2001 benchmarking guide.

    PubMed

    Hoppszallern, S

    2001-01-01

    Our fifth annual guide to benchmarking under managed care presents data that is a study in market dynamics and adaptation. New this year are financial indicators on HMOs exiting the market and those remaining. Hospital financial ratios and details on department performance are included. The physician group practice numbers show why physicians are scrutinizing capitated payments. Overall, hospitals in markets with high managed care penetration are more successful in managing labor costs and show productivity gains in imaging services, physical therapy and materials management.

  12. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
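
    The guide's specific metric set is not reproduced in this abstract, so the sketch below uses Power Usage Effectiveness (PUE), a widely used whole-facility data-center metric, simply to illustrate the style of calculation such a benchmarking spreadsheet performs. The energy figures and the choice of PUE are assumptions, not values taken from the guide.

    ```python
    # Hypothetical annual energy totals; PUE is shown only as an example of a
    # whole-facility data-center metric of the kind the guide describes.
    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """Power Usage Effectiveness = total facility energy / IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    annual_total_kwh = 4_200_000   # utility meter, whole facility (assumed)
    annual_it_kwh = 2_400_000      # UPS/PDU output to IT loads (assumed)

    print(f"PUE = {pue(annual_total_kwh, annual_it_kwh):.2f}")  # lower is better; 1.0 means no overhead
    ```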

  13. Report of the 10th Asia-Pacific Federation of Societies for Surgery of the Hand Congress (Organising Chair and Scientific Chair).

    PubMed

    A, Roohi Sharifah; Abdullah, Shalimar

    2016-10-01

    A report on the 10th Asia-Pacific Federation of Societies for the Surgery of the Hand and 6th Asia-Pacific Federation of Societies for Hand Therapists congresses is submitted, detailing the numbers of attendees participating, papers presented, and support received, as well as some of the challenges faced and how best to overcome them, from the local conference chair and scientific chair point of view. PMID:27595972

  15. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer-aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB, whereas the Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield for a given photovoltaic system at a geographical position over a specific period, can then be calculated.
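
    The tool described above couples MATLAB with ASAP ray tracing; the short Python sketch below is not the authors' code, only a hedged illustration of the energy-yield figure of merit: integrate estimated module output over a time series of direct-normal irradiance. The synthetic irradiance profile, aperture area, and efficiencies are all assumptions.

    ```python
    # Hedged sketch of the energy-yield figure of merit; all inputs are
    # invented (the paper's tool uses MATLAB + ASAP, not Python).
    import numpy as np

    hours = 8760
    dni = np.clip(900 * np.sin(np.linspace(0, np.pi, hours))
                  + np.random.normal(0, 80, hours), 0, None)   # W/m^2, synthetic profile
    aperture_m2 = 1.5        # concentrator aperture (assumed)
    optical_eff = 0.82       # optics and tracking losses (assumed)
    cell_eff = 0.38          # multi-junction cell efficiency (assumed)

    power_w = dni * aperture_m2 * optical_eff * cell_eff
    energy_kwh = power_w.sum() / 1000.0   # hourly steps: W over 1 h = Wh, then kWh
    print(f"estimated annual energy yield ~ {energy_kwh:.0f} kWh")
    ```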

  16. Grade Retention: What are the Costs and Benefits?

    ERIC Educational Resources Information Center

    Eide, Eric R.; Goldhaber, Dan D.

    2005-01-01

    Grade retention is a common practice used when students fail to meet required benchmarks. Therefore, it is important that we understand the relative benefits and costs associated with students repeating a grade. In this article we analyze the costs and benefits of grade retention. In our examination of retention, we obtain our calculations of the…

  17. A Benchmarking Model. Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    The Skills Standards Projects have provided further emphasis on the need for benchmarking U.S. vocational-technical education (VTE) against international competition. Benchmarking is an ongoing systematic process designed to identify, as quantitatively as possible, those practices that produce world class performance. Metrics are those things that…

  18. Self-benchmarking Guide for Cleanrooms: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Sartor, Dale; Tschudi, William

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in cleanrooms. This guide is primarily intended for personnel who have responsibility for managing energy use in existing cleanroom facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, cleanroom planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including cleanroom designers and energy managers.

  19. Self-benchmarking Guide for Laboratory Buildings: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in laboratory buildings. This guide is primarily intended for personnel who have responsibility for managing energy use in existing laboratory facilities - including facilities managers, energy managers, and their engineering consultants. Additionally, laboratory planners and designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior research supported by the national Laboratories for the 21st Century (Labs21) program, supported by the U.S. Department of Energy and the U.S. Environmental Protection Agency. Much of the benchmarking data are drawn from the Labs21 benchmarking database and technical guides. Additional benchmark data were obtained from engineering experts including laboratory designers and energy managers.

  20. SPEEDES benchmarking analysis

    NASA Astrophysics Data System (ADS)

    Capella, Sebastian J.; Steinman, Jeffrey S.; McGraw, Robert M.

    2002-07-01

    SPEEDES, the Synchronous Parallel Environment for Emulation and Discrete Event Simulation, is a software framework that supports simulation applications across parallel and distributed architectures. SPEEDES is used as a simulation engine in support of numerous defense projects including the Joint Simulation System (JSIMS), the Joint Modeling And Simulation System (JMASS), the High Performance Computing and Modernization Program's (HPCMP) development of a High Performance Computing (HPC) Run-time Infrastructure, and the Defense Modeling and Simulation Office's (DMSO) development of a Human Behavioral Representation (HBR) Testbed. This work documents some of the performance metrics obtained from benchmarking the SPEEDES Simulation Framework with respect to the functionality found in the summer of 2001. Specifically this papers the scalability of SPEEDES with respect to its time management algorithms and simulation object event queues with respect to the number of objects simulated and events processed.

  1. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
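
    Two of the performance metrics named above, the centered root mean square error against the true homogeneous series and the error in the linear trend estimate, are simple to state in code. The sketch below uses synthetic stand-in series, not the HOME benchmark data, and is only meant to make the metrics concrete.

    ```python
    # Sketch of two HOME-style metrics: centered RMSE and linear-trend error.
    # The series here are synthetic stand-ins, not the benchmark dataset.
    import numpy as np

    rng = np.random.default_rng(0)
    months = np.arange(600)                                    # 50 years, monthly
    truth = 0.002 * months + rng.normal(0, 0.5, months.size)   # true homogeneous series
    homogenized = truth + rng.normal(0, 0.2, months.size)      # an algorithm's output

    def centered_rmse(x, ref):
        """RMSE after removing each series' own mean (anomaly comparison)."""
        return np.sqrt(np.mean(((x - x.mean()) - (ref - ref.mean())) ** 2))

    def trend_error(x, ref):
        """Difference between least-squares linear trends (per time step)."""
        return np.polyfit(months, x, 1)[0] - np.polyfit(months, ref, 1)[0]

    print("centered RMSE:", centered_rmse(homogenized, truth))
    print("trend error  :", trend_error(homogenized, truth))
    ```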

  3. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  4. Benchmark for Strategic Performance Improvement.

    ERIC Educational Resources Information Center

    Gohlke, Annette

    1997-01-01

    Explains benchmarking, a total quality management tool used to measure and compare the work processes in a library with those in other libraries to increase library performance. Topics include the main groups of upper management, clients, and staff; critical success factors for each group; and benefits of benchmarking. (Author/LRW)

  5. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  6. FireHose Streaming Benchmarks

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.

  7. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  8. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
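
    The generator/analytic split described above can be pictured with a minimal stand-alone sketch. The datum format and the anomaly rule below are invented for illustration; the real FireHose benchmarks define their own stream formats, rates, and calculations.

    ```python
    # Minimal illustration of the FireHose generator/analytic structure.
    # Format and anomaly rule are invented; they are not the suite's definitions.
    import random

    def generator(n, anomaly_rate=0.001):
        """Yield (key, value) datums; a small fraction carry injected anomalies."""
        for i in range(n):
            if random.random() < anomaly_rate:
                yield (i, 10_000 + random.random())    # anomalous value
            else:
                yield (i, random.gauss(0.0, 1.0))      # normal traffic

    def analytic(stream, threshold=100.0):
        """Read the stream once and report keys whose value looks anomalous."""
        return [key for key, value in stream if abs(value) > threshold]

    flagged = analytic(generator(1_000_000))
    print(f"flagged {len(flagged)} anomalous datums")
    ```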

  9. Benchmarking in water project analysis

    NASA Astrophysics Data System (ADS)

    Griffin, Ronald C.

    2008-11-01

    The with/without principle of cost-benefit analysis is examined for the possible bias that it brings to water resource planning. Theory and examples for this question are established. Because benchmarking against the demonstrably low without-project hurdle can detract from economic welfare and can fail to promote efficient policy, improvement opportunities are investigated. In lieu of the traditional, without-project benchmark, a second-best-based "difference-making benchmark" is proposed. The project authorizations and modified review processes instituted by the U.S. Water Resources Development Act of 2007 may provide for renewed interest in these findings.

  10. Phase-covariant quantum benchmarks

    NASA Astrophysics Data System (ADS)

    Calsamiglia, J.; Aspachs, M.; Muñoz-Tapia, R.; Bagan, E.

    2009-05-01

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  11. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they will solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
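
    The difference the abstract debates is easy to see numerically: one slow outlier moves the arithmetic mean far more than the geometric mean. The per-query times below are made up solely to show the contrast.

    ```python
    # Made-up single-stream query times (seconds); one outlier shows how the
    # two candidate summary metrics diverge.
    import math

    times = [2.0, 3.0, 2.5, 3.5, 40.0]

    arithmetic = sum(times) / len(times)
    geometric = math.exp(sum(math.log(t) for t in times) / len(times))

    print(f"arithmetic mean: {arithmetic:.2f} s")   # dominated by the 40 s query
    print(f"geometric mean : {geometric:.2f} s")    # dampens the outlier
    ```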

  12. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane-parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels of the coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate, benchmark-quality results (4 to 5 places of accuracy).

  13. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks, while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and…
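
    Whole-building benchmarking of the kind described usually reduces to an energy use intensity (EUI) and a percentile rank within a peer group. The sketch below uses invented numbers and a toy peer list; it does not reproduce Cal-Arch's data or statistical method.

    ```python
    # Invented building data and peer group; illustrates the EUI-plus-percentile
    # comparison that whole-building benchmarking tools are built around.
    import numpy as np

    annual_kwh = 1_850_000
    floor_area_ft2 = 120_000
    eui = annual_kwh / floor_area_ft2              # kWh per square foot per year

    peer_euis = np.array([9.8, 11.2, 12.5, 13.1, 14.0, 15.6, 16.9, 18.3, 20.1, 24.7])
    pct_lower = (peer_euis < eui).mean() * 100     # share of peers using less energy

    print(f"EUI = {eui:.1f} kWh/ft2-yr")
    print(f"{pct_lower:.0f}% of peer buildings have a lower EUI")
    ```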

  14. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with the standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  15. Change and Continuity in Student Achievement from Grades 3 to 5: A Policy Dilemma

    ERIC Educational Resources Information Center

    McCaslin, Mary; Burross, Heidi Legg; Good, Thomas L.

    2005-01-01

    In this article we examine student performance on mandated tests in grades 3, 4, and 5 in one state. We focus on this interval, which we term "the fourth grade window," based on our hypothesis that students in grade four are particularly vulnerable to decrements in achievement. The national focus on the third grade as the critical benchmark in…

  16. Benchmarking without ground truth

    NASA Astrophysics Data System (ADS)

    Santini, Simone

    2006-01-01

    Many evaluation techniques for content-based image retrieval are based on the availability of a ground truth, that is, on a "correct" categorization of images so that, say, if the query image is of category A, only the returned images in category A will be considered as "hits." Based on such a ground truth, standard information retrieval measures such as precision and recall are given and used to evaluate and compare retrieval algorithms. Coherently, the assemblers of benchmarking data bases go to a certain length to have their images categorized. The assumption of the existence of a ground truth is, in many respects, naive. It is well known that the categorization of the images depends on the a priori (from the point of view of such categorization) subdivision of the semantic field in which the images are placed (a trivial observation: a plant subdivision for a botanist is very different from that for a layperson). Even within a given semantic field, however, categorization by human subjects is subject to uncertainty, and it makes little statistical sense to consider the categorization given by one person as the unassailable ground truth. In this paper I propose two evaluation techniques that apply to the case in which the ground truth is subject to uncertainty. In this case, obviously, measures such as precision and recall will also be subject to uncertainty. The paper will explore the relation between the uncertainty in the ground truth and that in the most commonly used evaluation measures, so that the measurements done on a given system can preserve statistical significance.
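
    One simple way to reflect categorization uncertainty, in the spirit of the abstract though not necessarily the paper's exact formulation, is to treat each returned image's relevance as the fraction of annotators who placed it in the query's category and to compute an expected precision. Ordinary precision is recovered when every fraction is 0 or 1.

    ```python
    # Sketch: precision when relevance judgments are uncertain. Each returned
    # image gets the fraction of annotators who agreed with the query category.
    # Illustrative only; not the paper's exact evaluation technique.
    def expected_precision(agreement_fractions):
        """Mean annotator agreement over the returned images."""
        return sum(agreement_fractions) / len(agreement_fractions)

    # 5 annotators judged each of 8 returned images against the query category.
    agreement = [5/5, 4/5, 5/5, 2/5, 1/5, 3/5, 5/5, 1/5]

    print(f"expected precision = {expected_precision(agreement):.2f}")
    # A hard majority-vote ground truth would instead give 5/8 = 0.625.
    ```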

  17. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.
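
    The communication half of such a suite typically includes a ping-pong test between two nodes. The original hypercube work predates MPI and Python, so the mpi4py sketch below is only an assumed, modern stand-in for the kind of message-transmission measurement the abstract describes.

    ```python
    # Ping-pong timing sketch with mpi4py (run with: mpiexec -n 2 python pingpong.py).
    # A modern stand-in for the internode communication tests described above.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    nbytes = 1 << 20                       # 1 MiB messages
    buf = np.zeros(nbytes, dtype=np.uint8)
    reps = 100

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=1)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=1)
    elapsed = MPI.Wtime() - t0

    if rank == 0:
        print(f"round trip ~ {elapsed / reps * 1e6:.1f} us, "
              f"bandwidth ~ {2 * nbytes * reps / elapsed / 1e6:.1f} MB/s")
    ```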

  18. Data-Intensive Benchmarking Suite

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
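
    The "basic graph searching" component mentioned above can be pictured with a generic breadth-first search over an adjacency list. This sketch is not the suite's code and ignores its input formats and Hadoop variants.

    ```python
    # Generic breadth-first search; an illustration of the basic graph-searching
    # kernel, not the Data-Intensive Benchmark Suite's implementation.
    from collections import deque

    def bfs_distances(adj, source):
        """Return hop distance from source to every reachable vertex."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
    print(bfs_distances(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}
    ```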

  19. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
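
    The merging step described above, combining a machine characterization with a program characterization to predict execution time, is essentially a weighted sum of per-operation times and operation counts. The operation classes and numbers below are invented to make that idea concrete; they are not taken from the cited work.

    ```python
    # Toy version of "merge machine and program characterizations": predicted
    # time = sum over operation classes of (count * per-operation time).
    # Operation classes and all numbers are hypothetical.
    machine_ns_per_op = {      # measured once per machine
        "fp_add": 2.0, "fp_mul": 3.0, "mem_load": 6.0, "branch": 1.5,
    }
    program_op_counts = {      # measured once per program
        "fp_add": 4.0e9, "fp_mul": 3.5e9, "mem_load": 6.2e9, "branch": 1.1e9,
    }

    predicted_s = sum(machine_ns_per_op[op] * program_op_counts[op]
                      for op in program_op_counts) / 1e9
    print(f"predicted execution time ~ {predicted_s:.1f} s")
    ```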

  20. Implementation of Benchmarking Transportation Logistics Practices and Future Benchmarking Organizations

    SciTech Connect

    Thrower, A.W.; Patric, J.; Keister, M.

    2008-07-01

    The purpose of the Office of Civilian Radioactive Waste Management's (OCRWM) Logistics Benchmarking Project is to identify established government and industry practices for the safe transportation of hazardous materials which can serve as a yardstick for design and operation of OCRWM's national transportation system for shipping spent nuclear fuel and high-level radioactive waste to the proposed repository at Yucca Mountain, Nevada. The project will present logistics and transportation practices and develop implementation recommendations for adaptation by the national transportation system. This paper will describe the process used to perform the initial benchmarking study, highlight interim findings, and explain how these findings are being implemented. It will also provide an overview of the next phase of benchmarking studies. The benchmarking effort will remain a high-priority activity throughout the planning and operational phases of the transportation system. The initial phase of the project focused on government transportation programs to identify those practices which are most clearly applicable to OCRWM. These Federal programs have decades of safe transportation experience, strive for excellence in operations, and implement effective stakeholder involvement, all of which parallel OCRWM's transportation mission and vision. The initial benchmarking project focused on four business processes that are critical to OCRWM's mission success, and can be incorporated into OCRWM planning and preparation in the near term. The processes examined were: transportation business model, contract management/out-sourcing, stakeholder relations, and contingency planning. More recently, OCRWM examined logistics operations of AREVA NC's Business Unit Logistics in France. The next phase of benchmarking will focus on integrated domestic and international commercial radioactive logistic operations. The prospective companies represent large scale shippers and have vast experience in

  1. HLA typing with monoclonal antibodies: evaluation of 356 HLA monoclonal antibodies including 181 studied during the 10th International Histocompatibility Workshop.

    PubMed

    Colombani, J; Lepage, V; Raffoux, C; Colombani, M

    1989-08-01

    During the 10th International Histocompatibility Workshop (10th WS), 181 HLA MoAbs were studied using lymphocytotoxicity micro-technique (LCT) and/or enzyme immuno-assay (EIA), and their capacity to serve as typing reagents was evaluated. 129 MoAbs were tested by both techniques. Results obtained with 92 class I and 86 class II polymorphic MoAbs (10th WS) were compared to published data concerning 180 class I and 176 class II polymorphic MoAbs, listed in an HLA-MoAbs Register maintained in our laboratory. The following conclusions can be proposed: 1/HLA-A, B typing by LCT with MoAbs is possible for about 14 specificities. Some specificities are clearly recognized (HLA-A3, B8, B13, Bw4, Bw6), others are recognized as cross-reacting groups (B7+27+w22+40), others are not currently recognized by any MoAb with restricted specificity (B5, B15). Several MoAbs confirmed the existence of shared epitopes between products from a single locus (A2-A28, A25-A32), or from A and B loci (A2-B17, Bw4-A9-A32). A single HLA-Cw MoAb has been described. 2/HLA class II typing by LCT with MoAbs is more difficult than class I typing. DR2, DR3, DR4, DR5 and DR7 as well as DRw52 and DRw53 are well defined; other DR specificities are poorly or not at all defined. Particular associations (DR1+DR4, DR3+DRw6, all DR except DR7) are recognized by several MoAbs. All DQw specificities are well recognized, including new specificities defined only by MoAbs: WA (DQw4), TA10 (DQw7), 2B3 (DQw6+w8+w9). Only two HLA-DP MoAbs have been described. 3/Satisfactory results, similar to those of LCT, were obtained with EIA using lymphoid cell lines as targets. 4/Human MoAbs (12 in the Register) are satisfactory typing reagents. They could represent in the future a significant contribution to HLA typing with MoAbs. PMID:2609328

  2. NHT-1 I/O Benchmarks

    NASA Technical Reports Server (NTRS)

    Carter, Russell; Ciotti, Bob; Fineberg, Sam; Nitzberg, Bill

    1992-01-01

    The NHT-1 benchmarks are a set of three scalable I/O benchmarks suitable for evaluating the I/O subsystems of high performance distributed memory computer systems. The benchmarks test application I/O, maximum sustained disk I/O, and maximum sustained network I/O. Sample codes are available which implement the benchmarks.
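
    A minimal sketch of the "maximum sustained disk I/O" idea is to write a large file in fixed-size blocks, force it to stable storage, and report MB/s. The block size, file size, and method below are assumptions, not the benchmark's actual ground rules.

    ```python
    # Sustained-write sketch in the spirit of the disk I/O test named above;
    # sizes and method are assumptions, not the NHT-1 rules.
    import os, time

    path = "io_benchmark.tmp"
    block = b"\0" * (4 << 20)          # 4 MiB per write
    nblocks = 256                      # ~1 GiB total

    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(nblocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())           # include the time to reach stable storage
    elapsed = time.perf_counter() - t0

    os.remove(path)
    mb = len(block) * nblocks / 1e6
    print(f"sustained write ~ {mb / elapsed:.0f} MB/s over {mb:.0f} MB")
    ```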

  3. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  4. Taking Dual Enrollment Deeper: Supports for the "Forgotten Middle" in a Tenth Grade Classroom

    ERIC Educational Resources Information Center

    Leonard, Jack

    2010-01-01

    This qualitative research case study examined the supports required for 31 academically average 10th grade students to succeed on three dual enrollment college courses. Conceptually, support was a team effort, with contributions considered from administrators, faculty, parents and students. The paper documents support contributions from all four…

  5. State Summary Grade 10: Spring 1989 High School Proficiency Test, New Jersey Statewide Testing System.

    ERIC Educational Resources Information Center

    New Jersey State Dept. of Education, Trenton.

    The New Jersey High School Proficiency Test (HSPT) consists of reading, writing, and mathematics sections and must be passed as one of the requirements for a high school diploma. This report includes a series of tables summarizing grade 10 test results statewide for April 11-13, 1989. The results for 6,352 10th graders are given separately for…

  6. [Hygienic assessment of lifestyle and health status in 10th-11th-form pupils directed to have a higher medical education].

    PubMed

    Timoshenko, K T

    2008-01-01

    Ninety-seven pupils from the 10th and 11th classes, formed on a competitive basis for intensive education and for building motivation toward a future medical profession, were examined using a set of psychophysiological tests evaluating the central nervous and cardiovascular systems, psychophysiological adaptation, task performance, and personality traits. The vast majority of the examinees were found to follow the hygienic recommendations for the daily regimen, which corresponded to the principles of a healthy lifestyle. In 99% of the pupils, mental capacity was rated as fair (66%) or high (33%), as evidenced by psychophysiological testing. Fifty-six per cent of the examinees were observed to have mental adaptive disorders that might reflect age-related psychological immaturity at this final stage of schooling.

  7. Perceptions of High Achieving African American/Black 10th Graders from a Low Socioeconomic Community Regarding Health Scientists and Desired Careers

    PubMed Central

    Boekeloo, Bradley; Randolph, Suzanne; Timmons-Brown, Stephanie; Wang, Min Qi

    2014-01-01

    Measures are needed to assess youth perceptions of health science careers in order to support research aimed at facilitating youth pursuit of health science. Although the Indiana Instrument provides an established measure of perceptions regarding nursing and ideal careers, we were interested in learning how high-achieving 10th graders from relatively low socioeconomic areas who identify as Black/African American (Black) perceive health science and ideal careers. The Indiana Instrument was modified, administered to 90 youth of interest, and psychometrically analyzed. Reliable subscales were identified that may facilitate parsimonious, theoretical, and reliable study of youth decision-making regarding health science careers. Such research may help to develop and evaluate strategies for increasing the number of minority health scientists. PMID:25194058

  9. The association between problematic parental substance use and adolescent substance use in an ethnically diverse sample of 9th and 10th graders.

    PubMed

    Shorey, Ryan C; Fite, Paula J; Elkins, Sara R; Frissell, Kevin C; Tortolero, Susan R; Stuart, Gregory L; Temple, Jeff R

    2013-12-01

    Adolescents of parents who use substances are at an increased risk for substance use themselves. Both parental monitoring and closeness have been shown to mediate the relationship between parents' and their adolescents' substance use. However, we know little about whether these relationships vary across different substances used by adolescents. Using structural equation modeling, we examined these associations within a racially and ethnically diverse sample of 9th and 10th graders (N = 927). Path analyses indicated that maternal closeness partially mediated the association between maternal problematic substance use and adolescent alcohol use. Parental monitoring partially mediated the relationship between paternal problematic substance use and adolescent alcohol, cigarette, marijuana, inhalant, and illicit prescription drug use. These results were consistent across gender and race/ethnicity. These findings suggest that parental interventions designed to increase closeness and monitoring may help to reduce adolescent substance use.

  10. The Association Between Problematic Parental Substance Use and Adolescent Substance Use in an Ethnically Diverse Sample of 9th and 10th Graders

    PubMed Central

    Shorey, Ryan C.; Fite, Paula J.; Elkins, Sara R.; Frissell, Kevin C.; Tortolero, Susan R.; Stuart, Gregory L.; Temple, Jeff R.

    2013-01-01

    Adolescents of parents who use substances are at an increased risk for substance use themselves. Both parental monitoring and closeness have been shown to mediate the relationship between parents’ and their adolescents’ substance use. However, we know little about whether these relationships vary across different substances used by adolescents. Using structural equation modeling, we examined these associations within a racially and ethnically diverse sample of 9th and 10th graders (N = 927). Path analyses indicated that maternal closeness partially mediated the association between maternal problematic substance use and adolescent alcohol use. Parental monitoring partially mediated the relationship between paternal problematic substance use and adolescent alcohol, cigarette, marijuana, inhalant, and illicit prescription drug use. These results were consistent across gender and race/ethnicity. These findings suggest that parental interventions designed to increase closeness and monitoring may help to reduce adolescent substance use. PMID:24006209

  11. Potential use of biomarkers in acute kidney injury: report and summary of recommendations from the 10th Acute Dialysis Quality Initiative consensus conference.

    PubMed

    Murray, Patrick T; Mehta, Ravindra L; Shaw, Andrew; Ronco, Claudio; Endre, Zoltan; Kellum, John A; Chawla, Lakhmir S; Cruz, Dinna; Ince, Can; Okusa, Mark D

    2014-03-01

    Over the last decade there has been considerable progress in the discovery and development of biomarkers of kidney disease, and several have now been evaluated in different clinical settings. Although there is a growing literature on the performance of various biomarkers in clinical studies, there is limited information on how these biomarkers would be utilized by clinicians to manage patients with acute kidney injury (AKI). Recognizing this gap in knowledge, we convened the 10th Acute Dialysis Quality Initiative meeting to review the literature on biomarkers in AKI and their application in clinical practice. We asked an international group of experts to assess four broad areas for biomarker utilization for AKI: risk assessment, diagnosis, and staging; differential diagnosis; prognosis and management; and novel physiological techniques including imaging. This article provides a summary of the key findings and recommendations of the group, to equip clinicians to effectively use biomarkers in AKI.

  12. Grade Configuration

    ERIC Educational Resources Information Center

    Williamson, Ronald

    2012-01-01

    Where to locate the 7th and 8th grades is a perennial question. While there are many variations, three approaches are most often used: include them in a 7-12 secondary campus, maintain a separate middle grades campus, or include them as part of a K-8 program. Research on grade configuration is inconclusive at best, and there is no research…

  13. Incorporating Ninth-Grade PSAT/NMSQT® Scores into AP Potential™ Predictions for AP® European History and AP World History. Statistical Report 2014-1

    ERIC Educational Resources Information Center

    Zhang, Xiuyuan; Patel, Priyank; Ewing, Maureen

    2015-01-01

    Historically, AP Potential™ correlations and expectancy tables have been based on 10th- and 11th-grade PSAT/NMSQT® examinees and 11th- and 12th-grade AP® examinees for all subjects (Zhang, Patel, & Ewing, 2014; Ewing, Camara, & Millsap, 2006; Camara & Millsap, 1998). However, a large number of students take AP European History and AP…

  14. Closed-Loop Neuromorphic Benchmarks.

    PubMed

    Stewart, Terrence C; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of "minimal" simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
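
    A toy closed-loop example, in the spirit of the benchmarks above but far simpler than either the minimal simulations or the neuromorphic hardware they target: a PD controller on a 1-D unit-mass plant with an unknown constant external force, plus an error-driven adaptive bias (a delta rule). The gains, force, and plant are invented for the sketch.

    ```python
    # Toy closed-loop control with an error-driven adaptive term; illustrative
    # only, not the paper's benchmark tasks or learning rule.
    dt, steps = 0.001, 20000
    kp, kd, eta = 25.0, 10.0, 4.0
    target, external_force = 1.0, -2.0       # the force is unknown to the controller

    x, v, w = 0.0, 0.0, 0.0                  # position, velocity, learned bias
    for step in range(steps):
        error = x - target
        u = -kp * error - kd * v + w         # PD control plus the learned bias
        w -= eta * error * dt                # error-driven update (delta rule)
        v += (u + external_force) * dt       # unit-mass dynamics
        x += v * dt
        if step in (0, 2000, steps - 1):
            print(f"t={step * dt:5.1f}s  error={error:+.4f}  learned bias={w:+.3f}")
    ```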

  15. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  16. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  17. Relationship between Grade Span Configuration and Academic Achievement

    ERIC Educational Resources Information Center

    Dove, Mary Jane; Pearson, L. Carolyn; Hooper, Herbert

    2010-01-01

    The relationship between grade span configuration and academic achievement of 6th-grade students as measured by the Arkansas Benchmark Examination, which is the approved NCLB criterion-referenced annual assessment, was examined. The results of a one-between two-within analysis of variance for the 3-year state-wide study of 6th graders' combined…

  18. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing the operational performance of radiation detection systems. This can, however, result in large and complex scenarios which are time-consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  19. National healthcare capital project benchmarking--an owner's perspective.

    PubMed

    Kahn, Noah

    2009-01-01

    Few sectors of the economy have been left unscathed in these economic times. Healthcare construction has been less affected than the residential and nonresidential construction sectors, but, driven by re-evaluation of healthcare system capital plans, projects are now being put on hold or canceled. The industry is searching for ways to improve the value proposition for project delivery and process controls. In other industries, benchmarking component costs has led to significant, sustainable reductions in costs and cost variations. Kaiser Permanente and the Construction Industry Institute (CII), a research component of the University of Texas at Austin and an industry leader in benchmarking, have joined with several other organizations to work on a national benchmarking and metrics program to gauge the performance of healthcare facility projects. This initiative will capture cost, schedule, delivery method, change, functional, operational, and best practice metrics. This program is the only one of its kind. The CII Web-based interactive reporting system enables a company to view its own information and mine industry data. Benchmarking is a tool for continuous improvement that is capable not only of grading outcomes but also of informing all aspects of the healthcare design and construction process, ultimately helping to moderate the increasing cost of delivering healthcare.

  20. Using ACT Assessment Scores to Set Benchmarks for College Readiness. ACT Research Report Series, 2005-3

    ERIC Educational Resources Information Center

    Allen, Jeff; Sconing, Jim

    2005-01-01

    In this report, we establish benchmarks of readiness for four common first-year college courses: English Composition, College Algebra, Social Science, and Biology. Using grade data from a large sample of colleges, we modeled the probability of success in these courses as a function of ACT test scores. Success was defined as a course grade of B or…
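
    The benchmark-setting approach described, modeling the probability of a B-or-higher grade as a function of test score and reading off the lowest score at which that probability crosses a chosen threshold, can be sketched with an off-the-shelf logistic regression. The synthetic data, the 50% threshold, and the score range below are assumptions for illustration, not the report's sample or exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: ACT Math scores and whether the student earned a B
# or higher in College Algebra (True = success). Not the actual ACT sample.
rng = np.random.default_rng(0)
scores = rng.integers(12, 36, size=2000)
p_true = 1.0 / (1.0 + np.exp(-(scores - 22) / 2.5))   # assumed true relation
success = rng.random(2000) < p_true

model = LogisticRegression().fit(scores.reshape(-1, 1), success)

# Benchmark: lowest score with predicted success probability >= 0.5
grid = np.arange(12, 37).reshape(-1, 1)
probs = model.predict_proba(grid)[:, 1]
benchmark = int(grid[np.argmax(probs >= 0.5)][0])
print(f"estimated benchmark score: {benchmark}")
```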

  1. Symbolic manipulation and transport benchmarks

    SciTech Connect

    Ganapol, B.D.

    1986-01-01

    The establishment of reliable benchmark solutions is an integral part of the development of computational algorithms to solve the Boltzmann equation of particle motion. These solutions provide standards by which code developers can assess new numerical algorithms as well as ensure proper programming. A transport benchmark solution, as defined here, is the accurate numerical evaluation (3 to 5 digits) of an analytical solution to the transport equation. The basic elements of such a solution are an analytical representation free from discretization and a numerical evaluation for which an error estimate can be obtained. Symbolic manipulation software such as REDUCE, MACSYMA, and SMP can greatly aid in the generation of benchmark solutions. The benefit of these manipulators lies both in their ability to perform lengthy algebraic calculations and in their ability to generate code that can be incorporated directly into existing programs. Using two fundamental problems from particle transport theory, the author explores the advantages and limitations of applying the REDUCE software package in generating time-dependent benchmark solutions.
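
    The same workflow, deriving an analytical representation symbolically and then evaluating it numerically to a stated number of digits, can be reproduced today with a modern computer algebra system such as SymPy. The exponential integral below is only an illustrative stand-in for the transport-theory solutions treated in the paper.

```python
import sympy as sp

# Benchmark-style workflow: write down an analytical representation free of
# discretization, then evaluate it numerically to a known number of digits.
# The exponential integral E_2(x), a standard kernel in slab transport
# theory, stands in for the paper's REDUCE-generated solutions (illustrative).
t, x = sp.symbols('t x', positive=True)
E2 = sp.Integral(sp.exp(-x * t) / t**2, (t, 1, sp.oo))

closed_form = E2.doit()          # let SymPy attempt the symbolic reduction
print(closed_form)

for xv in (0.5, 1.0, 2.0):
    # 10 significant digits, suitable as a 3-5+ digit reference value
    print(xv, E2.subs(x, xv).evalf(10))
```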

  2. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  3. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  4. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  5. PyMPI Dynamic Benchmark

    2007-02-16

    Pynamic is a benchmark designed to test a system's ability to handle the dynamic linking and loading (DLL) requirements of Python-based scientific applications. The benchmark was developed to add a workload to our testing environment that represents a newly emerging class of DLL behaviors. Pynamic builds on pyMPI, an MPI extension to Python, adding C-extension dummy codes and a glue layer that facilitates linking and loading of the generated dynamic modules into the resulting pyMPI. Pynamic is configurable, enabling it to model the static properties of a specific code as described in section 5. It does not, however, model any significant computations of the target and hence is not subject to the same level of control as the target code. In fact, HPC computer vendors and tool developers will be encouraged to add it to their test suites once the code release is completed. The ability to build and run this benchmark is an effective test for validating the capability of a compiler and linker/loader, as well as the OS kernel and other runtime systems offered by HPC computer vendors. In addition, the benchmark is designed as a test case for stressing code development tools. Though Python has recently gained popularity in the HPC community, its heavy DLL operations have hindered certain HPC code development tools, notably parallel debuggers, from performing optimally.
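
    Pynamic's stress pattern, generating a large number of small modules and importing them all into a single interpreter, can be imitated at reduced scale with pure-Python modules. The sketch below only conveys the dynamic-loading idea; the real benchmark builds genuine C extensions and links them through pyMPI.

```python
import importlib
import pathlib
import sys
import tempfile
import time

# Toy imitation of the Pynamic workload: generate many small modules and
# time how long the interpreter takes to locate, load, and link them.
# (Real Pynamic generates C extensions; this pure-Python sketch only
# illustrates the dynamic-loading stress pattern.)
workdir = pathlib.Path(tempfile.mkdtemp())
n_modules = 500
for i in range(n_modules):
    (workdir / f"dummy_{i}.py").write_text(f"def entry():\n    return {i}\n")

sys.path.insert(0, str(workdir))
t0 = time.perf_counter()
modules = [importlib.import_module(f"dummy_{i}") for i in range(n_modules)]
elapsed = time.perf_counter() - t0

assert sum(m.entry() for m in modules) == sum(range(n_modules))
print(f"loaded {n_modules} modules in {elapsed:.3f} s")
```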

  6. Benchmarks for industrial energy efficiency

    SciTech Connect

    Amarnath, K.R.; Kumana, J.D.; Shah, J.V.

    1996-12-31

    What are the standards for improving energy efficiency for industries such as petroleum refining, chemicals, and glass manufacture? How can different industries in emerging markets and developing countries accelerate the pace of improvements? This paper discusses several case studies and experiences relating to this subject, emphasizing the use of energy efficiency benchmarks. Two important benchmarks are discussed. The first is based on the track record of outstanding performers in the related industry segment; the second is based on site-specific factors. Using energy use reduction targets or benchmarks, projects have been implemented in Mexico, Poland, India, Venezuela, Brazil, China, Thailand, Malaysia, the Republic of South Africa and Russia. Improvements identified through these projects include a variety of recommendations: the use of oxy-fuel and electric furnaces in the glass industry in Poland; reconfiguration of process heat recovery systems for refineries in China, Malaysia, and Russia; recycling and reuse of process wastewater in the Republic of South Africa; and a cogeneration plant in Venezuela. The paper will discuss three case studies of efforts undertaken in emerging market countries to improve energy efficiency.

  7. Real-Time Benchmark Suite

    1992-01-17

    This software provides a portable benchmark suite for real time kernels. It tests the performance of many of the system calls, as well as the interrupt response time and task response time to interrupts. These numbers provide a baseline for comparing various real-time kernels and hardware platforms.
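
    The suite itself targets real-time kernels and native code, but the basic measurement pattern it embodies, timing many invocations of a primitive and reporting latency statistics as a baseline, can be sketched briefly. The Python timing of a trivial system call below is purely illustrative and is not part of the original suite.

```python
import os
import statistics
import time

# Illustrative latency measurement: repeatedly time a cheap call and report
# baseline statistics, the same pattern a real-time kernel benchmark uses
# for system calls and interrupt/task response times (sketch only).
samples_ns = []
for _ in range(10_000):
    t0 = time.perf_counter_ns()
    os.getpid()                      # the primitive being benchmarked
    samples_ns.append(time.perf_counter_ns() - t0)

samples_ns.sort()
print(f"median: {statistics.median(samples_ns)} ns")
print(f"p99:    {samples_ns[int(0.99 * len(samples_ns))]} ns")
print(f"max:    {samples_ns[-1]} ns")
```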

  8. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency with which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  9. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  10. Benchmarking short sequence mapping tools

    PubMed Central

    2013-01-01

    Background The development of next-generation sequencing instruments has led to the generation of millions of short sequences in a single run. The process of aligning these reads to a reference genome is time consuming and demands the development of fast and accurate alignment tools. However, the current proposed tools make different compromises between the accuracy and the speed of mapping. Moreover, many important aspects are overlooked while comparing the performance of a newly developed tool to the state of the art. Therefore, there is a need for an objective evaluation method that covers all the aspects. In this work, we introduce a benchmarking suite to extensively analyze sequencing tools with respect to various aspects and provide an objective comparison. Results We applied our benchmarking tests on 9 well known mapping tools, namely, Bowtie, Bowtie2, BWA, SOAP2, MAQ, RMAP, GSNAP, Novoalign, and mrsFAST (mrFAST) using synthetic data and real RNA-Seq data. MAQ and RMAP are based on building hash tables for the reads, whereas the remaining tools are based on indexing the reference genome. The benchmarking tests reveal the strengths and weaknesses of each tool. The results show that no single tool outperforms all others in all metrics. However, Bowtie maintained the best throughput for most of the tests while BWA performed better for longer read lengths. The benchmarking tests are not restricted to the mentioned tools and can be further applied to others. Conclusion The mapping process is still a hard problem that is affected by many factors. In this work, we provided a benchmarking suite that reveals and evaluates the different factors affecting the mapping process. Still, there is no tool that outperforms all of the others in all the tests. Therefore, the end user should clearly specify his needs in order to choose the tool that provides the best results. PMID:23758764
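
    At its core, such a benchmarking suite compares each tool's reported alignments with the known origins of synthetic reads. The stripped-down accuracy check below uses made-up record structures and a placeholder position tolerance, not the suite's actual scripts or thresholds.

```python
# Sketch of a mapping-accuracy check against synthetic reads whose true
# origin is known. Record formats and tolerance are hypothetical.
def mapping_accuracy(truth, reported, tolerance=5):
    """truth/reported: dicts mapping read_id -> (chromosome, position)."""
    correct = 0
    for read_id, (chrom, pos) in truth.items():
        hit = reported.get(read_id)
        if hit and hit[0] == chrom and abs(hit[1] - pos) <= tolerance:
            correct += 1
    mapped = len(reported)
    return {
        "sensitivity": correct / len(truth),            # correctly placed reads
        "precision": correct / mapped if mapped else 0.0,
        "mapped_fraction": mapped / len(truth),
    }

truth = {"r1": ("chr1", 100), "r2": ("chr1", 500), "r3": ("chr2", 42)}
reported = {"r1": ("chr1", 102), "r2": ("chr3", 999)}
print(mapping_accuracy(truth, reported))
```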

  11. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  12. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  13. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  14. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  15. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Delivery of benchmark and benchmark-equivalent...: GENERAL PROVISIONS Benchmark Benefit and Benchmark-Equivalent Coverage § 440.385 Delivery of benchmark and benchmark-equivalent coverage through managed care entities. In implementing benchmark or...

  16. The Impact Hydrocode Benchmark and Validation Project

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Baldwin, E. C.; Cazamias, J.; Coker, R.; Collins, G. S.; Crawford, D. A.; Davison, T.; Elbeshausen, D.; Holsapple, K. A.; Housen, K. R.; Korycansky, D. G.; Wünnemann, K.

    When properly benchmarked and validated against observations, computer models offer a powerful tool for understanding the mechanics of impact crater formation. We present results from a project to benchmark and validate shock physics codes.

  17. The complexity and challenges of the International Classification of Diseases, Ninth Revision, Clinical Modification to International Classification of Diseases, 10th Revision, Clinical Modification transition in EDs.

    PubMed

    Krive, Jacob; Patel, Mahatkumar; Gehm, Lisa; Mackey, Mark; Kulstad, Erik; Li, Jianrong John; Lussier, Yves A; Boyd, Andrew D

    2015-05-01

    Beginning October 2015, the Centers for Medicare and Medicaid Services will require medical providers to use the vastly expanded International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) system. Despite wide availability of information and mapping tools for the next generation of the ICD classification system, some of the challenges associated with the transition from ICD-9-CM to ICD-10-CM are not well understood. To quantify the challenges faced by emergency physicians, we analyzed a subset of a 2010 Illinois Medicaid database of emergency department ICD-9-CM codes, seeking to determine the accuracy of existing mapping tools in order to better prepare emergency physicians for the change to the expanded ICD-10-CM system. We found that 27% of 1830 codes represented convoluted multidirectional mappings. We then analyzed the convoluted transitions and found that 8% of total visit encounters (23% of the convoluted transitions) were clinically incorrect. The ambiguity and inaccuracy of these mappings may impact the workflow associated with the translation process and affect the potential mapping between ICD codes and Current Procedural Terminology codes, which determine physician reimbursement. PMID:25863652
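
    The convoluted mappings counted here are those where ICD-9 and ICD-10 codes link back and forth in entangled clusters rather than one-to-one. With forward and backward mapping tables loaded (in practice from the CMS GEM files), such clusters can be flagged by walking the mapping graph, as in the rough sketch below; the codes shown are made up and the classification rule is a simplification, not the authors' exact taxonomy.

```python
from collections import defaultdict

# Hypothetical forward (ICD-9 -> ICD-10) mappings; real analyses would load
# the CMS GEM files. This only sketches how entangled, multidirectional
# clusters can be flagged, not the authors' exact classification method.
forward = {
    "780.2": {"R55"},                   # simple one-to-one mapping
    "786.50": {"R07.9", "R07.89"},      # entangled with 786.59 below
    "786.59": {"R07.89"},
}
backward = defaultdict(set)
for icd9, targets in forward.items():
    for icd10 in targets:
        backward[icd10].add(icd9)

def cluster(icd9_code):
    """Collect every ICD-9 and ICD-10 code reachable from a starting code."""
    nine, ten, frontier = {icd9_code}, set(), {icd9_code}
    while frontier:
        new_ten = {t for c in frontier for t in forward.get(c, ())} - ten
        ten |= new_ten
        frontier = {b for t in new_ten for b in backward[t]} - nine
        nine |= frontier
    return nine, ten

for code in forward:
    nine, ten = cluster(code)
    convoluted = len(nine) > 1 and len(ten) > 1   # simplified flagging rule
    print(code, "convoluted" if convoluted else "simple", nine, ten)
```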

  18. Imaging in the Age of Precision Medicine: Summary of the Proceedings of the 10th Biannual Symposium of the International Society for Strategic Studies in Radiology.

    PubMed

    Herold, Christian J; Lewin, Jonathan S; Wibmer, Andreas G; Thrall, James H; Krestin, Gabriel P; Dixon, Adrian K; Schoenberg, Stefan O; Geckle, Rena J; Muellner, Ada; Hricak, Hedvig

    2016-04-01

    During the past decade, with its breakthroughs in systems biology, precision medicine (PM) has emerged as a novel health-care paradigm. Challenging reductionism and broad-based approaches in medicine, PM is an approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle. It involves integrating information from multiple sources in a holistic manner to achieve a definitive diagnosis, focused treatment, and adequate response assessment. Biomedical imaging and imaging-guided interventions, which provide multiparametric morphologic and functional information and enable focused, minimally invasive treatments, are key elements in the infrastructure needed for PM. The emerging discipline of radiogenomics, which links genotypic information to phenotypic disease manifestations at imaging, should also greatly contribute to patient-tailored care. Because of the growing volume and complexity of imaging data, decision-support algorithms will be required to help physicians apply the most essential patient data for optimal management. These innovations will challenge traditional concepts of health care and business models. Reimbursement policies and quality assurance measures will have to be reconsidered and adapted. In their 10th biannual symposium, which was held in August 2013, the members of the International Society for Strategic Studies in Radiology discussed the opportunities and challenges arising for the imaging community with the transition to PM. This article summarizes the discussions and central messages of the symposium. PMID:26465058

  19. The Royal Book by Haly Abbas from the 10th century: one of the earliest illustrations of the surgical approach to skull fractures.

    PubMed

    Aciduman, Ahmet; Arda, Berna; Kahya, Esin; Belen, Deniz

    2010-12-01

    Haly Abbas was one of the pioneering physicians and surgeons of the Eastern world in the 10th century who influenced the Western world with his monumental work, The Royal Book. The book was first partly translated into Latin by Constantinus Africanus in the 11th century without citing the author's name. Haly Abbas was recognized in Europe after the full translation of The Royal Book by Stephen of Antioch in 1127. The Royal Book has been accepted as an early source of jerrah-names (surgical books) in the Eastern world. The chapters regarding cranial fractures in Haly Abbas' work include management strategies unique for his period, with essential quotations from Paul of Aegina's work Epitome. Both authors preferred free bone flap craniotomy in cranial fractures. Although Paul of Aegina, a Byzantine physician and surgeon, was a connection between ancient traditions and Islamic interpretation, Haly Abbas seems to have played a bridging role between the Roman-Byzantine tradition and the School of Salerno in Europe.

  20. Urban and rural infant-feeding practices and health in early medieval Central Europe (9th-10th Century, Czech Republic).

    PubMed

    Kaupová, Sylva; Herrscher, Estelle; Velemínský, Petr; Cabut, Sandrine; Poláček, Lumír; Brůžek, Jaroslav

    2014-12-01

    In the Central European context, the 9th and 10th centuries are well known for rapid cultural and societal changes concerning the development of the economic and political structures of states as well as the adoption of Christianity. A bioarchaeological study based on a subadult skeletal series was conducted to tackle the impact of these changes on infant and young child feeding practices and, consequently, their health in both urban and rural populations. Data on growth and frequency of nonspecific stress indicators of a subadult group aged 0-6 years were analyzed. A subsample of 41 individuals was selected for nitrogen and carbon isotope analyses, applying an intra-individual sampling strategy (bone vs. tooth). The isotopic results attest to a mosaic of food behaviors. In the urban sample, some children may have been weaned during their second year of life, while some others may have still been consuming breast milk substantially up to 4-5 years of age. By contrast, data from the rural sample show more homogeneity, with a gradual cessation of breastfeeding starting after the age of 2 years. Several factors are suggested which may have been responsible for applied weaning strategies. There is no evidence that observed weaning strategies affected the level of biological stress which the urban subadult population had to face compared with the rural subadult population. PMID:25256815

  1. Comparison of Dawn and Dusk Precipitating Electron Energy Populations Shortly After the Initial Shock for the January 10th, 1997 Magnetic Cloud

    NASA Technical Reports Server (NTRS)

    Spann, J.; Germany, G.; Swift, W.; Parks, G.; Brittnacher, M.; Elsen, R.

    1997-01-01

    The observed precipitating electron energy between 0130 UT and 0400 UT on January 10th, 1997, indicates that there is a more energetic precipitating electron population that appears in the auroral oval at 1800-2200 MLT at 0300 UT. This increase in energy occurs after the initial shock of the magnetic cloud reaches the Earth (0114 UT) and after faint but dynamic polar cap precipitation has been cleared out. The more energetic population is observed to remain rather constant in MLT through the onset of auroral activity (0330 UT) and to the end of the Polar spacecraft apogee pass. Data from the Ultraviolet Imager LBH-long and LBH-short images are used to quantify the average energy of the precipitating auroral electrons. The Wind spacecraft, located about 100 RE upstream, monitored the IMF and plasma parameters during the passing of the cloud. The effects of oblique-angle viewing are included in the analysis. Suggestions as to the source of this hot electron population will be presented.

  2. An INTEGRAL view of the high-energy sky (the first 10 years) - 9th INTEGRAL Workshop and celebration of the 10th anniversary of the launch

    NASA Astrophysics Data System (ADS)

    The 9th INTEGRAL workshop "An INTEGRAL view of the high-energy sky (the first 10 years)" took place from 15 to 19 October 2012 in Paris, Bibliothèque Nationale de France (Bibliothèque François Mitterrand). The workshop was sponsored by ESA, CNES and other French and European Institutions. During this week, and in particular on 17 October 2012, we celebrated the 10th anniversary of the launch of the INTEGRAL mission. The main goal of this workshop was to present and to discuss (via invited and contributed talks and posters) latest results obtained in the field of high-energy astrophysics using the International Gamma-Ray Astrophysics Laboratory INTEGRAL, as well as results from observations from other ground- and space-based high-energy observatories and from associated multi-wavelength campaigns. Contributions to the workshop covered the following scientific topics: - X-ray binaries (IGR sources, black holes, neutron stars, white dwarfs) - Isolated neutron stars (gamma-ray pulsars, magnetars) - Nucleo-synthesis (SNe, Novae, SNRs, ISM) and gamma-ray lines (511 keV) - Galactic diffuse continuum emission (including Galactic Ridge) - Massive black holes in AGNs, elliptical galaxies, nucleus of the Galaxy - Sky surveys, source populations and unidentified gamma-ray sources - Cosmic background radiation - Gamma-ray bursts - Coordinated observations with other ground- and space-based observatories - Science data processing and analysis (posters only) - Future instruments and missions (posters only)

  3. Evaluation of elemental status of ancient human bone samples from Northeastern Hungary dated to the 10th century AD by XRF

    NASA Astrophysics Data System (ADS)

    János, I.; Szathmáry, L.; Nádas, E.; Béni, A.; Dinya, Z.; Máthé, E.

    2011-11-01

    The present study is a multielemental analysis of bone samples belonging to skeletal individuals originating from two contemporaneous (10th century AD) cemeteries (the Tiszavasvári Nagy-Gyepáros and Nagycserkesz-Nádasibokor sites) in Northeastern Hungary, using the XRF analytical technique. Emitted X-rays were detected in order to determine the elemental composition of the bones and to assess the possible influence of the burial environment on the elemental content of the human skeletal remains. Lumbar vertebral bodies were used for the analysis. Applying the ED(P)XRF technique, concentrations of the following elements were determined: P, Ca, K, Na, Mg, Al, Cl, Mn, Fe, Zn, Br and Sr. The results indicated post mortem mineral exchange between the burial environment (soil) and the bones (e.g. the enhanced levels of Fe and Mn) and pointed to diagenetic alteration processes during burial. However, other elements such as Zn, Sr and Br appear to have been accumulated during life. On the basis of the statistical analysis, clear separation could not be observed between the two excavation sites in their bone elemental concentrations, which denotes similar diagenetic influences and environmental conditions. The enhanced levels of Sr might be connected with past dietary habits, especially the consumption of plant foods.

  4. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession. PMID:26828540

  5. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  6. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  7. Benchmark Analysis of Career and Technical Education in Lenawee County. Final Report.

    ERIC Educational Resources Information Center

    Hollenbeck, Kevin

    The career and technical education (CTE) provided in grades K-12 in the county's vocational-technical center and 12 local public school districts of Lenawee County, Michigan, was benchmarked with respect to its attention to career development. Data were collected from the following sources: structured interviews with a number of key respondents…

  8. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076
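
    A functional-performance benchmark in Nengo is simply an ordinary Nengo model plus a metric computed from probed data, so the same script can be pointed at different backends by swapping the Simulator class. The sketch below assumes the standard Nengo frontend API; the model and the RMSE metric are chosen purely for illustration and are not one of the four benchmark models from the paper.

```python
import numpy as np
import nengo

# Minimal functional benchmark: how accurately does a spiking ensemble
# compute x**2? The same model can run on another backend by swapping the
# Simulator class, which is the idea behind benchmarking with Nengo's tests.
# Model and metric are illustrative, not one of the paper's four benchmarks.
with nengo.Network(seed=1) as model:
    stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    ens = nengo.Ensemble(n_neurons=100, dimensions=1)
    out = nengo.Node(size_in=1)
    nengo.Connection(stim, ens)
    nengo.Connection(ens, out, function=lambda x: x ** 2)
    p_in = nengo.Probe(stim)
    p_out = nengo.Probe(out, synapse=0.01)

with nengo.Simulator(model) as sim:   # swap in another backend's Simulator here
    sim.run(2.0)

rmse = np.sqrt(np.mean((sim.data[p_out] - sim.data[p_in] ** 2) ** 2))
print(f"RMSE of decoded x**2: {rmse:.3f}")
```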

  9. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  10. No free lunch and benchmarks.

    PubMed

    Duéñez-Guzmán, Edgar A; Vose, Michael D

    2013-01-01

    We extend previous results concerning black box search algorithms, presenting new theoretical tools related to no free lunch (NFL) where functions are restricted to some benchmark (that need not be permutation closed), algorithms are restricted to some collection (that need not be permutation closed) or limited to some number of steps, or the performance measure is given. Minimax distinctions are considered from a geometric perspective, and basic results on performance matching are also presented.

  11. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
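
    In practice such a metric is typically an energy use intensity: annual consumption from the utility bills normalized by a driver such as floor area, computed per store and compared across the portfolio. The column names, numbers, and the kWh-per-square-foot normalization in the sketch below are assumptions, not the report's prescribed method.

```python
# Sketch: build a simple energy-use-intensity (EUI) benchmark from a chain's
# own utility data. Field names, numbers, and the kWh-per-square-foot
# normalization are illustrative assumptions, not the report's metric.
stores = [
    {"store": "A", "annual_kwh": 410_000, "floor_area_ft2": 2_400},
    {"store": "B", "annual_kwh": 515_000, "floor_area_ft2": 2_600},
    {"store": "C", "annual_kwh": 690_000, "floor_area_ft2": 2_500},
]

for s in stores:
    s["eui"] = s["annual_kwh"] / s["floor_area_ft2"]   # kWh per ft2 per year

euis = sorted(s["eui"] for s in stores)
median_eui = euis[len(euis) // 2]

for s in stores:
    flag = "review" if s["eui"] > 1.2 * median_eui else "ok"
    print(f'{s["store"]}: {s["eui"]:.0f} kWh/ft2-yr ({flag})')
```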

  12. MPI Multicore Torus Communication Benchmark

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings; the latter measures the aggregate bandwidths that can be achieved with varying node mappings.
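
    The measurement underneath a benchmark of this kind is a timed exchange of large buffers with each neighboring node, with aggregate bandwidth taken as total bytes moved over elapsed time. The mpi4py sketch below uses ring-style placeholder neighbors rather than a real torus mapping and is only an illustration of the pattern, not the TorusTest code.

```python
import numpy as np
from mpi4py import MPI

# Bare-bones aggregate-bandwidth measurement: exchange a large buffer with
# each "neighbor" and divide total bytes by elapsed time. Real torus
# benchmarks derive the six neighbor ranks from the machine's topology;
# the ring-style neighbors below are placeholders for illustration.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
neighbors = [(rank + d) % size for d in (-1, 1) if size > 1]

nbytes = 4 * 1024 * 1024
sendbuf = np.ones(nbytes, dtype=np.uint8)
recvbuf = np.empty(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for nbr in neighbors:
    comm.Sendrecv(sendbuf, dest=nbr, recvbuf=recvbuf, source=nbr)
elapsed = MPI.Wtime() - t0

moved = 2 * nbytes * len(neighbors)            # bytes sent + received
total = comm.reduce(moved / max(elapsed, 1e-9), op=MPI.SUM, root=0)
if rank == 0:
    print(f"aggregate bandwidth: {total / 1e9:.2f} GB/s (toy measurement)")
```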

  13. RISKIND verification and benchmark comparisons

    SciTech Connect

    Biwer, B.M.; Arnish, J.J.; Chen, S.Y.; Kamboj, S.

    1997-08-01

    This report presents verification calculations and benchmark comparisons for RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the population from exposures associated with the transportation of spent nuclear fuel and other radioactive materials. Spreadsheet calculations were performed to verify the proper operation of the major options and calculational steps in RISKIND. The program is unique in that it combines a variety of well-established models into a comprehensive treatment for assessing risks from the transportation of radioactive materials. Benchmark comparisons with other validated codes that incorporate similar models were also performed. For instance, the external gamma and neutron dose rate curves for a shipping package estimated by RISKIND were compared with those estimated using the RADTRAN 4 code and NUREG-0170 methodology. Atmospheric dispersion of released material and the resulting dose estimates were compared with results from the GENII and CAP88-PC codes. Verification results have shown the program to be performing its intended function correctly. The benchmark results indicate that the predictions made by RISKIND are within acceptable limits when compared with predictions from similar existing models.

  14. HS06 Benchmark for an ARM Server

    NASA Astrophysics Data System (ADS)

    Kluth, Stefan

    2014-06-01

    We benchmarked an ARM cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system used Ubuntu 12.04 as operating system and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence from the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.

  15. A Study Comparing the Differences in the Levels of Achievement of Tenth Grade Students in One and Two Parent Homes.

    ERIC Educational Resources Information Center

    Kraig, Glen M.

    This study sought to determine if significant differences exist between the degree of academic achievement of 10th grade students who currently reside in one-parent/guardian homes as compared to those who reside in two-parent/guardian homes when students are grouped by sex, total family income, and ethnicity. Academic success was determined by the…

  16. Fetal Sex Determination using Non-Invasive Method of Cell-free Fetal DNA in Maternal Plasma of Pregnant Women During 6th-10th Weeks of Gestation.

    PubMed

    Zargari, Maryam; Sadeghi, Mohammad Reza; Shahhosseiny, Mohammad Hassan; Kamali, Koroush; Saliminejad, Kyomars; Esmaeilzadeh, Ali; Khorshid, Hamid Reza Khorram

    2011-10-01

    In recent years, the identification of fetal cells in the maternal blood circulation has brought about a revolution in non-invasive prenatal diagnosis. The low number of fetal cells in maternal blood and their long-term survival after pregnancy limited the use of fetal cells in diagnostic and clinical applications. With the discovery of cell-free fetal DNA (cffDNA) in the plasma of pregnant women, access to the genetic material of the fetus became possible, allowing early determination of fetal gender in pregnancies at risk of X-linked genetic conditions instead of applying invasive methods. Therefore, in this study, the possibility of detecting Y-chromosome sequences in the plasma of pregnant women was evaluated to identify the gender of the fetus. Peripheral blood samples were obtained from 80 pregnant women at 6th to 10th weeks of gestation and the fetal DNA was extracted from the plasma. Nested PCR was applied to detect sequences of the single-copy SRY gene and the multi-copy DYS14 and DAZ genes on the Y chromosome of male fetuses. Finally, all results were compared with the actual gender of the newborns. In 40 out of 42 baby boys born, the relevant gene sequences were identified, giving a sensitivity of 95.2%. Non-invasive early determination of fetal gender using cffDNA could be employed as a pre-test in the shortest possible time and with high reliability, avoiding invasive methods in cases where a fetus is at risk of genetic diseases.

  17. IBC’s 23rd Annual Antibody Engineering, 10th Annual Antibody Therapeutics International Conferences and the 2012 Annual Meeting of The Antibody Society

    PubMed Central

    Klöhn, Peter-Christian; Wuellner, Ulrich; Zizlsperger, Nora; Zhou, Yu; Tavares, Daniel; Berger, Sven; Zettlitz, Kirstin A.; Proetzel, Gabriele; Yong, May; Begent, Richard H.J.; Reichert, Janice M

    2013-01-01

    The 23rd Annual Antibody Engineering, 10th Annual Antibody Therapeutics international conferences, and the 2012 Annual Meeting of The Antibody Society, organized by IBC Life Sciences with contributions from The Antibody Society and two Scientific Advisory Boards, were held December 3–6, 2012 in San Diego, CA. The meeting drew over 800 participants who attended sessions on a wide variety of topics relevant to antibody research and development. As a prelude to the main events, a pre-conference workshop held on December 2, 2012 focused on intellectual property issues that impact antibody engineering. The Antibody Engineering Conference was composed of six sessions held December 3–5, 2012: (1) From Receptor Biology to Therapy; (2) Antibodies in a Complex Environment; (3) Antibody Targeted CNS Therapy: Beyond the Blood Brain Barrier; (4) Deep Sequencing in B Cell Biology and Antibody Libraries; (5) Systems Medicine in the Development of Antibody Therapies/Systematic Validation of Novel Antibody Targets; and (6) Antibody Activity and Animal Models. The Antibody Therapeutics conference comprised four sessions held December 4–5, 2012: (1) Clinical and Preclinical Updates of Antibody-Drug Conjugates; (2) Multifunctional Antibodies and Antibody Combinations: Clinical Focus; (3) Development Status of Immunomodulatory Therapeutic Antibodies; and (4) Modulating the Half-Life of Antibody Therapeutics. The Antibody Society’s special session on applications for recording and sharing data based on GIATE was held on December 5, 2012, and the conferences concluded with two combined sessions on December 5–6, 2012: (1) Development Status of Early Stage Therapeutic Antibodies; and (2) Immunomodulatory Antibodies for Cancer Therapy. PMID:23575266

  18. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  19. The Internet Time Lag: Anticipating the Long-Term Consequences of the Information Revolution. A Report of the Annual Aspen Institute Roundtable on Information Technology (10th, Aspen, Colorado, August 2-5, 2001).

    ERIC Educational Resources Information Center

    Schwartz, Evan I.

    This is a report of the 10th annual Aspen Institute Roundtable on Information Technology (Aspen, Colorado, August 2-5, 2001). Participants were also polled after the events of September 11, and these comments have been integrated into the report. The mission of this report is to take a wide-ranging look at the trends that are defining the next new…

  20. Standing adult human phantoms based on 10th, 50th and 90th mass and height percentiles of male and female Caucasian populations

    NASA Astrophysics Data System (ADS)

    Cassola, V. F.; Milian, F. M.; Kramer, R.; de Oliveira Lira, C. A. B.; Khoury, H. J.

    2011-07-01

    Computational anthropomorphic human phantoms are useful tools developed for the calculation of absorbed or equivalent dose to radiosensitive organs and tissues of the human body. The problem is, however, that, strictly speaking, the results can be applied only to a person who has the same anatomy as the phantom, while for a person with different body mass and/or standing height the data could be wrong. In order to improve this situation for many areas in radiological protection, this study developed 18 anthropometric standing adult human phantoms, nine models per gender, as a function of the 10th, 50th and 90th mass and height percentiles of Caucasian populations. The anthropometric target parameters for body mass, standing height and other body measures were extracted from PeopleSize, a well-known software package used in the area of ergonomics. The phantoms were developed based on the assumption of a constant body-mass index for a given mass percentile and for different heights. For a given height, increase or decrease of body mass was considered to reflect mainly the change of subcutaneous adipose tissue mass, i.e. that organ masses were not changed. Organ mass scaling as a function of height was based on information extracted from autopsy data. The methods used here were compared with those used in other studies, anatomically as well as dosimetrically. For external exposure, the results show that equivalent dose decreases with increasing body mass for organs and tissues located below the subcutaneous adipose tissue layer, such as liver, colon, stomach, etc, while for organs located at the surface, such as breasts, testes and skin, the equivalent dose increases or remains constant with increasing body mass due to weak attenuation and more scatter radiation caused by the increasing adipose tissue mass. Changes of standing height have little influence on the equivalent dose to organs and tissues from external exposure. Specific absorbed fractions (SAFs) have also
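
    The scaling rule described, holding the body-mass index constant for a given mass percentile and attributing mass differences at fixed height mainly to subcutaneous adipose tissue, reduces to simple arithmetic on mass, height, and adipose mass. All numbers in the toy calculation below are invented, and the organ-mass scaling with height is omitted.

```python
# Toy version of the phantom scaling arithmetic: hold BMI fixed for a given
# mass percentile, and attribute mass differences at constant height mainly
# to subcutaneous adipose tissue. All numbers are invented for illustration.
def scaled_mass(bmi, height_m):
    """Body mass (kg) implied by a constant BMI at a new standing height."""
    return bmi * height_m ** 2

reference = {"mass_kg": 73.0, "height_m": 1.76}      # hypothetical 50th percentile
bmi = reference["mass_kg"] / reference["height_m"] ** 2

for height in (1.68, 1.76, 1.84):                    # e.g. 10th/50th/90th heights
    mass = scaled_mass(bmi, height)
    print(f"height {height:.2f} m -> mass {mass:.1f} kg at BMI {bmi:.1f}")

# At fixed height, a heavier percentile mostly adds adipose tissue:
heavier_mass = 88.0                                  # hypothetical 90th percentile
adipose_delta = heavier_mass - reference["mass_kg"]
print(f"extra subcutaneous adipose tissue: {adipose_delta:.1f} kg")
```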

  1. New archeointensity data from French Early Medieval pottery production (6th-10th century AD). Tracing 1500 years of geomagnetic field intensity variations in Western Europe

    NASA Astrophysics Data System (ADS)

    Genevey, Agnès; Gallet, Yves; Jesset, Sébastien; Thébault, Erwan; Bouillon, Jérôme; Lefèvre, Annie; Le Goff, Maxime

    2016-08-01

    Nineteen new archeointensity results were obtained from the analysis of groups of French pottery fragments dated to the Early Middle Ages (6th to 10th centuries AD). They are from several medieval ceramic production sites, excavated mainly in Saran (Central France), and their precise dating was established based on typo-chronological characteristics. Intensity measurements were performed using the Triaxe protocol, which takes into account the effects on the intensity determinations of both thermoremanent magnetization anisotropy and cooling rate. Intensity analyses were also carried out on modern pottery produced at Saran during an experimental firing. The results show very good agreement with the geomagnetic field intensity directly measured inside and around the kiln, thus reasserting the reliability of the Triaxe protocol and the relevance of the quality criteria used. They further demonstrate the potential of the Saran pottery production for archeomagnetism. The new archeointensity results allow a precise and coherent description of the geomagnetic field intensity variations in Western Europe during the Early Medieval period, which was until now poorly documented. They show a significant increase in intensity during the 6th century AD, high intensity values from the 7th to the 9th century, with a minimum of small amplitude at the transition between the 7th and the 8th centuries and finally an important decrease until the beginning of the 11th century. Together with published intensity results available within a radius of 700 km around Paris, the new data were used to compute a master curve of the Western European geomagnetic intensity variations over the past 1500 years. This curve clearly exhibits five intensity maxima: at the transition between the 6th and 7th century AD, at the middle of the 9th century, during the 12th century, in the second part of the 14th century and at the very beginning of the 17th century AD. Some of these peaks are smoothed, or

  2. IBC’s 23rd Antibody Engineering and 10th Antibody Therapeutics Conferences and the Annual Meeting of The Antibody Society

    PubMed Central

    Marquardt, John; Begent, Richard H.J.; Chester, Kerry; Huston, James S.; Bradbury, Andrew; Scott, Jamie K.; Thorpe, Philip E.; Veldman, Trudi; Reichert, Janice M.; Weiner, Louis M.

    2012-01-01

    Now in its 23rd and 10th years, respectively, the Antibody Engineering and Antibody Therapeutics conferences are the Annual Meeting of The Antibody Society. The scientific program covers the full spectrum of challenges in antibody research and development from basic science through clinical development. In this preview of the conferences, the chairs provide their thoughts on sessions that will allow participants to track emerging trends in (1) the development of next-generation immunomodulatory antibodies; (2) the complexity of the environment in which antibodies must function; (3) antibody-targeted central nervous system (CNS) therapies that cross the blood brain barrier; (4) the extension of antibody half-life for improved efficacy and pharmacokinetics (PK)/pharmacodynamics (PD); and (5) the application of next generation DNA sequencing to accelerate antibody research. A pre-conference workshop on Sunday, December 2, 2012 will update participants on recent intellectual property (IP) law changes that affect antibody research, including biosimilar legislation, the America Invents Act and recent court cases. Keynote presentations will be given by Andreas Plückthun (University of Zürich), who will speak on engineering receptor ligands with powerful cellular responses; Gregory Friberg (Amgen Inc.), who will provide clinical updates of bispecific antibodies; James D. Marks (University of California, San Francisco), who will discuss a systems approach to generating tumor targeting antibodies; Dario Neri (Swiss Federal Institute of Technology Zürich), who will speak about delivering immune modulators at the sites of disease; William M. Pardridge (University of California, Los Angeles), who will discuss delivery across the blood-brain barrier; and Peter Senter (Seattle Genetics, Inc.), who will present his vision for the future of antibody-drug conjugates. For more information on these meetings or to register to attend, please visit www

  3. IBC's 23rd Antibody Engineering and 10th Antibody Therapeutics Conferences and the Annual Meeting of The Antibody Society: December 2-6, 2012, San Diego, CA.

    PubMed

    Marquardt, John; Begent, Richard H J; Chester, Kerry; Huston, James S; Bradbury, Andrew; Scott, Jamie K; Thorpe, Philip E; Veldman, Trudi; Reichert, Janice M; Weiner, Louis M

    2012-01-01

    Now in its 23rd and 10th years, respectively, the Antibody Engineering and Antibody Therapeutics conferences are the Annual Meeting of The Antibody Society. The scientific program covers the full spectrum of challenges in antibody research and development from basic science through clinical development. In this preview of the conferences, the chairs provide their thoughts on sessions that will allow participants to track emerging trends in (1) the development of next-generation immunomodulatory antibodies; (2) the complexity of the environment in which antibodies must function; (3) antibody-targeted central nervous system (CNS) therapies that cross the blood brain barrier; (4) the extension of antibody half-life for improved efficacy and pharmacokinetics (PK)/pharmacodynamics (PD); and (5) the application of next generation DNA sequencing to accelerate antibody research. A pre-conference workshop on Sunday, December 2, 2012 will update participants on recent intellectual property (IP) law changes that affect antibody research, including biosimilar legislation, the America Invents Act and recent court cases. Keynote presentations will be given by Andreas Plückthun (University of Zürich), who will speak on engineering receptor ligands with powerful cellular responses; Gregory Friberg (Amgen Inc.), who will provide clinical updates of bispecific antibodies; James D. Marks (University of California, San Francisco), who will discuss a systems approach to generating tumor targeting antibodies; Dario Neri (Swiss Federal Institute of Technology Zürich), who will speak about delivering immune modulators at the sites of disease; William M. Pardridge (University of California, Los Angeles), who will discuss delivery across the blood-brain barrier; and Peter Senter (Seattle Genetics, Inc.), who will present his vision for the future of antibody-drug conjugates. For more information on these meetings or to register to attend, please visit www

  4. Validity of the International Classification of Diseases 10th revision code for hyperkalaemia in elderly patients at presentation to an emergency department and at hospital admission

    PubMed Central

    Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674
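
    The four validity measures reported are direct functions of the 2x2 table of code status against the serum-potassium reference standard. The counts in the short sketch below are invented for illustration and do not reproduce the study's data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 validation table of
# ICD-10 code status versus the laboratory reference standard.
# Counts below are invented for illustration, not the study's data.
tp, fp, fn, tn = 140, 28, 850, 63_000

sensitivity = tp / (tp + fn)     # code-positive among true hyperkalaemia
specificity = tn / (tn + fp)     # code-negative among no hyperkalaemia
ppv = tp / (tp + fp)             # true hyperkalaemia among code positives
npv = tn / (tn + fn)             # no hyperkalaemia among code negatives

for name, value in [("sensitivity", sensitivity), ("specificity", specificity),
                    ("PPV", ppv), ("NPV", npv)]:
    print(f"{name}: {value:.1%}")
```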

  5. Fault modeling of the Mw 7.0 shallow intra-slab strike-slip earthquake occurred on 2011 July 10th using near-field tsunami record

    NASA Astrophysics Data System (ADS)

    Kubota, T.; Hino, R.; Iinuma, T.

    2014-12-01

    On 2011 July 10th, an earthquake of Mw 7.0 occurred in the shallow part of the Pacific slab beneath the large coseismic slip area of the 2011 Tohoku-Oki earthquake. This event has a strike-slip focal mechanism with steeply dipping nodal planes. Near the epicenter, aftershocks determined by an OBS deployment formed two clear orthogonal lineaments with strikes identical to those of the focal mechanism solution, suggesting that the aftershock activity occurred along two conjugate faults. The strikes of these faults were almost parallel to the direction of the magnetic lineations and the fracture zones of the incoming Pacific plate, suggesting that the earthquake was a re-rupture of pre-existing fractures under the extensional stress induced by the Tohoku-Oki earthquake. It is of great interest to know the down-dip size of the source fault, not only to understand the mechanical nature of the slab but also to constrain the post-2011 stress state. Coseismic seafloor deformation and the tsunami associated with the earthquake were observed by ocean bottom pressure gauges deployed within ~ 100 km of the epicenter. We estimated a finite fault model of this event to discuss the rupture properties of the earthquake. We sought a source model assuming a rectangular fault with uniform slip, taking the strike of the fault to be that of one of the two nodal planes of the focal mechanism. The two preferable source models corresponding to the two nodal planes explained the observed data equally well. For either model, the depth of the downdip end exceeds 40 km below the plate boundary, meaning the fault widths (down-dip size) were much larger than the depth extent of the aftershock distribution (~ 15 km). We sought another source model assuming simultaneous rupture of the conjugate faults and found that the width of this fault model was more consistent with the aftershock distribution than the single-rupture-plane models. The 2011 intraslab strike-slip earthquake might be a compound rupture of the

  6. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we did see potential solutions to some of our "top 10" issues, and (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We also received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  7. Benchmarking for the competitive marketplace.

    PubMed

    Clarke, R W; Sucher, T O

    1999-07-01

    One would get little argument these days regarding the importance of performance measurement in the health care industry. The traditional approach has been the straightforward use of measurable units such as financial comparisons and clinical indicators (e.g., length of stay). Also we in the health care industry have traditionally benchmarked our performance and strategies against those most like ourselves. Today's competitive market demands a more customer-focused set of performance measures that go beyond traditional approaches such as customer service. The most important task in today's environment is to study the customers' emerging priorities and adjust our business to meet those priorities. PMID:11184882

  8. Benchmarking thermal neutron scattering in graphite

    NASA Astrophysics Data System (ADS)

    Zhou, Tong

    A Slowing-Down-Time experiment was designed and performed at the Oak Ridge National Laboratory (ORNL) by using the Oak Ridge Electron Linear Accelerator (ORELA) as a neutron source to study neutron thermalization in graphite at room and higher temperatures. The MCNP5 code was utilized to simulate the detector responses and help optimize the experimental design, including the size of the graphite assembly, furnace, shielding system and detector position. To facilitate such analysis, MCNP5 version 1.30 was modified to enable perturbation calculations using point-detector-type tallies. By using the modified MCNP5 code, the sensitivity of the experimental models to the graphite total thermal neutron cross sections was studied to optimize the design of the experiment. Measurements of the slowing-down-time spectrum in graphite were performed at room temperature for a 70x70x70 cm graphite pile by using a Li-6 scintillator and a U-235 fission counter at different locations. The measurements were directly compared to Monte Carlo simulations that use different graphite thermal neutron scattering cross-section libraries. Simulations based on the ENDF/B-VI graphite library were found to have a 30%-40% disagreement with the measurements. In addition to the graphite SDT experiment, which provided data in the energy region above the graphite Bragg-cutoff energy, transmission experiments were performed for different types of graphite samples using the NIST 8.9 Å beam (located at NG-6) to investigate the energy region below the Bragg-cutoff energy. Measurements confirmed that reactor-grade graphite, which is a two-phase material (crystalline graphite and binder (amorphous-like) carbon), has a different thermal neutron scattering cross section from pyrolytic graphite (crystalline graphite). The experiments presented in this work complement each other and provide an experimental data set which can be used to benchmark graphite thermal neutron scattering cross section libraries that

  9. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large-scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large-scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large-scale system software and tools using Pynamic.
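
    Pynamic itself generates and links its own synthetic shared libraries; purely as a hypothetical illustration of the kind of dynamic-load cost it emulates, the sketch below times first imports of a handful of extension-backed standard-library modules (module list and approach are assumptions for illustration, not part of Pynamic).

    ```python
    import importlib
    import time

    # Hypothetical stand-ins for the many DLLs a Pynamic-style run exercises;
    # the real benchmark generates its own synthetic shared libraries.
    EXTENSION_MODULES = ["math", "cmath", "zlib", "bz2", "lzma",
                         "hashlib", "ssl", "sqlite3", "decimal", "json"]

    def time_dynamic_imports(names):
        """Time how long each module (and its underlying shared library) takes to import.
        Already-cached modules import almost instantly, so run in a fresh interpreter."""
        timings = {}
        for name in names:
            start = time.perf_counter()
            importlib.import_module(name)
            timings[name] = time.perf_counter() - start
        return timings

    if __name__ == "__main__":
        for name, seconds in time_dynamic_imports(EXTENSION_MODULES).items():
            print(f"{name:10s} {seconds * 1e3:8.3f} ms")
    ```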

  10. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  11. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  12. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
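
    A minimal sketch of the fixed-interval, scalable-task idea described in this patent record, assuming the scalable task set is a progressively refined numerical integration (the task choice and one-second budget are illustrative assumptions, not the patent's specification):

    ```python
    import time

    def fixed_interval_benchmark(interval_seconds=1.0):
        """Refine a midpoint-rule integral of f(x) = 4/(1+x^2) on [0, 1] (-> pi)
        until the benchmarking interval expires; the finest resolution completed
        is the degree of progress used as the benchmark rating."""
        deadline = time.perf_counter() + interval_seconds
        subintervals = 1
        estimate = 0.0
        while time.perf_counter() < deadline:
            h = 1.0 / subintervals
            estimate = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(subintervals)) * h
            subintervals *= 2  # ever-increasing resolution
        return subintervals // 2, estimate  # last completed resolution and its result

    if __name__ == "__main__":
        resolution, value = fixed_interval_benchmark()
        print(f"completed {resolution} subintervals in the interval; integral ~ {value:.6f}")
    ```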

  13. Benchmarks for acute stroke care delivery

    PubMed Central

    Hall, Ruth E.; Khan, Ferhana; Bayley, Mark T.; Asllani, Eriola; Lindsay, Patrice; Hill, Michael D.; O'Callaghan, Christina; Silver, Frank L.; Kapral, Moira K.

    2013-01-01

    Objective Despite widespread interest in many jurisdictions in monitoring and improving the quality of stroke care delivery, benchmarks for most stroke performance indicators have not been established. The objective of this study was to develop data-derived benchmarks for acute stroke quality indicators. Design Nine key acute stroke quality indicators were selected from the Canadian Stroke Best Practice Performance Measures Manual. Participants A population-based retrospective sample of patients discharged from 142 hospitals in Ontario, Canada, between 1 April 2008 and 31 March 2009 (N = 3191) was used to calculate hospital rates of performance and benchmarks. Intervention The Achievable Benchmark of Care (ABC™) methodology was used to create benchmarks based on the performance of the upper 15% of patients in the top-performing hospitals. Main Outcome Measures Benchmarks were calculated for rates of neuroimaging, carotid imaging, stroke unit admission, dysphagia screening and administration of stroke-related medications. Results The following benchmarks were derived: neuroimaging within 24 h, 98%; admission to a stroke unit, 77%; thrombolysis among patients arriving within 2.5 h, 59%; carotid imaging, 93%; dysphagia screening, 88%; antithrombotic therapy, 98%; anticoagulation for atrial fibrillation, 94%; antihypertensive therapy, 92%; and lipid-lowering therapy, 77%. ABC™ acute stroke care benchmarks achieve or exceed the consensus-based targets required by Accreditation Canada, with the exception of dysphagia screening. Conclusions Benchmarks for nine hospital-based acute stroke care quality indicators have been established. These can be used in the development of standards for quality improvement initiatives. PMID:24141011
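
    A much-simplified sketch of an ABC-style benchmark calculation as described above, assuming hospital-level numerator/denominator pairs and omitting the small-denominator adjustment of the full ABC™ method (the hospital figures are hypothetical):

    ```python
    def abc_benchmark(hospitals, patient_fraction=0.15):
        """Pooled rate among top-performing hospitals that together account for
        at least the given fraction of patients (simplified ABC-style estimate).

        hospitals: list of (numerator, denominator) pairs, where the numerator is
        the number of patients receiving the care process and the denominator is
        the number of eligible patients at that hospital."""
        total_patients = sum(d for _, d in hospitals)
        ranked = sorted(hospitals, key=lambda nd: nd[0] / nd[1], reverse=True)
        num = den = 0
        for numerator, denominator in ranked:
            num += numerator
            den += denominator
            if den >= patient_fraction * total_patients:
                break
        return num / den

    # Hypothetical performance data for five hospitals.
    hospitals = [(95, 100), (180, 200), (40, 50), (300, 400), (120, 200)]
    print(f"benchmark rate: {abc_benchmark(hospitals):.1%}")
    ```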

  14. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-12-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for the disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices.

  15. The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.

    ERIC Educational Resources Information Center

    2002

    This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)

  16. Risk factors for bulk milk somatic cell counts and total bacterial counts in smallholder dairy farms in the 10th region of Chile.

    PubMed

    van Schaik, G; Green, L E; Guzmán, D; Esparza, H; Tadich, N

    2005-01-01

    We investigated the principal management factors that influenced bulk milk somatic cell count (BMSCC) and total bacterial count (TBC) of smallholder dairy farms in the 10th region of Chile. One hundred and fifty smallholder milk producers were selected randomly from 42 milk collection centres (MCCs). In April and May of 2002, all farms were visited and a detailed interview questionnaire on dairy-cow management related to milk quality was conducted. In addition, the BMSCC and TBC results from the previous 2 months' fortnightly tests were obtained from the MCCs. The mean BMSCC and TBC were used as the dependent variables in the analyses and were normalised by a natural-logarithm transformation (LN). All independent management variables were categorised into binary outcomes and present (=1) was compared with absent (=0). Biserial correlations were calculated between the LNBMSCC or LNTBC and the management factors of the smallholder farms. Management factors that were correlated with these outcomes were offered to multivariable models, from which non-significant (P > 0.05) factors were removed. A random MCC effect was included in the models to investigate the importance of clustering of herds within MCC. In the null model for mean LNTBC, the random effect of MCCs was highly significant. It was explained by: milk collected once a day or less compared with collection twice a day, not cleaning the bucket after milking mastitic cows versus cleaning the bucket, and cooling milk in a vat of water versus not cooling milk or using ice or a bulk tank to cool milk. Other factors that increased the LNTBC were a waiting yard with a soil or gravel floor versus concrete, use of plastic buckets for milking instead of metal, not feeding California mastitis test (CMT)-positive milk to calves, and cows of dual-purpose breed. The final model explained 35% of the variance. The model predicted that a herd that complied with all the management practices had a mean

  17. Benchmarking: A tool to enhance performance

    SciTech Connect

    Munro, J.F.; Kristal, J.; Thompson, G.; Johnson, T.

    1996-12-31

    The Office of Environmental Management is bringing Headquarters and the Field together to implement process improvements throughout the Complex through a systematic process of organizational learning called benchmarking. Simply stated, benchmarking is a process of continuously comparing and measuring practices, processes, or methodologies with those of other private and public organizations. The EM benchmarking program, which began as the result of a recommendation from Xerox Corporation, is building trust and removing barriers to performance enhancement across the DOE organization. The EM benchmarking program is designed to be field-centered, with Headquarters providing facilitatory and integrative functions on an ``as needed`` basis. One of the main goals of the program is to assist Field Offices and their associated M&O/M&I contractors in developing the capabilities to do benchmarking for themselves. In this regard, a central precept is that in order to realize tangible performance benefits, program managers and staff -- the ones closest to the work -- must take ownership of the studies. This avoids the ``check the box`` mentality associated with some third-party studies. This workshop will provide participants with a basic understanding of why the EM benchmarking team was developed and of the nature and scope of its mission. Participants will also begin to understand the types of study levels and the particular methodology the EM benchmarking team is using to conduct studies. The EM benchmarking team will also encourage discussion on ways that DOE (both Headquarters and the Field) can team with its M&O/M&I contractors to conduct additional benchmarking studies. This ``introduction to benchmarking`` is intended to create a desire to know more and a greater appreciation of how benchmarking processes could be creatively employed to enhance performance.

  18. Mock Tribunal in Action: Mock International Criminal Tribunal for the Former Yugoslavia. 10th Grade Lesson. Schools of California Online Resources for Education (SCORE): Connecting California's Classrooms to the World.

    ERIC Educational Resources Information Center

    Fix, Terrance

    In this lesson, students role-play as members of the International Criminal Tribunal for the former Yugoslavia that will bring to trial "Persons Responsible for Serious Violations of International Humanitarian Law." Students represent the following groups: International Criminal Tribunal; Prosecution; Defense; Serbians; Croatians; Bosnian Muslims;…

  19. Kauffman Teen Survey. An Annual Report on Teen Health Behaviors: Use of Alcohol, Tobacco, and Other Drugs among 8th-, 10th-, and 12th-Grade Students in Greater Kansas City, 1991-92 to 2000-01.

    ERIC Educational Resources Information Center

    Ewing Marion Kauffman Foundation, Kansas City, MO.

    The Ewing Marion Kauffman Foundation began surveying Kansas City area teens during the 1984-85 school year. The Kauffman Teen Survey now addresses two sets of issues for teens. Teen Health Behaviors, addressed in this report, have been a focus of the survey since its inception. The report focuses on teen use of alcohol, tobacco, and other drugs in…

  20. Impacts of Parental Education on Substance Use: Differences among White, African-American, and Hispanic Students in 8th, 10th, and 12th Grades (1999-2008). Monitoring the Future Occasional Paper Series. Paper No. 70

    ERIC Educational Resources Information Center

    Bachman, Jerald G.; O'Malley, Patrick M.; Johnston, Lloyd D.; Schulenberg, John E.

    2010-01-01

    The Monitoring the Future (MTF) project reports annually on levels and trends in self-reported substance use by secondary school students (e.g., Johnston, O'Malley, Bachman, & Schulenberg, 2009). The reports include subgroup comparisons, and these have revealed substantial differences among race/ethnicity groups, as well as some differences linked…

  1. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different (MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  2. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  3. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  4. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  5. HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks

    SciTech Connect

    Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.

    2015-05-01

    This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.

  6. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  7. Benchmark Assessment for Improved Learning. AACC Report

    ERIC Educational Resources Information Center

    Herman, Joan L.; Osmundson, Ellen; Dietel, Ronald

    2010-01-01

    This report describes the purposes of benchmark assessments and provides recommendations for selecting and using benchmark assessments--addressing validity, alignment, reliability, fairness and bias and accessibility, instructional sensitivity, utility, and reporting issues. We also present recommendations on building capacity to support schools'…

  8. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  9. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone

  10. Benchmark 1 - Nonlinear strain path forming limit of a reverse draw: Part A: Benchmark description

    NASA Astrophysics Data System (ADS)

    Benchmark-1 Committee

    2013-12-01

    The objective of this benchmark is to demonstrate the predictability of forming limits under nonlinear strain paths for a draw panel with a non-axisymmetric reversed dome shape at the center. It is important to recognize that treating strain forming limits as though they were static during the deformation process may not lead to successful predictions, due to the nonlinearity of the strain paths involved in this benchmark. The benchmark tool is designed to enable a two-stage draw/reverse-draw continuous forming process. Three typical sheet materials, AA5182-O aluminum and DP600 and TRIP780 steels, are selected for this benchmark study.

  11. Improving Grading Consistency through Grade Lift Reporting

    ERIC Educational Resources Information Center

    Millet, Ido

    2010-01-01

    We define Grade Lift as the difference between average class grade and average cumulative class GPA. This metric provides an assessment of how lenient the grading was for a given course. In 2006, we started providing faculty members individualized Grade Lift reports reflecting their position relative to an anonymously plotted school-wide…
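
    A minimal sketch of the Grade Lift calculation as defined above (the grade-point values are hypothetical):

    ```python
    def grade_lift(class_grades, cumulative_gpas):
        """Grade Lift = average class grade minus average cumulative GPA
        of the students enrolled in the class (both on the same 4.0 scale)."""
        avg_grade = sum(class_grades) / len(class_grades)
        avg_gpa = sum(cumulative_gpas) / len(cumulative_gpas)
        return avg_grade - avg_gpa

    # Hypothetical section: grades awarded vs. the same students' cumulative GPAs.
    # A positive value indicates more lenient grading than the school-wide baseline.
    print(grade_lift([3.7, 3.3, 4.0, 3.0], [3.2, 3.1, 3.6, 2.8]))
    ```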

  12. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-­10) across 5 time points. The alternate form reliabilities ranged from…

  13. The Turn of the Century. Tenth Grade Lesson. Schools of California Online Resources for Education (SCORE): Connecting California's Classrooms to the World.

    ERIC Educational Resources Information Center

    Bartels, Dede

    In this 10th grade social studies and language arts interdisciplinary unit, students research and report on historical figures from the turn of the 20th century. Students are required to work in pairs to learn about famous and common individuals, including Andrew Carnegie, Samuel Gompers, Susan B. Anthony, Thomas Edison, Theodore Roosevelt, Booker…

  14. Energy in the Global Marketplace. Grades 9, 10, 11. Interdisciplinary Student/Teacher Materials in Energy, the Environment, and the Economy.

    ERIC Educational Resources Information Center

    National Science Teachers Association, Washington, DC.

    This instructional unit contains six classroom lessons in which 9th, 10th, or 11th grade social studies students examine the effects of competition among nations and world regions as demand for oil outstrips supply. The overall objective is to help students understand the concept that energy is a commodity to be bought and sold like any other…

  15. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    SciTech Connect

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  16. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 4.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for grade four language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print…

  17. Language Arts Curriculum Framework: Sample Curriculum Model, Grade 3.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas State Language Arts Framework, this sample curriculum model for grade three language arts is divided into sections focusing on writing; listening, speaking, and viewing; and reading. Each section lists standards; benchmarks; assessments; and strategies/activities. The reading section itself is divided into print…

  18. Implementing Guided Reading Strategies with Kindergarten and First Grade Students

    ERIC Educational Resources Information Center

    Abbott, Lindsey; Dornbush, Abby; Giddings, Anne; Thomas, Jennifer

    2012-01-01

    In the action research project report, the teacher researchers found that many kindergarten and first-grade students did not have the reading readiness skills to be reading at their benchmark target. The purpose of the project was to improve the students overall reading ability. The dates of the project began on September 8 through December 20,…

  19. Grade Retention: Elementary Teacher Perceptions for Students with and without Disabilities

    ERIC Educational Resources Information Center

    Renaud, Gia

    2010-01-01

    In this era of education accountability, teachers are looking closely at grade level requirements and assessment of student performance. Grade retention is being considered for both students with and without disabilities if they are not meeting end of the year achievement benchmarks. Although research has shown that retention is not the best…

  20. Metrics and Benchmarks for Visualization

    NASA Technical Reports Server (NTRS)

    Uselton, Samuel P.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    What is a "good" visualization? How can the quality of a visualization be measured? How can one tell whether one visualization is "better" than another? I claim that the true quality of a visualization can only be measured in the context of a particular purpose. The same image generated from the same data may be excellent for one purpose and abysmal for another. A good measure of visualization quality will correspond to the performance of users in accomplishing the intended purpose, so the "gold standard" is user testing. As a user of visualization software (or at least a consultant to such users) I don't expect visualization software to have been tested in this way for every possible use. In fact, scientific visualization (as distinct from more "production oriented" uses of visualization) will continually encounter new data, new questions and new purposes; user testing can never keep up. Users need software they can trust, and advice on appropriate visualizations for particular purposes. Considering the following four processes, and their impact on visualization trustworthiness, reveals important work needed to create worthwhile metrics and benchmarks for visualization. These four processes are (1) complete system testing (user-in-loop), (2) software testing, (3) software design, and (4) information dissemination. Additional information is contained in the original extended abstract.

  1. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635
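
    A much-simplified sketch of the temporal knockout idea, assuming an SIR process over a time-ordered edge list and removing each agent for the entire run rather than at each individual time step as in the full TKO definition (the network, seed, and probabilities are hypothetical):

    ```python
    import random

    def sir_outbreak_size(temporal_edges, nodes, seed, beta=0.5, recover=0.2, rng=None):
        """One SIR realization over time-ordered edges (t, u, v); returns #infected+recovered."""
        rng = rng or random.Random()
        state = {n: "S" for n in nodes}
        state[seed] = "I"
        current_t = None
        for t, u, v in sorted(temporal_edges):
            if t != current_t and current_t is not None:
                for n, s in state.items():          # recovery step between time slices
                    if s == "I" and rng.random() < recover:
                        state[n] = "R"
            current_t = t
            for a, b in ((u, v), (v, u)):           # infection along the contact, both directions
                if state.get(a) == "I" and state.get(b) == "S" and rng.random() < beta:
                    state[b] = "I"
        return sum(1 for s in state.values() if s in ("I", "R"))

    def tko_scores(temporal_edges, nodes, seed, runs=200):
        """Simplified temporal knockout: drop in mean outbreak size when a node is removed."""
        rng = random.Random(1)
        def mean_size(edges, node_set):
            return sum(sir_outbreak_size(edges, node_set, seed, rng=rng) for _ in range(runs)) / runs
        baseline = mean_size(temporal_edges, nodes)
        scores = {}
        for n in nodes:
            if n == seed:
                continue
            kept = [e for e in temporal_edges if n not in e[1:]]
            scores[n] = baseline - mean_size(kept, [m for m in nodes if m != n])
        return scores

    # Hypothetical toy temporal network: (time, node, node)
    edges = [(0, "a", "b"), (1, "b", "c"), (1, "b", "d"), (2, "c", "e"), (3, "d", "e")]
    print(tko_scores(edges, ["a", "b", "c", "d", "e"], seed="a"))
    ```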

  2. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with the inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).
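
    For reference, the one-dimensional Euler equations used in the nonlinear wave propagation category can be written in conservation form as follows (standard notation, not quoted from the abstract):

    ```latex
    \frac{\partial}{\partial t}
    \begin{pmatrix} \rho \\ \rho u \\ E \end{pmatrix}
    +
    \frac{\partial}{\partial x}
    \begin{pmatrix} \rho u \\ \rho u^{2} + p \\ u\,(E + p) \end{pmatrix}
    = 0,
    \qquad
    E = \frac{p}{\gamma - 1} + \tfrac{1}{2}\rho u^{2}
    ```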

  3. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.

  4. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment, using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open-source library. The methodology defines a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed. PMID:27304891
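
    One plausible formalization of the comparison criterion sketched above (our reading, stated as an assumption rather than the paper's exact definition) is the expected discounted return averaged over MDPs drawn from the test distribution:

    ```latex
    \mathcal{J}_{p}(\pi) \;=\; \mathbb{E}_{M \sim p}\!\left[\; \mathbb{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \;\middle|\; M, \pi\right] \right]
    ```

    Algorithms would then be ranked by this score over large sets of sampled MDPs, with their computation time requirements reported alongside.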

  5. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of codes. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with the inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  6. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  7. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…

  8. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  9. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    PubMed Central

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501
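
    For readers unfamiliar with the Dice score used above to quantify agreement, a minimal sketch treating segmentations as sets of voxel indices (the toy masks are hypothetical):

    ```python
    def dice_score(mask_a, mask_b):
        """Dice overlap between two binary segmentations given as sets of voxel
        indices: 2|A ∩ B| / (|A| + |B|)."""
        if not mask_a and not mask_b:
            return 1.0
        return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

    # Hypothetical toy masks (voxel index tuples from two raters).
    rater_1 = {(1, 1), (1, 2), (2, 2), (2, 3)}
    rater_2 = {(1, 2), (2, 2), (2, 3), (3, 3)}
    print(f"Dice = {dice_score(rater_1, rater_2):.2f}")  # 0.75
    ```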

  10. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    PubMed

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.

  11. Benchmarking of Optical Dimerizer Systems

    PubMed Central

    2015-01-01

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein–protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  12. Benchmarking of optical dimerizer systems.

    PubMed

    Pathak, Gopal P; Strickland, Devin; Vrana, Justin D; Tucker, Chandra L

    2014-11-21

    Optical dimerizers are a powerful new class of optogenetic tools that allow light-inducible control of protein-protein interactions. Such tools have been useful for regulating cellular pathways and processes with high spatiotemporal resolution in live cells, and a growing number of dimerizer systems are available. As these systems have been characterized by different groups using different methods, it has been difficult for users to compare their properties. Here, we set about to systematically benchmark the properties of four optical dimerizer systems, CRY2/CIB1, TULIPs, phyB/PIF3, and phyB/PIF6. Using a yeast transcriptional assay, we find significant differences in light sensitivity and fold-activation levels between the red light regulated systems but similar responses between the CRY2/CIB and TULIP systems. Further comparison of the ability of the CRY2/CIB1 and TULIP systems to regulate a yeast MAPK signaling pathway also showed similar responses, with slightly less background activity in the dark observed with CRY2/CIB. In the process of developing this work, we also generated an improved blue-light-regulated transcriptional system using CRY2/CIB in yeast. In addition, we demonstrate successful application of the CRY2/CIB dimerizers using a membrane-tethered CRY2, which may allow for better local control of protein interactions. Taken together, this work allows for a better understanding of the capacities of these different dimerization systems and demonstrates new uses of these dimerizers to control signaling and transcription in yeast. PMID:25350266

  13. Numerical methods: Analytical benchmarking in transport theory

    SciTech Connect

Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered.

  14. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time in conducting building energy simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented for use with the EnergyPlus energy simulation program, the benchmark models are publicly available, and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy-efficient buildings.

  15. Benchmark calculations from summarized data: an example

    SciTech Connect

    Crump, K. S.; Teeguarden, Justin G.

    2009-03-01

    Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose-response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
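
    As a sketch of the hybrid BMD definition referred to above (a standard formulation for continuous data; the cutoff c and benchmark response BMR are illustrative choices, not values from the study), an adverse response is defined as a score falling below a cutoff, and:

    ```latex
    P(d) = \Phi\!\left(\frac{c - \mu(d)}{\sigma}\right), \qquad
    R(d) = \frac{P(d) - P(0)}{1 - P(0)}, \qquad
    R(\mathrm{BMD}) = \mathrm{BMR} \;\; (\text{e.g. } 0.1)
    ```

    Here $\mu(d)$ is the modelled mean response at dose $d$, $\sigma$ the residual standard deviation and $\Phi$ the standard normal CDF; the Monte Carlo step enters when evaluating the likelihood of the reported means and standard deviations under such a model.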

  16. Benchmarking antimicrobial drug use in hospitals.

    PubMed

    Ibrahim, Omar M; Polk, Ron E

    2012-04-01

    Measuring and monitoring antibiotic use in hospitals is believed to be an important component of the strategies available to antimicrobial stewardship programs to address acquired antimicrobial resistance. Recent efforts to organize large numbers of hospitals into networks allow for interhospital comparisons of a variety of healthcare processes and outcomes, a process often called 'benchmarking'. For comparisons of antimicrobial use to be valid, usage figures must be risk-adjusted to account for differences in patient mix and hospital characteristics. The purpose of this review is to describe recent methods to benchmark antimicrobial drug use and to critically assess the potential advantages and the remaining challenges. While many methodological challenges remain, and the clinical outcomes resulting from benchmarking programs have yet to be determined, recent developments suggest that benchmarking antimicrobial drug use will become an important component of antimicrobial stewardship program activities.

  17. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and Teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  18. Toward Scalable Benchmarks for Mass Storage Systems

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.

    1996-01-01

    This paper presents guidelines for the design of a mass storage system benchmark suite, along with preliminary suggestions for programs to be included. The benchmarks will measure both peak and sustained performance of the system as well as predict both short- and long-term behavior. These benchmarks should be both portable and scalable so they may be used on storage systems from tens of gigabytes to petabytes or more. By developing a standard set of benchmarks that reflect real user workloads, we hope to encourage system designers and users to publish performance figures that can be compared with those of other systems. This will allow users to choose the system that best meets their needs and give designers a tool with which they can measure the performance effects of improvements to their systems.

  19. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
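
    To make the data-flow-graph formulation above concrete, the sketch below runs a toy graph in which each node applies a task to the data received from its predecessors and forwards its result along outgoing edges, executing nodes in topological order. The three-node chain and the toy tasks are purely illustrative assumptions; they are not the official NGB graphs or the actual NPB kernels.

        from collections import defaultdict, deque

        # Hypothetical data-flow graph: each node runs a task on data received
        # from its predecessors and forwards its result along outgoing edges.
        edges = {"BT.S": ["SP.S"], "SP.S": ["LU.S"], "LU.S": []}   # a simple chain
        tasks = {"BT.S": lambda xs: sum(xs) + 1,
                 "SP.S": lambda xs: sum(xs) * 2,
                 "LU.S": lambda xs: sum(xs) - 3}

        def run_graph(edges, tasks, seed=1.0):
            indeg = defaultdict(int)
            for src, dsts in edges.items():
                for d in dsts:
                    indeg[d] += 1
            inputs = defaultdict(list)
            ready = deque(n for n in edges if indeg[n] == 0)
            for n in ready:
                inputs[n].append(seed)          # initialization data for source nodes
            results = {}
            while ready:                        # topological execution order
                node = ready.popleft()
                results[node] = tasks[node](inputs[node])
                for succ in edges[node]:
                    inputs[succ].append(results[node])
                    indeg[succ] -= 1
                    if indeg[succ] == 0:
                        ready.append(succ)
            return results

        print(run_graph(edges, tasks))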

  20. Pollution prevention opportunity assessment benchmarking: Recommendations for Hanford

    SciTech Connect

    Engel, J.A.

    1994-05-01

    Pollution Prevention Opportunity Assessments (P2OAs) are an important first step in any pollution prevention program. While P2OAs have been and are being conducted at Hanford, there exists no standard guidance, training, tracking, or systematic approach to identifying and addressing the most important waste streams. The purpose of this paper then is to serve as a guide to the Pollution Prevention group at Westinghouse Hanford in developing and implementing P2OAs at Hanford. By searching the literature and benchmarking other sites and agencies, the best elements from those programs can be incorporated and pitfalls more easily avoided. This search began with the 1988 Environmental Protection Agency document that introduced P2OAs (then called Process Waste Assessments, PWAs). This important document presented the basic framework of P2OA features which appeared in almost all later programs. Major Department of Energy programs were also examined, with particular attention to the Defense Programs P2OA method of a graded approach, as presented at the Kansas City Plant. The graded approach is a system of conducting P2OAs of varying levels of detail depending on the size and importance of the waste stream. Finally, private industry programs were examined briefly. While all the benchmarked programs had excellent features, it was determined that the size and mission of Hanford precluded lifting any one program for use. Thus, a series of recommendations was made, based on the literature review, in order to begin an extensive program of P2OAs at Hanford. These recommendations are in the areas of facility Pollution Prevention teams, P2OA scope and methodology, guidance documents, training for facilities (and management), technical and informational support, tracking and measuring success, and incentives.

  1. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure to measure the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performance and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models
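
    As a minimal sketch of the kind of metrics and scoring system the framework calls for, the snippet below scores each simulated variable by a normalized RMSE against benchmark observations and combines the per-variable scores with weights. The variables, weights, scoring function, and numbers are illustrative assumptions, not part of the proposed framework itself.

        import numpy as np

        def variable_score(model, obs):
            """Map normalized RMSE against the benchmark data to a 0-1 score."""
            model, obs = np.asarray(model, float), np.asarray(obs, float)
            nrmse = np.sqrt(np.mean((model - obs) ** 2)) / (np.std(obs) + 1e-12)
            return float(np.exp(-nrmse))      # 1 = perfect match, -> 0 as mismatch grows

        def overall_score(scores, weights):
            """Weighted combination of per-variable scores into one benchmark score."""
            w = np.asarray([weights[k] for k in scores])
            s = np.asarray([scores[k] for k in scores])
            return float(np.sum(w * s) / np.sum(w))

        # Illustrative observations vs. model output for two benchmark variables.
        scores = {
            "gpp": variable_score(model=[2.1, 2.4, 3.0], obs=[2.0, 2.5, 3.2]),
            "latent_heat": variable_score(model=[80, 95, 110], obs=[78, 100, 105]),
        }
        print(overall_score(scores, weights={"gpp": 2.0, "latent_heat": 1.0}))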

  2. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, as well as what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  3. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  4. VENUS-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The VENUS critical facility is a zero power reactor located at SCK-CEN, Mol, Belgium, which for the VENUS-2 experiment utilized a mixed-oxide core with near-weapons-grade plutonium. In addition to the full VENUS-2 core, computational variants based on each fuel type in the VENUS-2 core (3.3 wt.% UO2, 4.0 wt.% UO2, and 2.0/2.7 wt.% MOX) were also calculated. The VENUS-2 critical configuration and cell variants have been calculated with MCU-REA, which is a continuous-energy Monte Carlo code system developed at the Russian Research Center ''Kurchatov Institute'' and is used extensively in the Fissile Materials Disposition Program. The calculations resulted in a k-eff of 0.99652 ± 0.00025 and relative pin powers within 2% (UO2 pins) and 3% (MOX pins) of the experimental values.

  5. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: (1) pilot test benchmarking as an EM cost improvement tool; (2) identify areas for cost improvement and recommend actions to address these areas; and (3) provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  6. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize a machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
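
    The prediction step described above reduces, in essence, to a dot product between the machine characterization (time per source-language operation) and the program characterization (execution counts of those operations). The sketch below shows that combination; the operation names, times, and counts are hypothetical, not measured parameters from the report.

        # Machine characterization: seconds per FORTRAN-level operation (hypothetical).
        machine_params = {"fadd": 2.0e-8, "fmul": 3.0e-8, "fdiv": 1.2e-7,
                          "load": 1.5e-8, "branch": 1.0e-8}

        # Program characterization: execution counts of the same operations
        # for some benchmark (hypothetical counts).
        program_counts = {"fadd": 4.0e9, "fmul": 3.5e9, "fdiv": 2.0e8,
                          "load": 9.0e9, "branch": 1.0e9}

        def predict_runtime(machine, program):
            """Predicted run time = sum over operations of count * time-per-operation."""
            return sum(program[op] * machine[op] for op in program)

        print(f"predicted run time: {predict_runtime(machine_params, program_counts):.1f} s")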

  7. Action-Oriented Benchmarking: Concepts and Tools

    SciTech Connect

    California Energy Commission; Mathew, Paul; Mills, Evan; Mathew, Paul; Piette, Mary Ann; Bourassa, Norman; Brook, Martha

    2008-02-13

    Most energy benchmarking tools provide static feedback on how one building compares to a larger set of loosely similar buildings, without providing information at the end-use level or on what can be done to reduce consumption, cost, or emissions. In this article--Part 1 of a two-part series--we describe an 'action-oriented benchmarking' approach, which extends whole-building energy benchmarking to include analysis of system and component energy use metrics and features. Action-oriented benchmarking thereby allows users to generate more meaningful metrics and to identify, screen and prioritize potential efficiency improvements. This opportunity assessment process can then be used to inform and optimize a full-scale audit or commissioning process. We introduce a new web-based action-oriented benchmarking system and an associated software tool, EnergyIQ. The benchmarking methods, visualizations, and user interface design are informed by an end-user needs assessment survey and best-practice guidelines from ASHRAE.

  8. Benchmarking initiatives in the water industry.

    PubMed

    Parena, R; Smeets, E

    2001-01-01

    Customer satisfaction and service care are pushing professionals in the water industry, day after day, to seek to improve their performance, lowering costs and increasing the level of service provided. Process Benchmarking is generally recognised as a systematic mechanism of comparing one's own utility with other utilities or businesses with the intent of self-improvement by adopting structures or methods used elsewhere. The IWA Task Force on Benchmarking, operating inside the Statistics and Economics Committee, has been committed to developing a generally accepted concept of Process Benchmarking to support water decision-makers in addressing issues of efficiency. As a first step, the Task Force disseminated among the Committee members a questionnaire gathering suggestions about the type, degree of development and main concepts of benchmarking adopted in the countries represented. A comparison of the guidelines adopted in The Netherlands and Scandinavia has recently challenged the Task Force to draft a methodology for worldwide process benchmarking in the water industry. The paper provides a framework of the most interesting benchmarking experiences in the water sector and describes in detail both the final results of the survey and the methodology focused on identification of possible improvement areas. PMID:11547972

  9. EDITORIAL: Selected papers from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010) Selected papers from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010)

    NASA Astrophysics Data System (ADS)

    Reynaerts, Dominiek; Vullers, Ruud

    2011-10-01

    This special section of Journal of Micromechanics and Microengineering features papers selected from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010). The workshop was organized in Leuven, Belgium from 30 November to 3 December 2010 by Katholieke Universiteit Leuven and the imec/Holst Centre. This was a special PowerMEMS Workshop, for several reasons. First of all, we celebrated the 10th anniversary of the workshop: the first PowerMEMS meeting was organized in Sendai, Japan in 2000. None of the organizers or participants of this first meeting could have predicted the impact of the workshop over the next decade. The second reason was that, for the first time, the conference organization spanned two countries: Belgium and the Netherlands. Thanks to the advances in information technology, teams from Katholieke Universiteit Leuven (Belgium) and the imec/Holst Centre in Eindhoven (the Netherlands) have been able to work together seamlessly as one team. The objective of the PowerMEMS Workshop is to stimulate innovation in micro and nanotechnology for power generation and energy conversion applications. Its scope ranges from integrated microelectromechanical systems (MEMS) for power generation, dissipation, harvesting, and management, to novel nanostructures and materials for energy-related applications. True to the objective of the PowerMEMS Workshop, the 2010 technical program covered a broad range of energy related research, ranging from the nanometer to the millimeter scale, discussed in 5 invited and 52 oral presentations, and 112 posters. This special section includes 14 papers covering vibration energy harvesters, thermal applications and micro power systems. Finally, we wish to express sincere appreciation to the members of the International Steering Committee, the Technical Program Committee and last but not least the Local Organizing Committee. This special issue was edited in

  10. Introduction to the special issue on the joint meeting of the 19th IEEE International Symposium on the Applications of Ferroelectrics and the 10th European Conference on the Applications of Polar Dielectrics.

    PubMed

    Tsurumi, Takaaki

    2011-09-01

    The joint meeting of the 19th IEEE International Symposium on the Applications of Ferroelectrics and the 10th European Conference on the Applications of Polar Dielectrics took place in Edinburgh from August 9-12, 2010. The conference was attended by 390 delegates from more than 40 different countries. There were 4 plenary speakers, 56 invited speakers, and a further 222 contributed oral presentations in 7 parallel sessions. In addition there were 215 poster presentations. Key topics addressed at the conference included piezoelectric materials, lead-free piezoelectrics, and multiferroics.

  11. Raising Quality and Achievement. A College Guide to Benchmarking.

    ERIC Educational Resources Information Center

    Owen, Jane

    This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…

  12. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  15. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 3 2013-10-01 2013-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. The benchmark...-based payment modifier. In calculating the national benchmark, groups of physicians' performance...

  16. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  17. 45 CFR 156.110 - EHB-benchmark plan standards.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false EHB-benchmark plan standards. 156.110 Section 156... Essential Health Benefits Package § 156.110 EHB-benchmark plan standards. An EHB-benchmark plan must meet..., including oral and vision care. (b) Coverage in each benefit category. A base-benchmark plan not...

  18. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation to a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40 GB NAND Flash parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7
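
    The out-of-core graph benchmark mentioned above exercises a level-set (breadth-first) expansion kernel. A minimal in-memory sketch of that kernel is shown below; a real run would stream a large scale-free graph from storage, which is omitted here, and the tiny adjacency list is purely illustrative.

        from collections import deque

        def level_set_expansion(adj, source):
            """Return lists of vertices at increasing distance from the source."""
            visited = {source}
            frontier = [source]
            levels = [frontier]
            while frontier:
                next_frontier = []
                for u in frontier:
                    for v in adj.get(u, ()):
                        if v not in visited:
                            visited.add(v)
                            next_frontier.append(v)
                if next_frontier:
                    levels.append(next_frontier)
                frontier = next_frontier
            return levels

        # Tiny illustrative graph (a real run would use a large scale-free graph on disk).
        adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
        print(level_set_expansion(adj, 0))   # [[0], [1, 2], [3, 4], [5]]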

  19. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when selecting such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
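
    One of the risk-adjustment methods listed above, indirect standardization, can be sketched as follows: expected infections are computed by applying benchmark rates to local exposure (device-days) within each stratum, and the standardized infection ratio (SIR) compares observed to expected counts. The unit types, rates, and counts below are illustrative assumptions only.

        # Benchmark (e.g., national) HAI rates per 1,000 device-days, by unit type.
        benchmark_rates = {"ICU": 2.5, "medical_ward": 1.0, "surgical_ward": 1.4}

        # Local surveillance data: device-days per unit type and observed infections.
        local_device_days = {"ICU": 4000, "medical_ward": 9000, "surgical_ward": 6000}
        observed_infections = 28

        def standardized_infection_ratio(observed, device_days, rates):
            """SIR = observed infections / infections expected under benchmark rates."""
            expected = sum(device_days[u] * rates[u] / 1000.0 for u in device_days)
            return observed / expected, expected

        sir, expected = standardized_infection_ratio(
            observed_infections, local_device_days, benchmark_rates)
        print(f"expected: {expected:.1f}, SIR: {sir:.2f}")   # SIR > 1: worse than benchmark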

  20. The VTE Benchmarking Model: Benchmarking Quality Performance in Vocational Technical Education.

    ERIC Educational Resources Information Center

    Losh, Charles

    1993-01-01

    Discusses benchmarking--finding and implementing the best practices--in business and industry and describes a model that can be used in vocational-technical education. Suggests that benchmarking is a tool that can be used by vocational-technical educators as they strive for excellence. (JOW)

  1. Test Nationally, Benchmark Locally: Using Local DIBELS Benchmarks to Predict Performance on the Pssa

    ERIC Educational Resources Information Center

    Ferchalk, Matthew R.

    2013-01-01

    The Dynamic Indicators of Basic Early Literacy Skills (DIBELS) benchmarks are frequently used to make important decisions regarding student performance. More information, however, is needed to understand whether the nationally derived benchmarks created by the DIBELS system provide the most accurate criterion for evaluating reading proficiency. The…

  2. Benchmark 2 - Springback of a draw / re-draw panel: Part C: Benchmark analysis

    NASA Astrophysics Data System (ADS)

    Carsley, John E.; Xia, Cedric; Yang, Lianxiang; Stoughton, Thomas B.; Xu, Siguang; Hartfield-Wünsch, Susan E.; Li, Jingjing

    2013-12-01

    Benchmark analysis is summarized for DP600 and AA 5182-O. Nine simulation results submitted for this benchmark study are compared to the physical measurement results. The details on the codes, friction parameters, mesh technology, CPU, and material models are also summarized at the end of this report with the participant information details.

  3. Action-Oriented Benchmarking: Using the CEUS Database to Benchmark Commercial Buildings in California

    SciTech Connect

    Mathew, Paul; Mills, Evan; Bourassa, Norman; Brook, Martha

    2008-02-01

    The 2006 Commercial End Use Survey (CEUS) database developed by the California Energy Commission is a far richer source of energy end-use data for non-residential buildings than has previously been available and opens the possibility of creating new and more powerful energy benchmarking processes and tools. In this article--Part 2 of a two-part series--we describe the methodology and selected results from an action-oriented benchmarking approach using the new CEUS database. This approach goes beyond whole-building energy benchmarking to more advanced end-use and component-level benchmarking that enables users to identify and prioritize specific energy efficiency opportunities - an improvement on benchmarking tools typically in use today.

  4. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal contact is not considered in this report.
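
    The first-tier screening comparison described above can be sketched as a hazard-quotient calculation: each measured media concentration is divided by the corresponding toxicological benchmark, and quotients above 1 flag contaminants for the baseline risk assessment. The chemicals, benchmark values, and measured concentrations below are illustrative and are not values from the report.

        # Illustrative screening benchmarks (mg/kg food) for one wildlife receptor.
        benchmarks = {"cadmium": 1.0, "mercury": 0.03, "zinc": 15.0}

        # Illustrative measured concentrations in the same medium (mg/kg).
        measured = {"cadmium": 0.4, "mercury": 0.12, "zinc": 20.0}

        def screen(measured, benchmarks):
            """Return hazard quotients (concentration / benchmark) for each chemical."""
            return {chem: measured[chem] / benchmarks[chem] for chem in measured}

        for chem, hq in screen(measured, benchmarks).items():
            flag = "retain for baseline assessment" if hq > 1 else "screen out"
            print(f"{chem}: HQ = {hq:.2f} -> {flag}")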

  5. Energy benchmarking of South Australian WWTPs.

    PubMed

    Krampe, J

    2013-01-01

    Optimising the energy consumption and energy generation of wastewater treatment plants (WWTPs) is a topic with increasing importance for water utilities in times of rising energy costs and pressures to reduce greenhouse gas (GHG) emissions. Assessing the energy efficiency and energy optimisation of a WWTP are difficult tasks as most plants vary greatly in size, process layout and other influencing factors. To overcome these limits it is necessary to compare energy efficiency with a statistically relevant base to identify shortfalls and optimisation potential. Such energy benchmarks have been successfully developed and used in central Europe over the last two decades. This paper demonstrates how the latest available energy benchmarks from Germany have been applied to 24 WWTPs in South Australia. It shows how energy benchmarking can be used to identify shortfalls in current performance, prioritise detailed energy assessments and help inform decisions on capital investment.

  6. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K.; Gold, R.; Roberts, J.H.; Preston, C.C.

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of monoenergetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  7. New Test Set for Video Quality Benchmarking

    NASA Astrophysics Data System (ADS)

    Raventos, Joaquin

    A new test set design and benchmarking approach (US Patent pending) allows a "standard observer" to assess the end-to-end image quality characteristics of video imaging systems operating in daytime or low-light conditions. It uses randomized targets based on extensive application of Photometry, Geometrical Optics, and Digital Media. The benchmarking takes into account the target's contrast sensitivity, its color characteristics, and several aspects of human vision such as visual acuity and dynamic response. The standard observer is part of the "extended video imaging system" (EVIS). The new test set allows image quality benchmarking by a panel of standard observers at the same time. The new approach shows that an unbiased assessment can be guaranteed. Manufacturers, system integrators, and end users will assess end-to-end performance by simulating a choice of different colors, luminance levels, and dynamic conditions in the laboratory or in permanent video system installations.

  8. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  9. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.
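
    A minimal sketch of the multi-zone pattern described above follows: each zone is advanced independently for a time step (the coarse-grain, easily parallelized work), and boundary values are then exchanged with neighboring zones. The one-dimensional zones and the smoothing "solver" are illustrative stand-ins, not the actual LU, BT, or SP kernels.

        import numpy as np

        def advance_zone(zone):
            """Stand-in for one time step of a per-zone solver (e.g., BT/SP/LU)."""
            return 0.25 * (np.roll(zone, 1) + np.roll(zone, -1) + 2.0 * zone)

        def exchange_boundaries(zones):
            """Copy edge values between neighboring 1-D zones after each step."""
            for left, right in zip(zones[:-1], zones[1:]):
                left[-1], right[0] = right[1], left[-2]

        zones = [np.linspace(i, i + 1, 8) for i in range(4)]   # four coupled zones
        for step in range(10):
            zones = [advance_zone(z) for z in zones]           # independent (parallelizable)
            exchange_boundaries(zones)                         # coarse-grain coupling
        print([float(z.mean()) for z in zones])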

  10. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  11. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  12. Benchmarks for the point kinetics equations

    SciTech Connect

    Ganapol, B.; Picca, P.; Previti, A.; Mostacci, D.

    2013-07-01

    A new numerical algorithm is presented for the solution to the point kinetics equations (PKEs), whose accurate solution has been sought for over 60 years. The method couples the simplest of finite difference methods, a backward Euler, with Richardson extrapolation, also called acceleration. From this coupling, a series of benchmarks have emerged. These include cases from the literature as well as several new ones. The novelty of this presentation lies in the breadth of reactivity insertions considered, covering both prescribed and feedback reactivities, and the extreme 8- to 9-digit accuracy achievable. The benchmarks presented are to provide guidance to those who wish to develop further numerical improvements. (authors)
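
    As a rough sketch of the coupling described above, the snippet below advances the one-delayed-group point kinetics equations with backward Euler and then forms the Richardson combination 2*y(h/2) - y(h) to cancel the method's leading first-order error term. The kinetics parameters and the constant reactivity step are illustrative, and a single delayed group is used instead of the usual six; this is not the authors' algorithm, only the basic idea.

        import numpy as np

        # Illustrative one-delayed-group parameters and a constant reactivity insertion.
        BETA, LAMBDA_GEN, DECAY = 0.0065, 1.0e-4, 0.08
        RHO = 0.003

        A = np.array([[(RHO - BETA) / LAMBDA_GEN, DECAY],
                      [BETA / LAMBDA_GEN,        -DECAY]])

        def backward_euler(y0, t_end, h):
            """Advance y = [n, c] with the implicit (backward) Euler method."""
            m = np.eye(2) - h * A
            y = np.array(y0, float)
            for _ in range(int(round(t_end / h))):
                y = np.linalg.solve(m, y)
            return y

        def richardson(y0, t_end, h):
            """Combine step sizes h and h/2 to cancel the leading O(h) error term."""
            coarse = backward_euler(y0, t_end, h)
            fine = backward_euler(y0, t_end, h / 2)
            return 2.0 * fine - coarse

        y0 = [1.0, BETA / (LAMBDA_GEN * DECAY)]   # equilibrium precursor concentration
        print("BE            :", backward_euler(y0, t_end=1.0, h=1e-3))
        print("BE + Richardson:", richardson(y0, t_end=1.0, h=1e-3))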

  13. Benchmark testing of ²³³U evaluations

    SciTech Connect

    Wright, R.Q.; Leal, L.C.

    1997-07-01

    In this paper we investigate the adequacy of available ²³³U cross-section data (ENDF/B-VI and JENDL-3) for calculation of critical experiments. An ad hoc revised ²³³U evaluation is also tested and appears to give results which are improved relative to those obtained with either ENDF/B-VI or JENDL-3 cross sections. Calculations of k-eff were performed for ten fast benchmarks and six thermal benchmarks using the three cross-section sets. Central reaction-rate-ratio calculations were also performed.

  14. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  15. Two-dimensional benchmark calculations for PNL-30 through PNL-35

    SciTech Connect

    Mosteller, R.D.

    1997-09-01

    Interest in critical experiments with lattices of mixed-oxide (MOX) fuel pins has been revived by the possibility that light water reactors will be used for disposition of weapons-grade plutonium. A series of six experiments with MOX lattices, designated PNL-30 through PNL-35, was performed at Pacific Northwest Laboratories in 1975 and 1976, and a set of benchmark specifications for these experiments subsequently was adopted by the Cross Section Evaluation Working Group (CSEWG). Although there appear to be some problems with these experiments, they remain the only CSEWG benchmarks for MOX lattices. The number of fuel pins in these experiments is relatively low, corresponding to fewer than 4 typical pressurized-water-reactor fuel assemblies. Accordingly, they are more appropriate as benchmarks for lattice-physics codes than for reactor-core simulator codes. Unfortunately, the CSEWG specifications retain the full three-dimensional (3D) detail of the experiments, while lattice-physics codes almost universally are limited to two dimensions (2D). This paper proposes an extension of the benchmark specifications to include a 2D model, and it justifies that extension by comparing results from the MCNP Monte Carlo code for the 2D and 3D specifications.

  16. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  17. Algebra for All: California’s Eighth-Grade Algebra Initiative as Constrained Curricula

    PubMed Central

    Domina, Thurston; Penner, Andrew M.; Penner, Emily K.; Conley, Annemarie

    2015-01-01

    Background/Context: Across the United States, secondary school curricula are intensifying as a growing proportion of students enroll in high-level academic math courses. In many districts, this intensification process occurs as early as eighth grade, where schools are effectively constraining their mathematics curricula by restricting course offerings and placing more students into Algebra I. This paper provides a quantitative single-case research study of policy-driven curricular intensification in one California school district. Research Questions: (1) What effect did eighth-grade curricular intensification have on mathematics course enrollment patterns in Towering Pines Unified schools? (2) How did the distribution of prior achievement in Towering Pines math classrooms change as the district constrained the curriculum by universalizing eighth-grade Algebra? (3) Did eighth-grade curricular intensification improve students’ mathematics achievement? Setting: Towering Pines is an immigrant enclave in the inner-ring suburbs of a major metropolitan area. The district’s 10 middle schools together enroll approximately 4,000 eighth graders each year. The district’s students are ethnically diverse and largely economically disadvantaged. The study draws upon administrative data describing eighth graders in the district in the 2004-05 through 2007-08 school years. Intervention/Program/Practice: During the study period, Towering Pines dramatically intensified middle school students’ math curricula: in the 2004-05 school year, 32% of the district’s eighth graders enrolled in Algebra or a higher-level mathematics course; by the 2007-08 school year that proportion had increased to 84%. Research Design: We use an interrupted time-series design, comparing students’ eighth-grade math course enrollments, 10th-grade math course enrollments, and 10th-grade math test scores across the four cohorts, controlling for demographics and

  18. NAS Parallel Benchmarks Results 3-95

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Bailey, David H.; Walter, Howard (Technical Monitor)

    1995-01-01

    The NAS Parallel Benchmarks (NPB) were developed in 1991 at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion, i.e., the complete details of the problem are given in a NAS technical document. Except for a few restrictions, benchmark implementors are free to select the language constructs and implementation techniques best suited for a particular system. In this paper, we present new NPB performance results for the following systems: (a) Parallel-Vector Processors: CRAY C90, CRAY T90 and Fujitsu VPP500; (b) Highly Parallel Processors: CRAY T3D, IBM SP2-WN (Wide Nodes), and IBM SP2-TN2 (Thin Nodes 2); and (c) Symmetric Multiprocessors: Convex Exemplar SPP1000, CRAY J90, DEC Alpha Server 8400 5/300, and SGI Power Challenge XL (75 MHz). We also present sustained performance per dollar for the Class B LU, SP and BT benchmarks, and mention future NAS plans for the NPB.

  19. Canadian Language Benchmarks 2000: Theoretical Framework.

    ERIC Educational Resources Information Center

    Pawlikowska-Smith, Grazyna

    This document provides indepth study and support of the "Canadian Language Benchmarks 2000" (CLB 2000). In order to make the CLB 2000 usable, the competencies and standards were considerably compressed and simplified, and much of the indepth discussion of language ability or proficiency was omitted, at publication. This document includes: (1)…

  20. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  1. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  2. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  3. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark the reading abilities of Year Five students in fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  4. Issues in Benchmarking and Assessing Institutional Engagement

    ERIC Educational Resources Information Center

    Furco, Andrew; Miller, William

    2009-01-01

    The process of assessing and benchmarking community engagement can take many forms. To date, more than two dozen assessment tools for measuring community engagement institutionalization have been published. These tools vary substantially in purpose, level of complexity, scope, process, structure, and focus. While some instruments are designed to…

  5. Simple benchmark for complex dose finding studies.

    PubMed

    Cheung, Ying Kuen

    2014-06-01

    While a general goal of early phase clinical studies is to identify an acceptable dose for further investigation, modern dose finding studies and designs are highly specific to individual clinical settings. In addition, as outcome-adaptive dose finding methods often involve complex algorithms, it is crucial to have diagnostic tools to evaluate the plausibility of a method's simulated performance and the adequacy of the algorithm. In this article, we propose a simple technique that provides an upper limit, or a benchmark, of accuracy for dose finding methods for a given design objective. The proposed benchmark is nonparametric optimal in the sense of O'Quigley et al. (2002, Biostatistics 3, 51-56), and is demonstrated by examples to be a practical accuracy upper bound for model-based dose finding methods. We illustrate the implementation of the technique in the context of phase I trials that consider multiple toxicities and phase I/II trials where dosing decisions are based on both toxicity and efficacy, and apply the benchmark to several clinical examples considered in the literature. By comparing the operating characteristics of a dose finding method to that of the benchmark, we can form quick initial assessments of whether the method is adequately calibrated and evaluate its sensitivity to the dose-outcome relationships.
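
    For the single-toxicity case, the nonparametric benchmark can be sketched as follows: each simulated patient receives a latent tolerance that determines their outcome at every dose (complete information), the toxicity rate at each dose is estimated from those complete profiles, and the dose closest to the target rate is selected; repeating this over many simulated trials gives an upper bound on the probability of correct selection. The dose-toxicity curve, target, and sample sizes below are illustrative, and the extensions to multiple toxicities and toxicity-efficacy trade-offs discussed in the record are not shown.

        import numpy as np

        rng = np.random.default_rng(42)

        TRUE_TOX = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # illustrative dose-toxicity curve
        TARGET = 0.25                                        # target toxicity rate
        N_PATIENTS, N_SIM = 24, 5000

        def benchmark_selection(true_tox, target, n_patients):
            # Latent tolerances give each patient's outcome at *every* dose (complete info).
            tolerances = rng.uniform(size=n_patients)
            outcomes = tolerances[:, None] < true_tox[None, :]   # toxicity indicator matrix
            est = outcomes.mean(axis=0)                          # estimated rate per dose
            return int(np.argmin(np.abs(est - target)))          # dose closest to target

        true_mtd = int(np.argmin(np.abs(TRUE_TOX - TARGET)))
        hits = sum(benchmark_selection(TRUE_TOX, TARGET, N_PATIENTS) == true_mtd
                   for _ in range(N_SIM))
        print(f"benchmark accuracy (upper bound on correct selection): {hits / N_SIM:.2f}")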

  6. Quality Benchmarks in Undergraduate Psychology Programs

    ERIC Educational Resources Information Center

    Dunn, Dana S.; McCarthy, Maureen A.; Baker, Suzanne; Halonen, Jane S.; Hill, G. William, IV

    2007-01-01

    Performance benchmarks are proposed to assist undergraduate psychology programs in defining their missions and goals as well as documenting their effectiveness. Experienced academic program reviewers compared their experiences to formulate a developmental framework of attributes of undergraduate programs focusing on activity in 8 domains:…

  7. Benchmark Generation using Domain Specific Modeling

    SciTech Connect

    Bui, Ngoc B.; Zhu, Liming; Gorton, Ian; Liu, Yan

    2007-08-01

    Performance benchmarks are domain-specific applications that are specialized to a certain set of technologies and platforms. The development of a benchmark application requires mapping the performance-specific domain concepts to an implementation and producing complex technology- and platform-specific code. Domain Specific Modeling (DSM) promises to bridge the gap between application domains and implementations by allowing designers to specify solutions in domain-specific abstractions and semantics through Domain Specific Languages (DSL). This allows generation of a final implementation automatically from high-level models. The modeling and task automation benefits obtained from this approach usually justify the upfront cost involved. This paper employs a DSM-based approach to invent a new DSL, DSLBench, for benchmark generation. DSLBench and its associated code generation facilities allow the design and generation of a completely deployable benchmark application for performance testing from a high-level model. DSLBench is implemented using the Microsoft Domain Specific Language toolkit. It is integrated with the Visual Studio 2005 Team Suite as a plug-in to provide extra modeling capabilities for performance testing. We illustrate the approach using a case study based on .Net and C#.

  8. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  9. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  10. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  11. Administrative benchmarks for Medicare, Medicaid HMOs.

    PubMed

    1998-11-01

    Plus, check out benchmark data on Medicare and Medicaid administrative costs. Every provider knows that HMOs take a slice of the Medicare or Medicaid premium for their administrative costs before they determine provider capitation. But how much does administration really cost? Here's some PMPM data from a study by the Sherlock Company.

  12. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  13. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.
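
    The sketch below illustrates the general idea of generating a benchmark from a trace rather than replaying the trace inside a simulator: a toy communication trace is turned into source code that reproduces its pattern. The trace format, event names, and the comm interface are hypothetical, not ScalaTrace's or ScalaBenchGen's.

# Illustrative sketch only: turning a toy communication trace into a replayable
# benchmark skeleton, in the spirit of trace-driven benchmark generation.
trace = [
    {"op": "send", "peer": 1, "bytes": 4096},
    {"op": "recv", "peer": 1, "bytes": 4096},
    {"op": "compute", "seconds": 0.002},
]

def emit_benchmark(trace_events):
    """Emit source lines that reproduce the trace's communication pattern."""
    lines = ["def benchmark(comm):"]
    for event in trace_events:
        if event["op"] == "send":
            lines.append(f"    comm.send(peer={event['peer']}, nbytes={event['bytes']})")
        elif event["op"] == "recv":
            lines.append(f"    comm.recv(peer={event['peer']}, nbytes={event['bytes']})")
        elif event["op"] == "compute":
            lines.append(f"    comm.sleep({event['seconds']})  # stand-in for computation")
    return "\n".join(lines)

print(emit_benchmark(trace))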

  14. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  15. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelopment analysis (DEA), the author uses the re-analysis to demonstrate the major…

  16. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment reflected neither ATP values nor environmental contamination with microbial flora, including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark, but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve: sensitivity 57%; specificity 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination and the persistence of hospital pathogens, and measured the effect of current cleaning practices on the environment. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine the practical sampling strategy and choice of benchmarks. PMID:21129820
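
    As an illustration of how a sensitivity/specificity pair such as the one reported above (57%/57% at a 100 RLU cutoff) is derived, the sketch below classifies paired ATP and microbiology readings against the two cutoffs. The numbers are invented; this is not the study's data or analysis code.

# Illustrative sketch only, with made-up numbers: computing sensitivity and
# specificity of an ATP cutoff against microbiology results, where growth
# >= 2.5 cfu/cm^2 is treated as "contaminated".
def sensitivity_specificity(samples, atp_cutoff=100.0, growth_cutoff=2.5):
    tp = fn = tn = fp = 0
    for atp_rlu, growth_cfu in samples:
        contaminated = growth_cfu >= growth_cutoff
        flagged = atp_rlu >= atp_cutoff
        if contaminated and flagged:
            tp += 1
        elif contaminated and not flagged:
            fn += 1
        elif not contaminated and not flagged:
            tn += 1
        else:
            fp += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical (atp_rlu, growth_cfu_per_cm2) pairs.
samples = [(150, 3.0), (80, 2.8), (60, 0.5), (120, 1.0), (40, 0.2)]
print(sensitivity_specificity(samples))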

  17. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    SciTech Connect

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  18. Win That Job! 10th Anniversary Edition.

    ERIC Educational Resources Information Center

    Stevens, Paul

    This book provides practical information on obtaining a job. Though it is published in Australia, its 11 chapters introduce a universally applicable range of job search methods, presenting: the importance of goals and self-knowledge; the resume; preparing job search correspondence; the interview; self-promotion; job search tips and unusual strategies; networking;…

  19. 10th Anniversary P.S.

    ScienceCinema

    None

    2016-07-12

    John Adams speaks about the prehistory of the P.S., with a slide presentation. The Director-General, B. Gregory, takes the floor. The organisers, led by "Prof. Ocktette" (?), perform a very humorous sketch (e.g. the existence of the quark, etc.).

  20. 10th Anniversary P.S.

    SciTech Connect

    2005-10-28

    John Adams speaks about the prehistory of the P.S., with a slide presentation. The Director-General, B. Gregory, takes the floor. The organisers, led by "Prof. Ocktette" (?), perform a very humorous sketch (e.g. the existence of the quark, etc.).

  1. Highlights of 10th plasma chemistry meeting

    NASA Technical Reports Server (NTRS)

    Kitamura, K.; Hashimoto, H.; Hozumi, K.

    1981-01-01

    The chemical structure of a film formed by plasma polymerization of pyridine monomers is given. The film has a hydrophilic chemical structure, its molecular weight is 900, and its molecular formula is C55H50N10O3. The electrical characteristics of a plasma polymerized film are described. The film has good insulating properties and was successfully applied as a video disc coating. Its etching resistance makes it possible to use the film as a resist in etching. The characteristics of a plasma polymer formed from monomers containing tetramethyltin are discussed. The polymer is in film form, displays good adhesiveness, is similar to the UV film UV 35 in light absorption, and is highly insulating.

  2. 10th Annual School Construction Report, 2005

    ERIC Educational Resources Information Center

    Abramson, Paul

    2005-01-01

    School construction in the United States dipped below $20 billion in 2003, the first time that had happened in the 21st Century, setting off alarm bells that the school construction boom might be fading. That concern appears to be unfounded. In 2004, school districts in the United States once again completed more than $20 billion worth of…

  3. Benchmark 1 - Failure Prediction after Cup Drawing, Reverse Redrawing and Expansion Part A: Benchmark Description

    NASA Astrophysics Data System (ADS)

    Watson, Martin; Dick, Robert; Huang, Y. Helen; Lockley, Andrew; Cardoso, Rui; Santos, Abel

    2016-08-01

    This Benchmark is designed to predict the fracture of a food can after drawing, reverse redrawing and expansion. The aim is to assess different sheet metal forming difficulties such as plastic anisotropic earing and failure models (strain and stress based Forming Limit Diagrams) under complex nonlinear strain paths. To study these effects, two distinct materials, TH330 steel (unstoved) and AA5352 aluminum alloy are considered in this Benchmark. Problem description, material properties, and simulation reports with experimental data are summarized.

  4. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  5. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  6. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  7. The Impact Hydrocode Benchmark and Validation Project: Initial Results

    NASA Astrophysics Data System (ADS)

    Pierazzo, E.; Artemieva, N.; Asphaug, E.; Cazamias, J.; Coker, R.; Collins, G. S.; Gisler, G.; Holsapple, K. A.; Housen, K. R.; Ivanov, B.; Johnson, C.; Korycansky, D. G.; Melosh, H. J.; Taylor, E. A.; Turtle, E. P.; Wünnemann, K.

    2007-03-01

    This work presents initial results of a validation and benchmarking effort from the impact cratering and explosion community. Several impact codes routinely used to model impact and explosion events are being compared using simple benchmark tests.

  8. Gleason grading system

    MedlinePlus

    ... medlineplus.gov/ency/patientinstructions/000920.htm Gleason grading system ... score of between 5 and 7. Gleason Grading System Sometimes, it can be hard to predict how ...

  9. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built that allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  10. The Case against Grades

    ERIC Educational Resources Information Center

    Kohn, Alfie

    2011-01-01

    Decades of research show that grades diminish students' interest in whatever they're learning, discourage students from taking academic risks, and reduce the quality of students' thinking, writes Kohn. Contrary to what many people assume, grades are not necessary to promote achievement. Attempts to "improve" grading--such as standards-based…

  11. Differential Grading Standards Revisited.

    ERIC Educational Resources Information Center

    Strenta, A. Christopher; Elliott, Rogers

    1987-01-01

    Differential grading standards were examined in a sample of 1,029 Dartmouth College graduates. Fields of study that attracted students (as majors) who scored higher on the Scholastic Aptitude Test (SAT) employed stricter grading standards. These differential standards attenuated the substantial correlation between SAT scores and grades.…

  12. [Grading of prostate cancer].

    PubMed

    Kristiansen, G; Roth, W; Helpap, B

    2016-07-01

    The current grading of prostate cancer is based on the classification system of the International Society of Urological Pathology (ISUP) following a consensus conference in Chicago in 2014. The foundations are based on the frequently modified grading system of Gleason. This article presents a brief description of the development to the current ISUP grading system. PMID:27393141

  13. Bias in Grading

    ERIC Educational Resources Information Center

    Malouff, John

    2008-01-01

    Bias in grading can be conscious or unconscious. The author describes different types of bias, such as those based on student attractiveness or performance in prior courses, and a variety of methods of reducing bias, including keeping students anonymous during grading and using detailed criteria for subjective grading.

  14. Redesigning Grading--Districtwide

    ERIC Educational Resources Information Center

    Townsley, Matt

    2014-01-01

    In the first years of his career as a high school math teacher, Matt Townsley was bothered by the fact that his grades penalized students for not learning content quickly. A student could master every standard, yet low quiz grades and homework assignments left incomplete because the student did not yet understand the material would lower the final grade,…

  15. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  1. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  6. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  8. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  10. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  12. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  19. 45 CFR 156.100 - State selection of benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 45 Public Welfare 1 2014-10-01 2014-10-01 false State selection of benchmark. 156.100 Section 156... Essential Health Benefits Package § 156.100 State selection of benchmark. Each State may identify a single EHB-benchmark plan according to the selection criteria described below: (a) State selection of...

  20. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  3. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  8. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 41 CFR 60-300.45 - Benchmarks for hiring.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false Benchmarks for hiring... VETERANS, AND ARMED FORCES SERVICE MEDAL VETERANS Affirmative Action Program § 60-300.45 Benchmarks for hiring. The benchmark is not a rigid and inflexible quota which must be met, nor is it to be...

  10. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  11. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  12. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  13. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.93 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.93 Section 1952.93....93 Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were...

  18. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  19. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  20. 29 CFR 1952.233 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.233 Section 1952.233... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  1. 29 CFR 1952.293 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.293 Section 1952.293... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  2. 45 CFR 156.100 - State selection of benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 45 Public Welfare 1 2013-10-01 2013-10-01 false State selection of benchmark. 156.100 Section 156... Essential Health Benefits Package § 156.100 State selection of benchmark. Each State may identify a single EHB-benchmark plan according to the selection criteria described below: (a) State selection of...

  3. 29 CFR 1952.113 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.113 Section 1952.113... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  4. 29 CFR 1952.343 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.343 Section 1952.343... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, Compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  5. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  6. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  7. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Transport rate benchmark. 69.108 Section 69.108... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance with this subpart, the DS3-to-DS1 benchmark ratio shall be calculated as follows: the telephone...

  8. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  9. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  10. 29 CFR 1952.213 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 29 Labor 9 2013-07-01 2013-07-01 false Compliance staffing benchmarks. 1952.213 Section 1952.213... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  11. 42 CFR 414.1255 - Benchmarks for cost measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 3 2014-10-01 2014-10-01 false Benchmarks for cost measures. 414.1255 Section 414... Payment Modifier Under the Physician Fee Schedule § 414.1255 Benchmarks for cost measures. (a) For the CY 2015 payment adjustment period, the benchmark for each cost measure is the national mean of...

  12. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  13. 29 CFR 1952.353 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.353 Section 1952.353... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  14. 29 CFR 1952.373 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.373 Section 1952.373... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  15. 29 CFR 1952.323 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 29 Labor 9 2014-07-01 2014-07-01 false Compliance staffing benchmarks. 1952.323 Section 1952.323... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  16. 29 CFR 1952.163 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 29 Labor 9 2012-07-01 2012-07-01 false Compliance staffing benchmarks. 1952.163 Section 1952.163... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall, compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  17. 29 CFR 1952.223 - Compliance staffing benchmarks.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 29 Labor 9 2011-07-01 2011-07-01 false Compliance staffing benchmarks. 1952.223 Section 1952.223... Compliance staffing benchmarks. Under the terms of the 1978 Court Order in AFL-CIO v. Marshall compliance staffing levels (benchmarks) necessary for a “fully effective” enforcement program were required to...

  18. Con: current laboratory benchmarking options are not good enough.

    PubMed

    Reynolds, Debbie

    2006-01-01

    In an ideal world, benchmarking performance in the clinical laboratory would improve performance, quality, and overall patient satisfaction. However, there is a reason why laboratory managers continue to be on the lookout for the perfect benchmarking product--it doesn't exist. As a result, benchmarking performance in the laboratory is inherently flawed. Here is why.

  19. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  20. Relationship between the TCAP and the Pearson Benchmark Assessment in Elementary Students' Reading and Math Performance in a Northeastern Tennessee School District

    ERIC Educational Resources Information Center

    Dugger-Roberts, Cherith A.

    2014-01-01

    The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…

  1. The PROOF benchmark suite measuring PROOF performance

    NASA Astrophysics Data System (ADS)

    Ryu, S.; Ganis, G.

    2012-06-01

    The PROOF benchmark suite is a new utility suite for PROOF that measures performance and scalability. The primary goal of the benchmark suite is to determine optimal configuration parameters for a set of machines to be used as a PROOF cluster. The suite measures the performance of the cluster on a set of standard tasks as a function of the number of effective processes. Cluster administrators can use the suite to measure the performance of the cluster and find optimal configuration parameters. PROOF developers can also utilize the suite to help them measure performance, identify problems and improve their software. In this paper, the new tool is explained in detail and use cases are presented to illustrate its use.
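
    The sketch below shows the general shape of such a scaling measurement: a fixed standard task is timed as the number of worker processes grows. It uses Python multiprocessing purely as a stand-in and is not the PROOF benchmark suite itself.

# Illustrative sketch only: timing a fixed workload as the number of worker
# processes grows, the basic pattern behind a scalability benchmark.
import time
from multiprocessing import Pool

def standard_task(n):
    """A fixed unit of CPU work (placeholder for a real analysis task)."""
    return sum(i * i for i in range(n))

def measure(workers, tasks=32, task_size=200_000):
    """Return wall-clock time to process a fixed batch of tasks with `workers` processes."""
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(standard_task, [task_size] * tasks)
    return time.perf_counter() - start

if __name__ == "__main__":
    n_tasks = 32
    for workers in (1, 2, 4, 8):
        elapsed = measure(workers, tasks=n_tasks)
        print(f"{workers:2d} workers: {elapsed:.3f} s, {n_tasks / elapsed:.1f} tasks/s")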

  2. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  3. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
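
    For readers unfamiliar with the quantity being benchmarked: the sensitivity coefficient is the relative change in keff per relative change in a cross section, S = (dk/k)/(dσ/σ). The sketch below estimates it by a central finite difference around a toy k(σ) model; it is illustrative only and unrelated to the participating codes.

# Illustrative sketch only: the textbook sensitivity coefficient
# S = (dk/k) / (dsigma/sigma), estimated by a central finite difference
# around a toy k(sigma) model (not any of the benchmark codes).
def sensitivity(k_of_sigma, sigma, rel_step=1e-3):
    d = sigma * rel_step
    k_plus, k_minus, k0 = k_of_sigma(sigma + d), k_of_sigma(sigma - d), k_of_sigma(sigma)
    dk_over_k = (k_plus - k_minus) / (2.0 * k0)
    dsigma_over_sigma = d / sigma
    return dk_over_k / dsigma_over_sigma

# Toy model: k rises with a fictitious cross section and saturates at 1.4.
k_model = lambda sigma: 1.4 * sigma / (sigma + 0.5)
print(sensitivity(k_model, sigma=1.0))  # about 0.33 for this toy model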

  4. Assessing and benchmarking multiphoton microscopes for biologists.

    PubMed

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure their overall quality and efficiency. Here, we discuss some simple parameters and methods that can be used either within a multiphoton facility or by a prospective purchaser to benchmark performance. This can assist both in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  5. Thermodynamic benchmark study using Biacore technology.

    PubMed

    Navratilova, Iva; Papalia, Giuseppe A; Rich, Rebecca L; Bedinger, Daniel; Brophy, Susan; Condon, Brad; Deng, Ta; Emerick, Anne W; Guan, Hann-Wen; Hayden, Tanya; Heutmekers, Thomas; Hoorelbeke, Bart; McCroskey, Mark C; Murphy, Mary M; Nakagawa, Terry; Parmeggiani, Fabio; Qin, Xiaochun; Rebe, Sabina; Tomasevic, Nenad; Tsang, Tiffany; Waddell, M Brett; Zhang, Fred Feiyu; Leavitt, Stephanie; Myszka, David G

    2007-05-01

    A total of 22 individuals participated in this benchmark study to characterize the thermodynamics of small-molecule inhibitor-enzyme interactions using Biacore instruments. Participants were provided with reagents (the enzyme carbonic anhydrase II, which was immobilized onto the sensor surface, and four sulfonamide-based inhibitors) and were instructed to collect response data from 6 to 36 degrees C. van't Hoff enthalpies and entropies were calculated from the temperature dependence of the binding constants. The equilibrium dissociation and thermodynamic constants determined from the Biacore analysis matched the values determined using isothermal titration calorimetry. These results demonstrate that immobilization of the enzyme onto the sensor surface did not alter the thermodynamics of these interactions. This benchmark study also provides insights into the opportunities and challenges in carrying out thermodynamic studies using optical biosensors.
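
    The sketch below illustrates the van't Hoff analysis referred to above: fitting ln K = -ΔH/(RT) + ΔS/R to binding constants measured at several temperatures to recover the enthalpy and entropy. The constants and temperatures are synthetic, not the study's data.

# Illustrative sketch only, with synthetic numbers: a van't Hoff fit of
# ln K = -dH/(R*T) + dS/R to binding constants measured at several temperatures.
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

# Hypothetical equilibrium association constants K_A (1/M) at temperatures (K).
temps_K = np.array([279.15, 289.15, 299.15, 309.15])
K_assoc = np.array([2.1e6, 1.5e6, 1.1e6, 0.8e6])

# Linear fit of ln K against 1/T: slope = -dH/R, intercept = dS/R.
slope, intercept = np.polyfit(1.0 / temps_K, np.log(K_assoc), 1)
delta_H = -slope * R          # enthalpy, J/mol
delta_S = intercept * R       # entropy, J/(mol*K)
print(f"van't Hoff dH = {delta_H / 1000:.1f} kJ/mol, dS = {delta_S:.1f} J/(mol*K)")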

  6. ASBench: benchmarking sets for allosteric discovery.

    PubMed

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  7. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies.

  8. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for coupled neutronics and thermal-hydraulics (T-H) simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One indirect way to validate it is to perform a code-to-code comparison on benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  9. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with the friction force formulas are presented.

  10. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the…

  11. Collection of Neutronic VVER Reactor Benchmarks.

    2002-01-30

    Version 00. A system of computational neutronic benchmarks has been developed. In this CD-ROM report, the data generated in the course of the project are reproduced in their entirety with minor corrections. The editing that was performed on the various documents comprising this report was primarily meant to facilitate the production of the CD-ROM and to enable electronic retrieval of the information. The files are electronically navigable.

  12. Cray performance data from five benchmarks

    NASA Technical Reports Server (NTRS)

    Pennline, James A.

    1991-01-01

    The five benchmark programs discussed in TM-88956, February 1987, were run on the CRAY X-MP/24 under different operating systems and compilers. Performance data are reported for runs under early versions of UNICOS and CFT77. The most recent data include a system configuration for an X-MP hardware upgrade. Performance figures for the Y-MP are shown for comparison. Differences in the figures are analyzed and discussed.

  13. A Simplified HTTR Diffusion Theory Benchmark

    SciTech Connect

    Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

    2010-10-01

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for codes. The purpose of this paper is twofold. The first goal is to extend the benchmark to diffusion theory applications by generating the additional data not provided in the prior GA-Tech work. The second goal is to exercise the benchmark with the HEXPEDITE code available at INL. HEXPEDITE is a Green’s function-based neutron diffusion code in 3D hexagonal-z geometry. The results show that HEXPEDITE accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that when a full sequence of codes including HEXPEDITE is tested against actual HTTR data, the portion of any inevitable discrepancy between experiment and model attributable to HEXPEDITE is expected to be modest; large discrepancies would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper whose suite of codes also includes HEXPEDITE. The results shown here should support that effort in deciding how to refine the modeling steps in the full code sequence.
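
    Eigenvalue agreement between two codes is usually quoted as a reactivity difference in pcm. A small helper computing that metric is sketched below; the two k-eff values are placeholders, not the HEXPEDITE or HELIOS results.

```python
def reactivity_difference_pcm(k_ref, k_test):
    """Reactivity difference between a test and a reference eigenvalue,
    in pcm: (1/k_ref - 1/k_test) * 1e5."""
    return (1.0 / k_ref - 1.0 / k_test) * 1.0e5

# Placeholder eigenvalues, purely for illustration.
k_reference, k_test_code = 1.13500, 1.13468
print(f"dRho = {reactivity_difference_pcm(k_reference, k_test_code):+.1f} pcm")
```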

  14. Comparison of market hog characteristics of pigs selected by feeder pig frame size or current USDA feeder pig grade standards.

    PubMed

    Siemens, A L; Lipsey, R J; Hedrick, H B; Williams, F L; Yokley, S W; Siemens, M G

    1990-08-01

    Two feeder pig grading systems were tested. Forty-five barrows were selected using the current USDA Feeder Pig Grade Standards (U.S. No. 1, No. 2 and No. 3). Additionally, 45 barrows were selected using three frame sizes (large, medium and small). Pigs were slaughtered at 100, 113.5 or 127 kg live weight. The trimmed four lean cuts were separated into soft tissue, skin and bone. The skinless belly and the soft tissue from the four lean cuts were ground separately and analyzed chemically. Data from each grading system were analyzed separately in a 3 x 3 factorial plan. Pigs selected using the current USDA grade standards differed (P less than .05) in last rib backfat, 10th rib fat depth, longissimus muscle area, percentage of trimmed four lean cuts and USDA carcass grade. In the frame size system, pigs with large frame size had less last rib backfat, less 10th rib fat depth, longer carcasses, a higher percentage of four lean cuts and superior USDA carcass grades than pigs with small frame size (P less than .05). The Bradley and Schumann test of sensitivity showed that selection by frame size was more sensitive than the current USDA grade standards for discriminating feeder pig foreleg length, body depth and ham width. In addition, selection by frame size was more sensitive than the current USDA grade standards for discriminating carcass length and carcass radius length. No increase in sensitivity (P greater than .10) was noted for carcass composition or growth traits over the current USDA Feeder Pig Grade Standards.
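
    The 3 x 3 factorial analysis described here can be illustrated with a minimal two-way analysis of variance. The sketch below uses statsmodels with hypothetical column names (grade, slaughter_wt, backfat) and synthetic data; it is an illustration of the design, not the authors' original analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from itertools import product

rng = np.random.default_rng(0)

# Hypothetical balanced 3 x 3 design: three USDA grades x three slaughter
# weights, two pigs per cell.  Backfat values are synthetic placeholders.
rows = []
for grade, weight in product(["US1", "US2", "US3"], ["100", "113.5", "127"]):
    for _ in range(2):
        backfat = (2.0 + 0.4 * int(grade[-1]) + 0.005 * float(weight)
                   + rng.normal(scale=0.1))
        rows.append({"grade": grade, "slaughter_wt": weight, "backfat": backfat})
df = pd.DataFrame(rows)

# Two-way ANOVA for the 3 x 3 factorial with interaction.
model = ols("backfat ~ C(grade) * C(slaughter_wt)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```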

  15. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics (e.g., spatial and temporal locality), and providing a framework for including additional tests. In particular, the suite is composed of several well known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable, with the size of the data sets being a function of the largest HPL matrix for the tested system.
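
    As a rough illustration of what one of these kernels measures (not the official HPC Challenge implementation), the following sketch times a STREAM-style triad, a = b + alpha * c, with NumPy and reports the effective memory bandwidth.

```python
import time
import numpy as np

# STREAM-style triad a = b + alpha * c on arrays much larger than cache.
n = 20_000_000
alpha = 3.0
b = np.full(n, 1.0)
c = np.full(n, 2.0)
a = np.empty(n)

t0 = time.perf_counter()
np.multiply(c, alpha, out=a)   # a = alpha * c
np.add(a, b, out=a)            # a = b + alpha * c
dt = time.perf_counter() - t0

# The triad touches three arrays of 8-byte doubles: read b, read c, write a.
gbytes = 3 * n * 8 / 1e9
print(f"triad: {gbytes / dt:.1f} GB/s (n = {n:,})")
```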

  16. A PWR Thorium Pin Cell Burnup Benchmark

    SciTech Connect

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2) and CASMO-4, for burnup calculations. The MOCUP runs were done independently at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2% and the average absolute difference is less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
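
    The eigenvalue comparison summarized above reduces to simple statistics over the burnup points. A sketch of those metrics, using placeholder k-inf values rather than the published MOCUP and CASMO-4 results, is given below.

```python
import numpy as np

# Placeholder k-inf values at a few burnup points (MWd/kg) for two codes.
burnup   = np.array([0.0, 10.0, 30.0, 60.0, 100.0])
k_code_a = np.array([1.3802, 1.2415, 1.1230, 1.0105, 0.9120])
k_code_b = np.array([1.3785, 1.2440, 1.1198, 1.0131, 0.9105])

# Relative eigenvalue difference at each burnup point.
rel_diff = (k_code_a - k_code_b) / k_code_b

print(f"max |diff|  = {np.max(np.abs(rel_diff)):.3%}")
print(f"mean |diff| = {np.mean(np.abs(rel_diff)):.3%}")
```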

  17. Benchmarking and accounting for the (private) cloud

    NASA Astrophysics Data System (ADS)

    Belleman, J.; Schwickerath, U.

    2015-12-01

    During the past two years, large parts of the CERN batch farm have been moved to virtual machines running on the CERN internal cloud. During this process a large fraction of the resources, which had previously been used as physical batch worker nodes, were converted into hypervisors. Because of the large spread of per-core performance in the farm, caused by its heterogeneous nature, good knowledge of the performance of the virtual machines is necessary. This information is used both for scheduling in the batch system and for accounting. While in the previous setup worker nodes were classified and benchmarked based on the purchase order number, for virtual batch worker nodes this is no longer possible: the information is now either hidden or hard to retrieve. We therefore developed a new scheme to classify worker nodes according to their performance. The new scheme is flexible enough to be usable for both virtual and physical machines in the batch farm. With the new classification it is possible to estimate the performance of worker nodes even in a very dynamic farm, with worker nodes coming and going at a high rate, without the need to benchmark each new node again. An extension to public cloud resources is possible if all conditions under which the benchmark numbers were obtained are fulfilled.
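
    A hypothetical version of such a classification scheme is sketched below: each worker node's per-core benchmark score is normalized to a reference and bucketed into a small set of performance classes. The node names, scores, and thresholds are invented for illustration and do not reflect the CERN scheme.

```python
from bisect import bisect_right

# Per-core benchmark scores (arbitrary units) for a few worker nodes;
# the values and the reference score are purely illustrative.
scores = {"vm-001": 9.2, "vm-002": 11.8, "vm-003": 14.5, "p-node-7": 17.1}
reference = 10.0

# Class boundaries on the normalized score; nodes in the same class are
# treated identically by the scheduler and by accounting.
boundaries = [0.8, 1.1, 1.4, 1.7]     # upper edges of classes A..D
labels = ["A", "B", "C", "D", "E"]

for node, score in scores.items():
    cls = labels[bisect_right(boundaries, score / reference)]
    print(f"{node:>9}: normalized = {score / reference:.2f} -> class {cls}")
```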

  18. Perspective: Selected benchmarks from commercial CFD codes

    SciTech Connect

    Freitas, C.J.

    1995-06-01

    This paper summarizes the results of a series of five benchmark simulations that were completed using commercial Computational Fluid Dynamics (CFD) codes. These simulations were performed by the vendors themselves and then reported by them in ASME's CFD Triathlon Forum and CFD Biathlon Forum. The first group of benchmarks consisted of three laminar flow problems: the steady, two-dimensional flow over a backward-facing step; the low Reynolds number flow around a circular cylinder; and the unsteady, three-dimensional flow in a shear-driven cubical cavity. The second group of benchmarks consisted of two turbulent flow problems: the two-dimensional flow around a square cylinder with periodic separated flow phenomena, and the steady, three-dimensional flow in a 180-degree square bend. All simulation results were evaluated against existing experimental data and thereby satisfied item 10 of the Journal's policy statement for numerical accuracy. The objective of this exercise was to provide the engineering and scientific community with a common reference point for the evaluation of commercial CFD codes.

  19. Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.

    PubMed

    Vallée, Jean-Charles Le; Charlebois, Sylvain

    2015-10-01

    Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety.
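
    Conceptually, the ranking aggregates indicator grades across the three risk governance domains into one overall score per country. The sketch below shows one way such an aggregation could be done with equal domain weights; the country names are from the abstract, but the indicator scores and weighting are invented placeholders, not the study's data or method.

```python
# Hypothetical indicator scores (0-100) grouped by risk governance domain;
# the numbers are placeholders, not the study's results.
countries = {
    "Canada":  {"assessment": [92, 88, 85],
                "management": [90, 87, 91, 84],
                "communication": [89, 93, 86]},
    "Ireland": {"assessment": [90, 89, 84],
                "management": [88, 90, 89, 86],
                "communication": [91, 88, 87]},
    "France":  {"assessment": [88, 86, 83],
                "management": [87, 85, 88, 82],
                "communication": [86, 90, 84]},
}

def overall_score(domains):
    """Equal-weight average of the three domain averages."""
    return sum(sum(v) / len(v) for v in domains.values()) / len(domains)

# Rank countries by overall score, highest first.
ranking = sorted(countries, key=lambda c: overall_score(countries[c]), reverse=True)
for rank, country in enumerate(ranking, start=1):
    print(f"{rank}. {country}: {overall_score(countries[country]):.1f}")
```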
