Science.gov

Sample records for 10th grade benchmark

  1. Marijuana Use Among 10th Grade Students - Washington, 2014.

    PubMed

    Shah, Anar; Stahre, Mandy

    2016-12-30

    Some studies have suggested that long-term, regular use of marijuana starting in adolescence might impair brain development and lower intelligence quotient (1,2). Since 2012, purchase of recreational or retail marijuana has become legal for persons aged ≥21 years in the District of Columbia, Alaska, California, Colorado, Maine, Massachusetts, Nevada, Oregon, and Washington, raising concern about increased marijuana access by youths. The law taxing and regulating recreational or retail marijuana was approved by Washington voters in 2012, and the first retail licenses were issued in July 2014; medical marijuana use has been legal since 1998. To examine the prevalence, characteristics, and behaviors of current marijuana users among 10th grade students, the Washington State Department of Health analyzed data from the state's 2014 Healthy Youth Survey (HYS) regarding current marijuana use. In 2014, 18.1% of 10th grade students (usually aged 15-16 years) reported using marijuana during the preceding 30 days; of these students, 32% reported using it on ≥10 days. Among the marijuana users, 65% reported obtaining marijuana through their peer networks, including friends, older siblings, or at a party. Identification of comprehensive and sustainable public health interventions is needed to prevent and reduce youth marijuana use. Establishment of state and jurisdiction surveillance of youth marijuana use could be useful for anticipating and monitoring the effects of legalization and tracking trends in use before states consider legalizing recreational or retail marijuana.

  2. An Early Warning System: Predicting 10th Grade FCAT Success from 6th Grade FCAT Performance. Research Brief. Volume 0711

    ERIC Educational Resources Information Center

    Froman, Terry; Brown, Shelly; Lapadula, Maria

    2008-01-01

    This Research Brief presents a method for predicting 10th grade Florida Comprehensive Assessment Test (FCAT) success from 6th grade FCAT performance. A simple equation provides the most probable single score prediction, and give-or-take error margins define high and low probability zones for expected 10th grade scores. In addition, a double-entry…
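The prediction scheme this brief describes (a single most-probable score from a simple equation, plus give-or-take error margins) can be sketched with an ordinary least-squares fit. The numbers below are invented for illustration and do not come from the FCAT data; this is only a sketch of the general technique, not the brief's actual equation.

```python
import numpy as np

# Invented paired scores for illustration: 6th grade FCAT (x) vs. 10th grade FCAT (y).
x = np.array([250.0, 280, 300, 310, 325, 340, 360, 375])
y = np.array([270.0, 295, 315, 320, 340, 350, 365, 380])

# A least-squares line gives the "most probable single score" prediction.
slope, intercept = np.polyfit(x, y, 1)

def predict(x_new):
    """Most probable 10th grade score for a given 6th grade score."""
    return intercept + slope * x_new

# Give-or-take margin: the residual standard error defines rough
# high- and low-probability zones around the point prediction.
residuals = y - predict(x)
margin = residuals.std(ddof=2)

print(f"predicted 10th grade score: {predict(330):.1f} +/- {margin:.1f}")
```

Widening the band to roughly two margins would correspond to the brief's "high and low probability zones" for expected scores.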

  3. Predicting 10th Grade FCAT Success. Research Brief. Volume 0401

    ERIC Educational Resources Information Center

    Froman, Terry; Bayne, Joseph

    2004-01-01

    Florida law requires that students achieve a passing score on the Grade 10 Florida Comprehensive Assessment Test (FCAT) to qualify for a standard high school diploma (Section 1008.22(3)(c)5, Florida Statutes). Students who were administered the Grade 10 FCAT for the first time during the 2002 administrations or later must earn a developmental…

  4. Project TALENT's Nonrespondent Follow-up Survey: The 10th Grade Special Sample. Interim Report.

    ERIC Educational Resources Information Center

    Carrel, Kathleen S.; And Others

    Described are procedures used in the location of a sample of individuals not responding to follow-up questionnaires, eleven years after they were originally interviewed in 1960 as 10th graders. The individuals in question were a subset of more than 400,000 9th, 10th, 11th and 12th grade students used in Project TALENT's longitudinal study of the…

  5. The Effect of Case-Based Instruction on 10th Grade Students' Understanding of Gas Concepts

    ERIC Educational Resources Information Center

    Yalçinkaya, Eylem; Boz, Yezdan

    2015-01-01

    The main purpose of the present study was to investigate the effect of case-based instruction on remedying 10th grade students' alternative conceptions related to gas concepts. 128 tenth grade students from two high schools participated in this study. In each school, one of the classes was randomly assigned as the experimental group and the other…

  6. Problem-Based Learning Method: Secondary Education 10th Grade Chemistry Course Mixtures Topic

    ERIC Educational Resources Information Center

    Üce, Musa; Ates, Ismail

    2016-01-01

    In this research, the aim was to determine student achievement by comparing the problem-based learning method with the traditional teacher-centered method in teaching the mixtures topic of the 10th grade chemistry lesson. A pretest-posttest control group research design was implemented. The research sample included two classes (48 students in total) of an Anatolian High School…

  7. A Structural Model of Student Career Aspiration and Science Education: The 10th Grade Investigation.

    ERIC Educational Resources Information Center

    Wang, Jianjun; Turner, Dianne

    Career aspiration is an important factor articulating student academic preparation and career orientation. On the basis of H. Walberg's educational productivity theory, 10th grade national data from the Longitudinal Study of American Youth have been analyzed to examine structural relations between educational productivity and career aspiration in…

  8. Predicting 3rd Grade and 10th Grade FCAT Success for 2006-07. Research Brief. Volume 0601

    ERIC Educational Resources Information Center

    Froman, Terry; Rubiera, Vilma

    2006-01-01

    For the past few years the Florida School Code has set the Florida Comprehensive Assessment Test (FCAT) performance requirements for promotion of 3rd graders and graduation for 10th graders. Grade 3 students who do not score at level 2 or higher on the FCAT SSS Reading must be retained unless exempted for special circumstances. Grade 10 students…

  9. Development of 10th Grade Norms for the ASVAB (Armed Services Vocational Aptitude Battery).

    DTIC Science & Technology

    1987-05-01

    CRC 562, May 1987. Development of 10th Grade Norms for the ASVAB. D. R. Divgi and Gary E. Horne, Marine Corps Operations Analysis Group. Unclassified; approved for public release, distribution unlimited.

  10. The Implementation of Effective Teaching Practices in English Classroom for Grades 8th, 9th, and 10th.

    ERIC Educational Resources Information Center

    Al-Hilawani, Yasser A.; And Others

    This study explored teachers' behavior as related to effective teaching practices in 8th, 9th, and 10th grade English classrooms in Jordan. The study also examined some variables that could predict teachers' implementation of effective teaching practices and aimed at finding an estimate of the percentage of students in 8th, 9th, and 10th grades…

  11. Does STES-Oriented Science Education Promote 10th-Grade Students' Decision-Making Capability?

    NASA Astrophysics Data System (ADS)

    Levy Nahum, Tami; Ben-Chaim, David; Azaiza, Ibtesam; Herskovitz, Orit; Zoller, Uri

    2010-07-01

    Today's society is continuously coping with sustainability-related complex issues in the Science-Technology-Environment-Society (STES) interfaces. In those contexts, the need and relevance of the development of students' higher-order cognitive skills (HOCS) such as question-asking, critical-thinking, problem-solving and decision-making capabilities within science teaching have been argued by several science educators for decades. Three main objectives guided this study: (1) to establish "base lines" for HOCS capabilities of 10th grade students (n = 264) in the Israeli educational system; (2) to delineate within this population two different groups with respect to their decision-making capability, science-oriented (n = 142) and non-science (n = 122) students, Groups A and B, respectively; and (3) to assess the pre-post development/change of students' decision-making capabilities via STES-oriented HOCS-promoting curricular modules entitled Science, Technology and Environment in Modern Society (STEMS). A specially developed and validated decision-making questionnaire was used for obtaining a research-based response to the guiding research questions. Our findings suggest that a long-term, persistent application of purposeful, decision-making-promoting teaching strategies is needed in order to positively affect high-school students' decision-making ability. The need for science teachers' involvement in the development of their students' HOCS capabilities is thus apparent.

  12. School Climate and the Relationship to Student Learning of Hispanic 10th Grade Students in Arizona Schools

    ERIC Educational Resources Information Center

    Nava Delgado, Mauricio

    2011-01-01

    This study provided an analysis of Hispanic 10th grade student academic achievement in the areas of mathematics, reading, and writing as measured by Arizona's Instrument to Measure Standards. The study is based on data of 163 school districts and 25,103 (95%) students in the state of Arizona as published by the Arizona Department of Education.…

  13. Examining General and Specific Factors in the Dimensionality of Oral Language and Reading in 4th-10th Grades

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Koon, Sharon; Petscher, Yaacov; Mitchell, Alison; Truckenmiller, Adrea

    2015-01-01

    The objective of this study was to explore dimensions of oral language and reading and their influence on reading comprehension in a relatively understudied population--adolescent readers in 4th through 10th grades. The current study employed latent variable modeling of decoding fluency, vocabulary, syntax, and reading comprehension so as to…

  14. Investigating the Effects of a DNA Fingerprinting Workshop on 10th Grade Students' Self Efficacy and Attitudes toward Science.

    ERIC Educational Resources Information Center

    Sonmez, Duygu; Simcox, Amanda

    The purpose of this study was to investigate the effects of a DNA Fingerprinting Workshop on 10th grade students' self-efficacy and attitudes toward science. The content of the workshop was based on the high school science curriculum and included multimedia instruction, a laboratory experiment, and participation of undergraduate students as mentors. N=93…

  15. Influence of V-Diagrams on 10th Grade Turkish Students' Achievement in the Subject of Mechanical Waves

    ERIC Educational Resources Information Center

    Tekes, Hanife; Gonen, Selahattin

    2012-01-01

    The purpose of the present study was to examine how the use of V-diagrams, one of the learning techniques used in laboratory studies, influenced students' achievement in experiments conducted on the 10th grade lesson unit of "waves". In the study, a quasi-experimental design with a pretest and posttest control group was used. The…

  16. Progression in Complexity: Contextualizing Sustainable Marine Resources Management in a 10th Grade Classroom

    NASA Astrophysics Data System (ADS)

    Bravo-Torija, Beatriz; Jiménez-Aleixandre, María-Pilar

    2012-01-01

    Sustainable management of marine resources raises great challenges. Working with this socio-scientific issue in the classroom requires students to apply complex models about energy flow and trophic pyramids in order to understand that food chains represent transfer of energy, to construct meanings for sustainable resources management through discourse, and to connect them to actions and decisions in a real-life context. In this paper we examine the process of elaboration of plans for resources management in a marine ecosystem by 10th grade students (15-16 years old) in the context of solving an authentic task. A complete class (N = 14) worked in a sequence about ecosystems. Working in small groups, the students made models of energy flow and trophic pyramids, and used them to solve the problem of feeding a small community for a long time. Data collection included videotaping and audiotaping of all of the sessions, and collecting the students' written productions. The research objective is to examine the process of designing a plan for sustainable resources management in terms of the discursive moves of the students across stages in contextualizing practices, or different degrees of complexity (Jiménez-Aleixandre & Reigosa, International Journal of Science Education, 14(1): 51-61, 2006), understood as transformations from theoretical statements to decisions about the plan. The analysis of students' discursive moves shows how the groups progressed through stages of connecting different models, between them and with the context, in order to solve the task. The challenges related to taking this sustainability issue to the classroom are discussed.

  17. The Earlier the Better? Taking the AP® in 10th Grade. Research Report No. 2012-10

    ERIC Educational Resources Information Center

    Rodriguez, Awilda; McKillip, Mary E. M.; Niu, Sunny X.

    2013-01-01

    In this report, the authors examine the impact of scoring a 1 or 2 on an AP® Exam in 10th grade on later AP Exam participation and performance. As access to AP courses increases within and across schools, a growing number of students are taking AP courses and exams in the earlier grades of high school. Using a matched sample of AP and no-AP…

  18. Changes in Educational Expectations between 10th and 12th Grades across Cohorts

    ERIC Educational Resources Information Center

    Park, Sueuk; Wells, Ryan; Bills, David

    2015-01-01

    The mean levels of educational expectations of American high school students have increased over the past generation; individual educational expectations change as students mature. Using the National Education Longitudinal Study and the Education Longitudinal Study, we examined simultaneously the changes in individuals' expectations from 10th to…

  19. Evaluation of the 10th Grade Computerized Mathematics Curriculum from the Perspective of the Teachers and Educational Supervisors in the Southern Region in Jordan

    ERIC Educational Resources Information Center

    Al-Tarawneh, Sabri Hassan; Al-Qadi, Haitham Mamdouh

    2016-01-01

    This study aimed at evaluating the 10th grade computerized mathematics curriculum from the perspective of the teachers and supervisors in the southern region of Jordan. The study population consisted of all the teachers who teach the 10th grade in the southern region, with a total of 309 teachers and 20 supervisors. The sample consisted of…

  20. Predicting 3rd Grade and 10th Grade FCAT Success for 2007-08. Research Brief. Volume 0702

    ERIC Educational Resources Information Center

    Froman, Terry; Rubiera, Vilma

    2008-01-01

    For the past few years the Florida School Code has set the Florida Comprehensive Assessment Test (FCAT) performance requirements for promotion of 3rd graders and graduation for 10th graders. Grade 3 students who do not score at level 2 or higher on the FCAT SSS Reading must be retained unless exempted for special circumstances. Grade 10 students…

  1. Examining General and Specific Factors in the Dimensionality of Oral Language and Reading in 4th–10th Grades

    PubMed Central

    Foorman, Barbara R.; Koon, Sharon; Petscher, Yaacov; Mitchell, Alison; Truckenmiller, Adrea

    2015-01-01

    The objective of this study was to explore dimensions of oral language and reading and their influence on reading comprehension in a relatively understudied population—adolescent readers in 4th through 10th grades. The current study employed latent variable modeling of decoding fluency, vocabulary, syntax, and reading comprehension so as to represent these constructs with minimal error and to examine whether residual variance unaccounted for by oral language can be captured by specific factors of syntax and vocabulary. One-, 3-, and 4-factor models and a bifactor model were tested with 1,792 students in 18 schools in 2 large urban districts in the Southeast. Students were individually administered measures of expressive and receptive vocabulary, syntax, and decoding fluency in mid-year. At the end of the year students took the state reading test as well as a group-administered, norm-referenced test of reading comprehension. The bifactor model fit the data best in all 7 grades and explained 72% to 99% of the variance in reading comprehension. The specific factors of syntax and vocabulary explained significant unique variance in reading comprehension in 1 grade each. The decoding fluency factor was significantly correlated with the reading comprehension and oral language factors in all grades, but, in the presence of the oral language factor, was not significantly associated with the reading comprehension factor. Results support a bifactor model of lexical knowledge rather than the 3-factor model of the Simple View of Reading, with the vast majority of variance in reading comprehension explained by a general oral language factor. PMID:26346839

  2. Energy-drink consumption and its relationship with substance use and sensation seeking among 10th grade students in Istanbul.

    PubMed

    Evren, Cuneyt; Evren, Bilge

    2015-06-01

    The aim of this study was to determine the prevalence and correlates of energy-drink (ED) consumption among 10th grade students in Istanbul, Turkey. A cross-sectional online self-report survey was conducted in 45 schools from the 15 districts of Istanbul. The questionnaire included sections about demographic data, self-destructive behavior, and use of substances including tobacco, alcohol, and drugs. The Psychological Screening Test for Adolescents (PSTA) was also used. The analyses were based on 4957 subjects. The rate of those who reported ED consumption at least once within the last year was 62.0% (n=3072), whereas the rate of those who reported ED consumption at least once a month was 31.1%. There were consistent, statistically significant associations of gender, lifetime substance use (tobacco, alcohol, and drug use), measures of sensation seeking, psychological problems (depression, anxiety, anger, impulsivity), and self-destructive behavior (self-harming behavior and suicidal thoughts) with ED consumption. In logistic regression models, male gender, sensation seeking, and lifetime tobacco, alcohol, and drug use predicted all frequencies of ED consumption. In addition to these predictors, anger and self-harming behavior also predicted ED consumption at least once a month. There were no interactions between the associations of lifetime tobacco, alcohol, and drug use with ED consumption. The findings suggest that the ED consumption of male students is related to three clusters of substances (tobacco, alcohol, and drugs) through sensation seeking, and that these relationships do not interact with each other.
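The kind of logistic-regression modeling this abstract describes can be illustrated with a minimal sketch. Everything below is synthetic: the predictors, effect sizes, and sample are invented stand-ins (nothing comes from the Istanbul survey), and the fit uses plain gradient ascent on the log-likelihood rather than any particular statistics package.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic survey-style predictors: [male, sensation-seeking score, lifetime tobacco use].
n = 500
X = np.column_stack([
    rng.integers(0, 2, n),   # gender (1 = male), invented
    rng.normal(0, 1, n),     # standardized sensation-seeking score, invented
    rng.integers(0, 2, n),   # lifetime tobacco use (0/1), invented
])

# Simulate a binary outcome (monthly ED consumption) with assumed positive effects.
true_logits = -0.5 + 0.8 * X[:, 0] + 1.0 * X[:, 1] + 0.9 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-true_logits))).astype(float)

# Fit P(ED use) = sigmoid(b0 + X @ b) by gradient ascent on the log-likelihood.
Xb = np.column_stack([np.ones(n), X])
beta = np.zeros(4)
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ beta))
    beta += 0.1 * Xb.T @ (y - p) / n

print("coefficients (intercept, male, sensation, tobacco):", np.round(beta, 2))
```

Positive fitted coefficients correspond to the study's finding that male gender, sensation seeking, and lifetime substance use each predicted ED consumption.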

  3. Perceptions of 9th and 10th Grade Students on How Their Environment, Cognition, and Behavior Motivate Them in Algebra and Geometry Courses

    ERIC Educational Resources Information Center

    Harootunian, Alen

    2012-01-01

    In this study, relationships were examined between students' perception of their cognition, behavior, environment, and motivation. The purpose of the research study was to explore the extent to which 9th and 10th grade students' perception of environment, cognition, and behavior can predict their motivation in Algebra and Geometry courses. A…

  4. An Examination of the Conditions of School Facilities Attended by 10th-Grade Students in 2002. E.D. TAB. NCES 2006-302

    ERIC Educational Resources Information Center

    Mike Planty; Jill F. DeVoe; Jeffrey A. Owings; Kathryn Chandler

    2005-01-01

    This report presents key findings from the Education Longitudinal Study of 2002 (ELS:2002) Facilities Checklist for all ELS:2002 public and private schools and students in the 10th grade. The facilities instrument was administered as a part of the ELS:2002 and focused on the conditions of school facilities, including disrepair, cleanliness,…

  5. The Basic Program of Vocational Agriculture in Louisiana. Ag I and Ag II (9th and 10th Grades). Volume I. Bulletin 1690-I.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This document is the first volume of a state curriculum guide on vocational agriculture for use in the 9th and 10th grades in Louisiana. Three instructional areas are profiled in this volume: orientation to vocational agriculture, agricultural leadership, and soil science. The three units of the orientation area cover introducing beginning…

  6. A Comparison of 9th and 10th Grade Boys' and Girls' Bullying Behaviors in Two States.

    ERIC Educational Resources Information Center

    Isernhagen, Jody; Harris, Sandy

    This study examined the incidence of bullying behaviors among male and female 9th and 10th graders in rural Nebraska and suburban Texas schools. Nebraska students were predominantly Caucasian, and Texas students were African American, Hispanic American, and Caucasian. Student surveys examined such issues as how often bullying occurred, where it…

  7. Analysis of physics textbooks for 10th and 11th grades in accordance with the 2013 secondary school physics curriculum from the perspective of project-based learning

    NASA Astrophysics Data System (ADS)

    Kavcar, Nevzat; Erdem, Aytekin

    2017-02-01

    This study aims to investigate the 10th and 11th grade physics textbooks in accordance with the 2013 Secondary School Physics Curriculum from the perspective of the project-based learning method and to share the results with the physics education community. The research was carried out in the 2015-2016 academic year as part of an undergraduate course taught in the physics teaching program at a faculty of education; 10 senior physics teacher candidates participated in the study. The research method is the survey model based on a qualitative research approach. Data collection tools consist of the reports written by the participants, who examined the curriculum and textbooks for project-based learning problems. According to the research findings, most of the educational gains in the 10th and 11th grade physics textbooks were supported with experimental activities; however, project-based assignments are needed.

  8. Growth: How Much is Too Much? Student Book. Science Module (9th-10th Grade Biology). Revised Edition.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This learning module is designed to integrate environmental education into ninth- and tenth-grade chemistry classes. This module and a companion social studies module were pilot tested in Gwinnett County, Georgia in 1975-76. The module is divided into four parts. Part one provides a broad overview of unit content and proposes questions to…

  9. A Typology of Chemistry Classroom Environments: Exploring the Relationships between 10th Grade Students' Perceptions, Attitudes and Gender

    ERIC Educational Resources Information Center

    Giallousi, M.; Gialamas, V.; Pavlatou, E. A.

    2013-01-01

    The present study was the first in Greece in which educational effectiveness theory constituted a knowledge base for investigating the impact of the chemistry classroom environment on 10th grade students' enjoyment of class. An interpretive heuristic schema was developed and utilised in order to incorporate two factors of teacher behaviour at classroom…

  10. Water: How Good is Good Enough? Teacher's Guide. Social Studies Module (9th-10th Grade Social Studies).

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This teacher's guide is for an environmental education module to integrate topics of water quality in ninth- and tenth-grade social studies classes. This module was pilot tested in Gwinnett County, Georgia in 1975-76. Included in the guide are overall objectives, the module sequence, an introduction, a suggested teaching sequence, a word review…

  11. Water: How Good is Good Enough? Student Book. Science Module (9th-10th Grade Chemistry). Revised Edition.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Coll. of Education.

    This learning module is designed to integrate environmental education into ninth- and tenth-grade chemistry classes. This module and a companion social studies module were pilot tested in Gwinnett County, Georgia in classes of students, many of whom had learning disabilities. It emphasizes activity learning. The module is divided into four parts.…

  12. Climate Change, Risks and Natural Resources: Didactic Issues of the Educational Content of Geography of Bulgaria and the World in the 9th and 10th Grades

    NASA Astrophysics Data System (ADS)

    Dermendzhieva, Stela; Nejdet, Semra

    2017-03-01

    The purpose of this paper is to trace "Climate Change, Risks and Natural Resources" in the curriculum of Geography of Bulgaria and the world in the 9th and 10th grades and to interpret some of its didactic aspects. The analysis of key themes, concepts, and categories related to the environment, to events, and to approaches to environmental protection and the environmentally sound development of sectors of the economy is didactically targeted. Considering the emergence and development of geo-ecological issues, their scope, and their importance to the environment, we systematize some of their types and some approaches to solving them. Geography education in grades 9 and 10 involves acquiring knowledge, developing skills, and forming behaviors of objective perception and assessment of reality in its global, regional, and local aspects. The emerging consumer and individualistic culture, snowballing globalization, increasingly evident global warming, and declining biodiversity form new realities to which education must respond appropriately. Objectivity, consistency, accessibility, and relevance in real terms are meaningful, logical accents; whether and how they are reproduced in the study of Geography of Bulgaria and the world is the subject of the research in this report. The geo-ecological structuring of topics, concepts, and categories can be carried out according to different criteria; in terms of scope, they are local, national or regional, and global. Also important is the interdisciplinary approach, which reveals the unity of "man-society-nature" and clarifies the complexity of its character, with a view to forming a harmonious personality with high geo-ecological consciousness and culture through the activities carried out in their study.

  13. Mountain Dew[R] or Mountain Don't?: A Pilot Investigation of Caffeine Use Parameters and Relations to Depression and Anxiety Symptoms in 5th- and 10th-Grade Students

    ERIC Educational Resources Information Center

    Luebbe, Aaron M.; Bell, Debora J.

    2009-01-01

    Background: Caffeine, the only licit psychoactive drug available to minors, may have a harmful impact on students' health and adjustment, yet little is known about its use or effects on students, especially from a developmental perspective. Caffeine use in 5th- and 10th-grade students was examined in a cross-sectional design, and relations and…

  14. Material Analysis and Processing Systems: A 9th and/or 10th Grade Industrial Education Curriculum Designed To Fulfill the Kansas State Department of Vocational Education's Level 2 Course Requirements.

    ERIC Educational Resources Information Center

    Dean, Harvey R., Ed.

    The teacher developed curriculum guide provides the industrial education teacher with the objectives, equipment lists, material, supplies, references, and activities necessary to teach students of the 9th and/or 10th grade the concepts of interrelationships between material analysis and processing systems. Career information and sociological…

  15. Power Conversion and Transmission Systems: A 9th and/or 10th Grade Industrial Education Curriculum Designed To Fulfill the Kansas State Department of Vocational Education's Level 2 Course Requirements.

    ERIC Educational Resources Information Center

    Dean, Harvey R., Ed.

    The document is a guide to a 9th and 10th grade industrial education course investigating the total system of power--how man controls, converts, transmits, and uses energy; the rationale is that if one is to learn of the total system of industry, the subsystem of power must be investigated. The guide provides a "body of knowledge" chart…

  16. The impact of high-stakes, state-mandated student performance assessment on 10th grade English, mathematics, and science teachers' instructional practices

    NASA Astrophysics Data System (ADS)

    Vogler, Kenneth E.

    The purpose of this study was to determine if the public release of student results on high-stakes, state-mandated performance assessments influences instructional practices, and if so, in what manner. The research focused on changes in teachers' instructional practices and factors that may have influenced such changes since the public release of high-stakes, state-mandated student performance assessment scores. The data for this study were obtained from a 54-question survey instrument given to a stratified random sample of teachers teaching at least one section of 10th grade English, mathematics, or science in an academic public high school within Massachusetts. Two hundred and fifty-seven (257) teachers, or 62% of the total sample, completed the survey instrument. An analysis of the data found that teachers are making changes in their instructional practices. The data show notable increases in the use of open-response questions, creative/critical thinking questions, problem-solving activities, use of rubrics or scoring guides, writing assignments, and inquiry/investigation. Teachers also have decreased the use of multiple-choice and true-false questions, textbook-based assignments, and lecturing. Also, the data show that teachers felt that changes made in their instructional practices were most influenced by an "interest in helping my students attain MCAS assessment scores that will allow them to graduate high school" and by an "interest in helping my school improve student (MCAS) assessment scores." Finally, mathematics teachers and teachers with 13-19 years of experience report making significantly more changes than did others. It may be interpreted from the data that the use of state-mandated student performance assessments and the high stakes attached to this type of testing program contributed to changes in teachers' instructional practices. The changes in teachers' instructional practices have included increases in the use of instructional practices deemed…

  17. Affective decision-making deficits, linked to a dysfunctional ventromedial prefrontal cortex, revealed in 10th grade Chinese adolescent binge drinkers.

    PubMed

    Johnson, C Anderson; Xiao, Lin; Palmer, Paula; Sun, Ping; Wang, Qiong; Wei, Yonglan; Jia, Yong; Grenard, Jerry L; Stacy, Alan W; Bechara, Antoine

    2008-01-31

    The primary aim of this study was to test the hypothesis that adolescent binge drinkers, but not lighter drinkers, would show signs of impairment on tasks of affective decision-making as measured by the Iowa Gambling Test (IGT), when compared to adolescents who never drank. We tested 207 10th grade adolescents in Chengdu City, China, using two versions of the IGT, the original and a variant, in which the reward/punishment contingencies were reversed. This enables one to distinguish among different possibilities of impaired decision-making, such as insensitivity to long-term consequences, or hypersensitivity to reward. Furthermore, we tested working memory capacity using the Self-ordered Pointing Test (SOPT). Paper and pencil questionnaires were used to assess drinking behaviors and school academic performance. Results indicated that relative to never-drinkers, adolescent binge drinkers, but not other (ever, past 30-day) drinkers, showed significantly lower net scores on the original version of the IGT especially in the latter trials. Furthermore, the profiles of behavioral performance from the original and variant versions of the IGT were consistent with a decision-making impairment attributed to hypersensitivity to reward. In addition, working memory and school academic performance revealed no differences between drinkers (at all levels) and never-drinkers. Logistic regression analysis showed that after controlling for demographic variables, working memory, and school academic performance, the IGT significantly predicted binge-drinking. These findings suggest that a "myopia" for future consequences linked to hypersensitivity to reward is a key characteristic of adolescents with binge-drinking behavior, and that underlying neural mechanisms for this "myopia" for future consequences may serve as a predisposing factor that renders some adolescents more susceptible to future addictive behaviors.

  18. Effect of cooperative learning strategies on student verbal interactions and achievement during conceptual change instruction in 10th grade general science

    NASA Astrophysics Data System (ADS)

    Lonning, Robert A.

    This study evaluated the effects of cooperative learning on students' verbal interaction patterns and achievement in a conceptual change instructional model in secondary science. Current conceptual change instructional models recognize the importance of student-student verbal interactions, but lack specific strategies to encourage these interactions. Cooperative learning may provide the necessary strategies. Two sections of low-ability 10th-grade students were designated the experimental and control groups. Students in both sections received identical content instruction on the particle model of matter using conceptual change teaching strategies. Students worked in teacher-assigned small groups on in-class assignments. The experimental section used cooperative learning strategies involving instruction in collaborative skills and group evaluation of assignments. The control section received no collaborative skills training and students were evaluated individually on group work. Gains on achievement were assessed using pre- and posttreatment administrations of an investigator-designed short-answer essay test. The assessment strategies used in this study represent an attempt to measure conceptual change. Achievement was related to students' ability to correctly use appropriate scientific explanations of events and phenomena and to discard use of naive conceptions. Verbal interaction patterns of students working in groups were recorded on videotape and analyzed using an investigator-designed verbal interaction scheme. The targeted verbalizations used in the interaction scheme were derived from the social learning theories of Piaget and Vygotsky. It was found that students using cooperative learning strategies showed greater achievement gains as defined above and made greater use of specific verbal patterns believed to be related to increased learning. The results of the study demonstrated that cooperative learning strategies enhance conceptual change instruction. 

  19. REPORT FOR COMMERCIAL GRADE NICKEL CHARACTERIZATION AND BENCHMARKING

    SciTech Connect

    2012-12-20

    Oak Ridge Associated Universities (ORAU), under the Oak Ridge Institute for Science and Education (ORISE) contract, has completed the collection, sample analysis, and review of analytical results to benchmark the concentrations of gross alpha-emitting radionuclides, gross beta-emitting radionuclides, and technetium-99 in commercial grade nickel. This report presents methods, change management, observations, and statistical analysis of materials procured from sellers representing nine countries on four continents. The data suggest there is a low probability of detecting alpha- and beta-emitting radionuclides in commercial nickel. Technetium-99 was not detected in any samples, thus suggesting it is not present in commercial nickel.

  20. Language Arts Curriculum Framework: Sample Grade Level Benchmarks, Grades 5-8.

    ERIC Educational Resources Information Center

    Arkansas State Dept. of Education, Little Rock.

    Based on the 1998 Arkansas English Language Arts Frameworks, this framework lists benchmarks for grades five through eight in writing; reading; and listening, speaking, and viewing. The writing section's stated standards are to help students employ a wide range of strategies as they write; use different writing process elements appropriately to…

  1. Trends in Substance Use among 6th- to 10th-Grade Students from 1998 to 2010: Findings from a National Probability Study

    ERIC Educational Resources Information Center

    Brooks-Russell, Ashley; Farhat, Tilda; Haynie, Denise; Simons-Morton, Bruce

    2014-01-01

    Of the handful of national studies tracking trends in adolescent substance use in the United States, only the Health Behavior in School-Aged Children (HBSC) study collects data from 6th through 10th graders. The purpose of this study was to examine trends from 1998 to 2010 (four time points) in the prevalence of tobacco, alcohol, and marijuana use…

  2. Modeles de rendement langagier: Francais, 10e annee. Francais langue premiere (Models of Linguistic Production: French, 10th Grade. French as a Native Language).

    ERIC Educational Resources Information Center

    Alberta Learning, Edmonton. Direction de l'education francaise.

    Aligned with its 1998 standards for first- and second-language learning, Alberta Learning has published lesson plans that aim for a closer relationship between learning and evaluation. Each volume in this series presents a specific task for students that involves planning, carrying out, and evaluating their work. The task for tenth grade students…

  3. The Impact of Internet Virtual Physics Laboratory Instruction on the Achievement in Physics, Science Process Skills and Computer Attitudes of 10th-Grade Students

    ERIC Educational Resources Information Center

    Yang, Kun-Yuan; Heh, Jia-Sheng

    2007-01-01

    The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction with traditional laboratory instruction in physics academic achievement, performance of science process skills, and computer attitudes of tenth grade students. One-hundred and fifty students from four classes at one private…

  4. The Basic Program of Vocational Agriculture in Louisiana. Ag I and Ag II (9th and 10th Grades). Volume III. Bulletin 1690-III.

    ERIC Educational Resources Information Center

    Louisiana State Dept. of Education, Baton Rouge. Div. of Vocational Education.

    This curriculum guide, the third volume of the series, outlines the basic program of vocational agriculture for Louisiana students in the ninth and tenth grades. Covered in the five units on plant science are growth processes of plants, cultural practices for plants, insects affecting plants, seed and plant selection, and diseases that affect…

  5. Interobserver agreement for Polyomavirus nephropathy grading in renal allografts using the working proposal from the 10th Banff Conference on Allograft Pathology.

    PubMed

    Sar, Aylin; Worawichawong, Suchin; Benediktsson, Hallgrimur; Zhang, Jianguo; Yilmaz, Serdar; Trpkov, Kiril

    2011-12-01

    A classification schema for grading Polyomavirus nephropathy was proposed at the 2009 Banff allograft meeting. The schema included 3 stages of Polyomavirus nephropathy: early (stage A), florid (stage B), and late sclerosing (stage C). Grading categories for histologic viral load levels were also proposed. To examine the applicability and the interobserver agreement of the proposed Polyomavirus nephropathy grading schema, we evaluated 24 renal allograft biopsies with confirmed Polyomavirus nephropathy by histology and SV40. Four renal pathologists independently scored the Polyomavirus nephropathy stage (A, B, or C), without knowledge of the clinical history. Viral load was scored as a percent of tubules exhibiting viral replication, using either a 3-tier viral load score (1: ≤1%; 2: >1%-10%; 3: >10%) or a 4-tier score (1: ≤1%; 2: >1%-≤5%; 3: >5%-15%; 4: >15%). The κ score for the Polyomavirus nephropathy stage was 0.47 (95% confidence interval, 0.35-0.60; P < .001). There was a substantial agreement using both the 3-tier and the 4-tier scoring for the viral load (Kendall concordance coefficients, 0.72 and 0.76, respectively; P < .001 for both). Complete agreement was better with the 3-tier viral load score. In this first attempt to evaluate the interobserver reproducibility of the proposed Polyomavirus nephropathy classification schema, we demonstrated moderate κ agreement in assessing the Polyomavirus nephropathy stage and a substantial agreement in scoring the viral load level. The proposed grading schema can be applied in routine allograft biopsy practice for grading the Polyomavirus nephropathy stage and the viral load level.
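    For context, the κ statistic reported above corrects observed agreement for the agreement expected by chance. The sketch below computes a two-rater Cohen's κ on invented stage calls; it is a simplification of the study's four-rater analysis, provided only to show the arithmetic.

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters' categorical calls on the same cases."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # chance agreement: product of each rater's marginal category frequencies
    expected = sum((c1[k] / n) * (c2[k] / n) for k in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Hypothetical stage calls (A/B/C) by two pathologists on 8 biopsies
r1 = ["A", "B", "B", "C", "A", "B", "C", "A"]
r2 = ["A", "B", "C", "C", "A", "B", "B", "A"]
print(round(cohen_kappa(r1, r2), 2))  # → 0.62
```

    Values around 0.4-0.6 are conventionally read as "moderate" agreement, which matches the study's κ of 0.47 for stage.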

  6. Trends in Bullying, Physical Fighting, and Weapon Carrying Among 6th- Through 10th-Grade Students From 1998 to 2010: Findings From a National Study

    PubMed Central

    Brooks-Russell, Ashley; Wang, Jing; Iannotti, Ronald J.

    2014-01-01

    Objectives. We examined trends from 1998 to 2010 in bullying, bullying victimization, physical fighting, and weapon carrying and variations by gender, grade level, and race/ethnicity among US adolescents. Methods. The Health Behavior in School-Aged Children surveys of nationally representative samples of students in grades 6 through 10 were completed in 1998 (n = 15 686), 2002 (n = 14 818), 2006 (n = 9229), and 2010 (n = 10 926). We assessed frequency of bullying behaviors, physical fighting, and weapon carrying as well as weapon type and subtypes of bullying. We conducted logistic regression analyses, accounting for the complex sampling design, to identify trends and variations by demographic factors. Results. Bullying perpetration, bullying victimization, and physical fighting declined from 1998 to 2010. Weapon carrying increased for White students only. Declines in bullying perpetration and victimization were greater for boys than for girls. Declines in bullying perpetration and physical fighting were greater for middle-school students than for high-school students. Conclusions. Declines in most violent behaviors are encouraging; however, lack of decline in weapon carrying merits further attention. PMID:24825213

  7. The Impact of Internet Virtual Physics Laboratory Instruction on the Achievement in Physics, Science Process Skills and Computer Attitudes of 10th-Grade Students

    NASA Astrophysics Data System (ADS)

    Yang, Kun-Yuan; Heh, Jia-Sheng

    2007-10-01

    The purpose of this study was to investigate and compare the impact of Internet Virtual Physics Laboratory (IVPL) instruction with traditional laboratory instruction in physics academic achievement, performance of science process skills, and computer attitudes of tenth grade students. One hundred and fifty students from four classes at one private senior high school in Taoyuan County, Taiwan, R.O.C. were sampled. The four classes were divided into an experimental group and a control group of 75 students each. The pre-test results indicated that the students' entry-level physics academic achievement, science process skills, and computer attitudes were equal for both groups. On the post-test, the experimental group achieved significantly higher mean scores in physics academic achievement and science process skills. There was no significant difference in computer attitudes between the groups. We concluded that the IVPL had potential to help tenth graders improve their physics academic achievement and science process skills.

  8. Rates of Substance Use of American Indian Students in 8th, 10th, and 12th Grades Living on or Near Reservations: Update, 2009–2012

    PubMed Central

    Harness, Susan D.; Swaim, Randall C.; Beauvais, Fred

    2014-01-01

    Objectives Understanding the similarities and differences between substance use rates for American Indian (AI) young people and young people nationally can better inform prevention and treatment efforts. We compared substance use rates for a large sample of AI students living on or near reservations for the years 2009–2012 with national prevalence rates from Monitoring the Future (MTF). Methods We identified and sampled schools on or near AI reservations by region; 1,399 students in sampled schools were administered the American Drug and Alcohol Survey. We computed lifetime, annual, and last-month prevalence measures by grade and compared them with MTF results for the same time period. Results Prevalence rates for AI students were significantly higher than national rates for nearly all substances, especially for 8th graders. Rates of marijuana use were very high, with lifetime use higher than 50% for all grade groups. Other findings of interest included higher binge drinking rates and OxyContin® use for AI students. Conclusions The results from this study demonstrate that adolescent substance use is still a major problem among reservation-based AI adolescent students, especially 8th graders, where prevalence rates were sometimes dramatically higher than MTF rates. Given the high rates of substance use-related problems on reservations, such as academic failure, delinquency, violent criminal behavior, suicidality, and alcohol-related mortality, the costs to members of this population and to society will continue to be much too high until a comprehensive understanding of the root causes of substance use is established. PMID:24587550

  9. Criterion-related validity of curriculum-based measurement in writing with narrative and expository prompts relative to passage copying speed in 10th grade students.

    PubMed

    Mercer, Sterett H; Martínez, Rebecca S; Faust, Dennis; Mitchell, Rachel R

    2012-06-01

    We investigated the criterion-related validity of four indicators of curriculum-based measurement in writing (WCBM) when using expository versus narrative writing prompts as compared to the validity of passage copying speed. Specifically, we compared criterion-related validity of production-dependent (total words written, correct word sequences), accurate-production (correct minus incorrect word sequences [CIWS]), and production-independent (percent of correct word sequences [%CWS]) scoring methods on narrative and expository writing probes in relation to a state-mandated writing assessment. Participants included all tenth grade students (N=163) from a rural high school in the Midwest. Results indicated that the more complex indicators of writing, %CWS (when taking into account passage copying speed), and CIWS (when passage copying speed was not considered) on narrative probes explained the greatest amount of variance in the criterion measure. None of the WCBM indicators, alone or in combination with passage copying speed, explained more than 25% of the variance in the state writing assessment, suggesting that WCBM may have limitations as a universal screening measure for high school students.
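    The four WCBM indicators compared in this study differ only in how they combine the same raw counts from a scored writing probe. A sketch of that arithmetic, using invented counts rather than the study's data:

```python
def wcbm_indicators(total_words, correct_seq, incorrect_seq):
    """CBM-writing indicators from raw counts.

    total_words: total words written (TWW)
    correct_seq / incorrect_seq: correct / incorrect word sequences
    """
    return {
        "TWW": total_words,                   # production-dependent
        "CWS": correct_seq,                   # production-dependent
        "CIWS": correct_seq - incorrect_seq,  # accurate-production
        # production-independent (%CWS)
        "pct_CWS": round(100 * correct_seq / (correct_seq + incorrect_seq), 1),
    }

print(wcbm_indicators(total_words=52, correct_seq=45, incorrect_seq=9))
```

    Because %CWS is a proportion, it is insensitive to how much a student writes, which is why the study pairs it with passage copying speed as a separate fluency measure.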

  10. Analyses of Weapons-Grade MOX VVER-1000 Neutronics Benchmarks: Pin-Cell Calculations with SCALE/SAS2H

    SciTech Connect

    Ellis, R.J.

    2001-01-11

    A series of unit pin-cell benchmark problems have been analyzed related to irradiation of mixed oxide fuel in VVER-1000s (water-water energetic reactors). One-dimensional, discrete-ordinates eigenvalue calculations of these benchmarks were performed at ORNL using the SAS2H control sequence module of the SCALE-4.3 computational code system, as part of the Fissile Materials Disposition Program (FMDP) of the US DOE. Calculations were also performed using the SCALE module CSAS to confirm the results. The 238-neutron-energy-group SCALE nuclear data library 238GROUPNDF5 (based on ENDF/B-V) was used for all calculations. The VVER-1000 pin-cell benchmark cases modeled with SAS2H included zero-burnup calculations for eight fuel material variants (from LEU UO₂ to weapons-grade MOX) at five different reactor states, and three fuel depletion cases up to high burnup. Results of the SAS2H analyses of the VVER-1000 neutronics benchmarks are presented in this report. Good general agreement was obtained between the SAS2H results, the ORNL results using HELIOS-1.4 with ENDF/B-VI nuclear data, and the results from several Russian benchmark studies using the codes TVS-M, MCU-RFFI/A, and WIMS-ABBN. This SAS2H benchmark study is useful for the verification of HELIOS calculations, the HELIOS code being the principal computational tool at ORNL for physics studies of assembly design for weapons-grade plutonium disposition in Russian reactors.

  11. 10th World Earthquake Engineering Conference

    NASA Astrophysics Data System (ADS)

    Ranguelov, Boyko; Housner, George

    The 10th World Conference on Earthquake Engineering (10WCEE) took place from July 19 to 24 in Madrid, Spain. More than 1500 participants from 51 countries attended the conference. All aspects of earthquake engineering were covered, and a worldwide update of modern research and practice, as well as future directions in the field, was provided through reports, papers, posters, two keynote lectures, ten state-of-the-art reports, and eleven special theme sessions.

  12. PREFACE: 10th Joint Conference on Chemistry

    NASA Astrophysics Data System (ADS)

    2016-02-01

    The 10th Joint Conference on Chemistry is an international conference organized by the chemistry departments of four universities in central Java, Indonesia: Sebelas Maret University, Diponegoro University, Semarang State University and Soedirman University. It was held in Solo, Indonesia, on September 8-9, 2015, and drew 133 participants in total, including the invited speakers. The conference emphasized multidisciplinary chemical issues and the impact of today's sustainable chemistry, covering the following topics:
    • Material innovation for sustainable goals
    • Development of renewable and sustainable energy based on chemistry
    • New drug design, experimental and theoretical methods
    • Green synthesis and characterization of materials (from molecules to functionalized materials)
    • Catalysis as core technology in industry
    • Natural product isolation and optimization

  13. 10th Annual Great Moonbuggy Race

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Students from across the United States and as far away as Puerto Rico came to Huntsville, Alabama, for the 10th annual Great Moonbuggy Race at the U.S. Space & Rocket Center. Sixty-eight teams representing high schools and colleges from all over the United States and Puerto Rico raced human-powered vehicles over lunar-like terrain. Vehicles, each powered by two team members, one male and one female, raced one at a time over a half-mile obstacle course of simulated moonscape terrain. The competition is inspired by the development, some 30 years ago, of the Lunar Roving Vehicle (LRV), a program managed by the Marshall Space Flight Center. The LRV team had to design a compact, lightweight, all-terrain vehicle that could be transported to the Moon in the small Apollo spacecraft. The Great Moonbuggy Race challenges students to design and build a human-powered vehicle so they will learn how to deal with real-world engineering problems similar to those faced by the actual NASA LRV team. In this photograph, Team No. 1 from North Dakota State University in Fargo conquers one of several obstacles on its way to victory. The team captured first place honors in the college-level competition.

  14. 10th Annual Great Moonbuggy Race

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Students from across the United States and as far away as Puerto Rico came to Huntsville, Alabama, for the 10th annual Great Moonbuggy Race at the U.S. Space & Rocket Center. Sixty-eight teams representing high schools and colleges from all over the United States and Puerto Rico raced human-powered vehicles over lunar-like terrain. Vehicles, each powered by two team members, one male and one female, raced one at a time over a half-mile obstacle course of simulated moonscape terrain. The competition is inspired by the development, some 30 years ago, of the Lunar Roving Vehicle (LRV), a program managed by the Marshall Space Flight Center. The LRV team had to design a compact, lightweight, all-terrain vehicle that could be transported to the Moon in the small Apollo spacecraft. The Great Moonbuggy Race challenges students to design and build a human-powered vehicle so they will learn how to deal with real-world engineering problems similar to those faced by the actual NASA LRV team. In this photograph, racers from C-1 High School in Lafayette County, Missouri, get ready to tackle the course. The team pedaled its way to victory over 29 other teams to take first place honors. It was the second year in a row that a team from the school placed first in the high school division. (NASA/MSFC)

  15. Interracial Best Friendships: Relationship with 10th Graders' Academic Achievement Level

    ERIC Educational Resources Information Center

    Newgent, Rebecca A.; Lee, Sang Min; Daniel, Ashley F.

    2007-01-01

    The authors examined the relationships between interracial best friendships and 10th-grade students' academic achievement. The analysis consisted of data from 13,134 participants in the ELS:2002 database. The results indicated that interracial best friendships for minority students (African Americans, Latino Americans, Asian Americans, and…

  16. PREFACE: 10th International LISA Symposium

    NASA Astrophysics Data System (ADS)

    Ciani, Giacomo; Conklin, John W.; Mueller, Guido

    2015-05-01

    large mission in Europe, and a potential comprehensive technology development program followed by a number one selection in the 2020 Decadal Survey in the U.S. The selection of L2 was combined with the selection of L3, and the newly formed eLISA consortium submitted an updated NGO concept under the name eLISA, or Evolved LISA, to the competition. It was widely believed that the launch date of 2028 for L2 would be seen by the selection committee as providing sufficient time to retire any remaining technological risks for LISA. However, the committee selected the 'Hot and Energetic Universe', an X-ray mission, as the science theme for L2 and the 'Gravitational Universe', the eLISA science theme, for L3. Although the community was very disappointed, the decision was not a surprise. LPF did experience further delays just prior to and during the selection process, which may have influenced the decision. The strong technology program in the U.S. never materialized because WFIRST, the highest-priority large mission in the 2010 Decadal following JWST, not only moved ahead but was also up-scoped significantly. The L3 selection, the WFIRST schedule, and the missing comprehensive technology development in the U.S. will make a launch of a GW mission in the 2020s very difficult. Although many in the LISA community, including ourselves, did not want to accept this harsh reality, this was the situation just prior to the 10th LISA symposium. However, despite all of this, the LISA team is now hopeful! In May of 2014 the LISA community gathered at the University of Florida in Gainesville to discuss progress in both the science and technology of LISA.
The most notable plenary and contributed sessions included updates on the progress of LISA Pathfinder, which remains on track for launch in the second half of 2015(!), the science of LISA which ranges from super-massive black hole mergers and cosmology to the study of compact binaries within our own galaxy, and updates from other programs that share some of

  17. The Effect of Using the "SQP2RS via WTL" Strategy through Science Context to 10th Graders' Reading Comprehension in English in Palestine

    ERIC Educational Resources Information Center

    Qabaja, Ziad Mohammed Mahmoud; Nafi', Jamal Subhi Ismail; Abu-Nimah, Maisa' Issa Khalil

    2016-01-01

    The study aimed at investigating the effect of using the "SQP2RS via WTL" strategy through science context to 10th graders' reading comprehension in English in Bethlehem district in Palestine. The study has been applied on a purposeful sample of 10th grade students at public schools in Bethlehem district in the academic year 2015/2016.…

  18. EDITORIAL: STAM celebrates its 10th anniversary

    NASA Astrophysics Data System (ADS)

    Ushioda, Sukekatsu

    2010-02-01

    I would like to extend my warmest greetings to the readers and staff of Science and Technology of Advanced Materials (STAM), on the occasion of its 10th anniversary. Launched in 2000, STAM marks this year an important milestone in its history. This is a great occasion to celebrate. STAM was founded by Tsuyoshi Masumoto in collaboration with Teruo Kishi and Toyonobu Yoshida as a world-class resource for the materials science community. It was initially supported by several materials research societies and was published as a regular peer-reviewed journal. Significant changes occurred in 2008, when the National Institute for Materials Science (NIMS) became solely responsible for all the costs of maintaining the journal. STAM was transformed into an open-access journal published by NIMS in partnership with IOP Publishing. As a result, the publication charges were waived and the entire STAM content, including all back issues, became freely accessible through the IOP Publishing website. The transition has made STAM more competitive and successful in global publication communities, with innovative ideas and approaches. The journal has also changed its publication strategy, aiming to publish a limited number of high-quality articles covering the frontiers of materials science. Special emphasis has been placed on reviews and focus issues, providing recent summaries of hot materials science topics. Publication has become electronic only; however, selected issues are printed and freely distributed at major international scientific events. The Editorial Board has been expanded to include leading experts from all over the world and, together with the Editorial Office, the board members are doing their best to transform STAM into a leading materials science journal. These efforts are paying off, as shown by the rapidly increasing number of article downloads and citations in 2009. I believe that the STAM audience can not only deepen their knowledge in their own specialties but

  19. Ninth Rib Syndrome after 10th Rib Resection

    PubMed Central

    Yu, Hyun Jeong; Jeong, Yu Sub; Lee, Dong Hoon

    2016-01-01

    The 12th rib syndrome is a disease that causes pain between the upper abdomen and the lower chest. It is assumed that impingement on the nerves between the ribs causes pain in the lower chest, upper abdomen, and flank. A 74-year-old female patient visited a pain clinic complaining of pain in her back and left chest wall, rated 7 on the 0-10 Numeric Rating Scale (NRS). She had undergone lateral fixation at T12-L2 six years earlier. After the operation, she had multiple osteoporotic compression fractures. When the spine was bent, the patient complained of a sharp pain in the left mid-axillary line and radiating pain toward the abdomen. On physical examination, the 10th rib was not palpable, and an image of the rib cage confirmed that the left 10th rib was severed. When pressure was applied to the patient's 9th rib, the pain was reproduced. Therefore, the patient was diagnosed with 9th rib syndrome, and ultrasound-guided 9th and 10th intercostal nerve blocks were performed around the tips of the severed 10th rib. In addition, local anesthetics with triamcinolone were administered into the muscles beneath the 9th rib at the point of greatest tenderness. The patient's pain was reduced to NRS 2. In this case, it is suspected that the patient had a partial resection of the left 10th rib in the past, and subsequent compression fractures at T8 and T9 led to deformation of the rib cage, causing the tip of the remaining 10th rib to impinge on the 9th intercostal nerves, causing pain. PMID:27413484

  20. Factors Related to Alcohol Use among 6th through 10th Graders: The Sarasota County Demonstration Project

    ERIC Educational Resources Information Center

    Eaton, Danice K.; Forthofer, Melinda S.; Zapata, Lauren B.; Brown, Kelli R. McCormack; Bryant, Carol A.; Reynolds, Sherri T.; McDermott, Robert J.

    2004-01-01

    Alcohol consumption by youth can produce negative health outcomes. This study identified correlates of lifetime alcohol use, recent alcohol use, and binge drinking among youth in sixth through 10th grade (n = 2,004) in Sarasota County, Fla. Results from a closed-ended, quantitative survey acknowledged a range of personal, social and environmental…

  1. Byzantine psychosomatic medicine (10th-15th century).

    PubMed

    Eftychiadis, A C

    1999-01-01

    Original elements of psychosomatic medicine are examined in the work of the most important Byzantine physicians and medico-philosophers of the 10th-15th centuries. These topics concern the psychosomatic unity of the human personality; psychosomatic disturbances, diseases, and interactions; organic diseases that cause psychical disorders; psychical pathological reactions that result in somatic diseases; the psychology of the depths of the soul; the psychosomatic pathogenesis of psychiatric and neurological diseases and suicide; the influence of witchcraft on psychosomatic affections; and maniac and demoniac patients. The psychosomatic treatment had a holistic preventive and curative character and included sanitary and dietary measures, physiotherapy, curative bathing, strong purgation, pharmaceutical preparations proportional to the disease, religious disposition, psychoanalysis, and psychotherapy through dialogue and the contribution of the divine factor. Late Byzantine medical science contributed substantially to the progress of psychosomatic medicine and therapeutics. The saint physician Hermione (1st-2nd cent.) is considered the protectress of psychosomatic medicine.

  2. The Comparative Toxicogenomics Database's 10th year anniversary: update 2015

    PubMed Central

    Davis, Allan Peter; Grondin, Cynthia J.; Lennon-Hopkins, Kelley; Saraceni-Richards, Cynthia; Sciaky, Daniela; King, Benjamin L.; Wiegers, Thomas C.; Mattingly, Carolyn J.

    2015-01-01

    Ten years ago, the Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) was developed out of a need to formalize, harmonize and centralize the information on numerous genes and proteins responding to environmental toxic agents across diverse species. CTD's initial approach was to facilitate comparisons of nucleotide and protein sequences of toxicologically significant genes by curating these sequences and electronically annotating them with chemical terms from their associated references. Since then, however, CTD has vastly expanded its scope to robustly represent a triad of chemical–gene, chemical–disease and gene–disease interactions that are manually curated from the scientific literature by professional biocurators using controlled vocabularies, ontologies and structured notation. Today, CTD includes 24 million toxicogenomic connections relating chemicals/drugs, genes/proteins, diseases, taxa, phenotypes, Gene Ontology annotations, pathways and interaction modules. In this 10th year anniversary update, we outline the evolution of CTD, including our increased data content, new ‘Pathway View’ visualization tool, enhanced curation practices, pilot chemical–phenotype results and impending exposure data set. The prototype database originally described in our first report has transformed into a sophisticated resource used actively today to help scientists develop and test hypotheses about the etiologies of environmentally influenced diseases. PMID:25326323

  3. PREFACE: ISEC 2005: The 10th International Superconductive Electronics Conference

    NASA Astrophysics Data System (ADS)

    Rogalla, Horst

    2006-05-01

    The 10th International Superconductive Electronics Conference took place in Noordwijkerhout in the Netherlands, 5-9 September 2005, not far from the birthplace of superconductivity in Leiden nearly 100 years ago. There have been many reasons to celebrate the 10th ISEC: not only was it the 20th anniversary, but also the achievements since the first conference in Tokyo in 1987 are tremendous. We have seen whole new groups of superconductive materials come into play, such as oxide superconductors with maximum Tc in excess of 100 K, carbon nanotubes, as well as the realization of new digital concepts from saturation logic to the ultra-fast RSFQ-logic. We have learned that superconductors not only show s-wave symmetries in the spatial arrangement of the order parameter, but also that d-wave dependence in oxide superconductors is now well accepted and can even be successfully applied to digital circuits. We are now used to operating SQUIDs in liquid nitrogen; fT sensitivity of SQUID magnetometers is not surprising anymore and can even be reached with oxide-superconductor based SQUIDs. Even frequency discriminating wide-band single photon detection with superconductive devices, and Josephson voltage standards with tens of thousands of junctions, nowadays belong to the daily life of advanced laboratories. ISEC has played a very important role in this development. The first conferences were held in 1987 and 1989 in Tokyo, and subsequently took place in Glasgow (UK), Boulder (USA), Nagoya (Japan), Berlin (Germany), Berkeley (USA), Osaka (Japan), Sydney (Australia), and in 2005 for the first time in the Netherlands. These conferences have provided platforms for the presentation of the research and development results of this community and for the vivid discussion of achievements and strategies for the further development of superconductive electronics. The 10th conference has played a very important role in this context. The results in laboratories show great potential and

  4. Factors related to alcohol use among 6th through 10th graders: the Sarasota County Demonstration Project.

    PubMed

    Eaton, Danice K; Forthofer, Melinda S; Zapata, Lauren B; Brown, Kelli R; Bryant, Carol A; Reynolds, Sherri T; McDermott, Robert J

    2004-03-01

    Alcohol consumption by youth can produce negative health outcomes. This study identified correlates of lifetime alcohol use, recent alcohol use, and binge drinking among youth in sixth through 10th grade (n = 2,004) in Sarasota County, Fla. Results from a closed-ended, quantitative survey revealed a range of personal, social, and environmental influences. The breadth of these influences supports the need for multifaceted, community-based interventions for effective prevention of youth alcohol use. This study was unique because it represents population-specific research in which community partners are using the findings to develop community-specific social marketing interventions to prevent underage drinking and promote alternative behaviors.

  5. Progression in Complexity: Contextualizing Sustainable Marine Resources Management in a 10th Grade Classroom

    ERIC Educational Resources Information Center

    Bravo-Torija, Beatriz; Jimenez-Aleixandre, Maria-Pilar

    2012-01-01

    Sustainable management of marine resources raises great challenges. Working with this socio-scientific issue in the classroom requires students to apply complex models about energy flow and trophic pyramids in order to understand that food chains represent transfer of energy, to construct meanings for sustainable resources management through…

  6. Food Services and Hospitality for 10th, 11th, and 12th Grades. Course Outline.

    ERIC Educational Resources Information Center

    Bucks County Technical School, Fairless Hills, PA.

    The outline describes the food services and hospitality course offered to senior high school students at the Bucks County Technical School. Specifically, the course seeks to provide students with a workable knowledge of food services and foster in them a sense of personal pride in quality workmanship. In addition to a statement of the philosophy…

  7. General Shop Competencies in Vocational Agriculture for 9th and 10th Grade Classes.

    ERIC Educational Resources Information Center

    Novotny, Ronald; And Others

    The document presents unit plans which offer lists of experiences and competencies to be learned for general shop occupations in vocational agriculture. The units include: (1) arc welding, (2) oxy-acetylene welding, (3) flat concrete, (4) concrete block, (5) lumber patterns and wood building materials, (6) metal fasteners, (7) wood adhesives, (8)…

  8. Interests of 5th through 10th Grade Students toward Human Biology

    ERIC Educational Resources Information Center

    Erten, Sinan

    2008-01-01

    This study investigated middle and high school students' interest in the subjects of human biology, specifically, "Human Health and Nutrition" and "Human Body and Organs." The study also investigated sources of their interest and factors that impact their interest, namely people with whom they interact and courses that…

  9. Using Diagrams versus Text for Spaced Restudy: Effects on Learning in 10th Grade Biology Classes

    ERIC Educational Resources Information Center

    Bergey, Bradley W.; Cromley, Jennifer G.; Kirchgessner, Mandy L.; Newcombe, Nora S.

    2015-01-01

    Background and Aim: Spaced restudy has been typically tested with written learning materials, but restudy with visual representations in actual classrooms is under-researched. We compared the effects of two spaced restudy interventions: A Diagram-Based Restudy (DBR) warm-up condition and a business-as-usual Text-Based Restudy (TBR) warm-up…

  10. Interests of 5th through 10th Grade Students Regarding Environmental Protection Issues

    ERIC Educational Resources Information Center

    Erten, Sinan

    2015-01-01

    This study investigates the extent of interest among middle and high school students in environmental protection issues along with the sources of their interests and factors that impact their interests, namely people with whom they interact and courses that they take related to the environment, science and technology. In addition, it is confirmed…

  11. An analysis of women's ways of knowing in a 10th grade integrated science classroom

    NASA Astrophysics Data System (ADS)

    Kochheiser, Karen Lynn

    All students can learn science, but how they learn science may differ. This study is about learning science and its relationship to gender. Women need to develop and establish connections with the objects of their learning and to establish a voice in the science classroom. Unfortunately, traditional science classrooms still view science as a male domain and tend to discourage women from pursuing higher levels of science or science-related careers. The ways that women learn science involve a complex set of interactions. To describe these interactions, this study explored how women's ways of knowing are represented in a high school science classroom. Nine women from an enriched integrated biology and earth science class contributed to this study by participating in individual and group interviews, questionnaires, journals, observations, and participant review of the interviews. The ways these women learn science were described in terms of Belenky, Clinchy, Goldberger, and Tarule's Women's Ways of Knowing: The Development of Self, Voice, and Mind (1997). The women's ways of learning in this classroom tended to be situational, with the women fitting different categories of knowing depending on the situation. Most of the women demonstrated periods when they wanted to be heard or tried to establish a voice in the classroom. The study helps to provide a theory for how women make choices in their learning of science and for their struggle to be successful in a male-dominated discipline. The women participating in this study gained an awareness of how they learn science and of how that awareness can make them even more successful in the classroom. This awareness of how women learn science will also be of great benefit to other teachers and educators as science reform continues to work toward a 'science for all'.

  12. Does STES-Oriented Science Education Promote 10th-Grade Students' Decision-Making Capability?

    ERIC Educational Resources Information Center

    Levy Nahum, Tami; Ben-Chaim, David; Azaiza, Ibtesam; Herskovitz, Orit; Zoller, Uri

    2010-01-01

    Today's society is continuously coping with sustainability-related complex issues in the Science-Technology-Environment-Society (STES) interfaces. In those contexts, the need and relevance of the development of students' higher-order cognitive skills (HOCS) such as question-asking, critical-thinking, problem-solving and decision-making…

  13. Research and Education: The Foundations for Rehabilitation Service Delivery--10th Annual National Rehabilitation Educators Conference April 6th-10th, 2010

    ERIC Educational Resources Information Center

    Chou, Chih Chin

    2010-01-01

    The theme of the 10th annual National Rehabilitation Educators conference emphasized research and teaching ideals in the areas of clinical supervision, evidence-based practice in rehabilitation, rehabilitation counseling process, effective rehabilitation counseling training strategies, accreditation and licensure, rehabilitation ethics, and…

  14. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  15. 1. Historic American Buildings Survey Joseph Hill, Photographer August 10th, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Historic American Buildings Survey Joseph Hill, Photographer August 10th, 1936 (Copied from small photo taken by survey members) OLD APARTMENT HOUSE - Jansonist Colony, Old Apartment House, Main Street, Bishop Hill, Henry County, IL

  16. 16. NORTHEAST CORNER VIEW OF 10TH AND 11TH FLOOR WINDOWS. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    16. NORTHEAST CORNER VIEW OF 10TH AND 11TH FLOOR WINDOWS. CORNER SHOWS THE DIAGONALLY FLUTED SPIRAL DESIGN OF THE RELIEF COLUMN. - Pacific Telephone & Telegraph Company Building, 1519 Franklin Street, Oakland, Alameda County, CA

  17. Predictors of intent to pursue a college health science education among high achieving minority 10th graders.

    PubMed

    Zebrak, Katarzyna A; Le, Daisy; Boekeloo, Bradley O; Wang, Min Qi

    Minority populations are underrepresented in fields of science, perhaps limiting scientific perspectives. Informed by recent studies using Social Cognitive Career Theory, this study examined whether three conceptual constructs: self-efficacy, perceived adult support, and perceptions of barriers, as well as several discrete and immutable variables, were associated with intent to pursue college science education in a sample (N = 134) of minority youth (70.1% female and 67.2% African American). A paper-and-pencil survey about pursuit of college science was administered to 10th graders with a B- or better grade point average from six high schools in an underserved community. Results indicated that the three conceptual constructs were bivariate correlates of intent to pursue college science education. Only perceived adult support and knowing whether a parent received college education were significant predictors in multivariate modeling. These results build on previous research and provide further insight into youth decision-making regarding pursuit of college science.

  18. Predictors of intent to pursue a college health science education among high achieving minority 10th graders

    PubMed Central

    Zebrak, Katarzyna A.; Le, Daisy; Boekeloo, Bradley O.; Wang, Min Qi

    2014-01-01

    Minority populations are underrepresented in fields of science, perhaps limiting scientific perspectives. Informed by recent studies using Social Cognitive Career Theory, this study examined whether three conceptual constructs: self-efficacy, perceived adult support, and perceptions of barriers, as well as several discrete and immutable variables, were associated with intent to pursue college science education in a sample (N = 134) of minority youth (70.1% female and 67.2% African American). A paper-and-pencil survey about pursuit of college science was administered to 10th graders with a B- or better grade point average from six high schools in an underserved community. Results indicated that the three conceptual constructs were bivariate correlates of intent to pursue college science education. Only perceived adult support and knowing whether a parent received college education were significant predictors in multivariate modeling. These results build on previous research and provide further insight into youth decision-making regarding pursuit of college science. PMID:25598654

  19. EPA presents award to Oregon 10th grader for work on marine oil spills

    EPA Pesticide Factsheets

    (Seattle-May 6, 2015) The U.S. Environmental Protection Agency, Region 10 is awarding the President's Environmental Youth Award to 10th grader Sahil Veeramoney for his development of a novel and efficient method to clean up marine oil spills. Veeramoney is

  20. MedlinePlus en español marks its 10th anniversary

    MedlinePlus

    medlineplus.gov/spanishanniversary.html: MedlinePlus en español Marks its 10th Anniversary. "... Spanish greatly expands NIH's ability to carry out its mission to communicate with the public."

  1. A PAIR OF 10TH CAVALRY AMBULANCES, PARKED NEXT TO ONE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    A PAIR OF 10TH CAVALRY AMBULANCES, PARKED NEXT TO ONE OF THE STABLE LABELED "M.D. 10." PHOTOGRAPH TAKEN CIRCA 1918 (FORT HUACHUCA HISTORICAL MUSEUM, PHOTOGRAPH 1918.00.00.135, PHOTOGRAPHER UNIDENTIFIED, CREATED BY AND PROPERTY OF THE UNITED STATES ARMY) - Fort Huachuca, Cavalry Stables, Clarkson Road, Sierra Vista, Cochise County, AZ

  2. County Data Book, 2000: Kentucky Kids Count. 10th Annual Edition.

    ERIC Educational Resources Information Center

    Albright, Danielle; Hall, Douglas; Mellick, Donna; Miller, Debra; Town, Jackie

    This 10th annual Kids Count data book reports on trends in the well-being of Kentucky's children. The statistical portrait is based on indicators in the areas of well being, child risk factors, and demography. The indicators are as follows: (1) healthy births, including birth weights and prenatal care; (2) maternal risk characteristics, including…

  3. The Relationship of Grade Span in 9th Grade to Math Achievement in High School

    ERIC Educational Resources Information Center

    West, John; Miller, Mary Lou; Myers, Jim; Norton, Timothy

    2015-01-01

    Purpose, Scope, and Method of Study: The purpose of this study was to determine if a correlation exists between grade span for ninth grade and gains in math achievement test scores in 10th grade and 12th grade. A quantitative, longitudinal, correlational research design was employed to investigate the research questions. The population was high…

  4. Making a Difference: Education at the 10th International Conference on Zebrafish Development and Genetics

    PubMed Central

    Liang, Jennifer O.; Pickart, Michael A.; Pierret, Chris; Tomasciewicz, Henry G.

    2012-01-01

    Abstract Scientists, educators, and students met at the 10th International Conference on Zebrafish Development and Genetics during the 2-day Education Workshop, chaired by Dr. Jennifer Liang and supported in part by the Genetics Society of America. The goal of the workshop was to share expertise, to discuss the challenges faced when using zebrafish in the classroom, and to articulate goals for expanding the impact of zebrafish in education. PMID:23244686

  5. From the corner of N. 10th St. and W. O'Neill ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    From the corner of N. 10th St. and W. O'Neill Ave. Looking west. Housing # 157-162 are on the right, building 156 is straight ahead, and buildings 153, 152, 116, and 115 are to the left. The golf course is directly west of these buildings. - Fitzsimons General Hospital, Bounded by East Colfax to south, Peoria Street to west, Denver City/County & Adams County Line to north, & U.S. Route 255 to east, Aurora, Adams County, CO

  6. Assessing Compensation Reform: Research in Support of the 10th Quadrennial Review of Military Compensation

    DTIC Science & Technology

    2008-01-01

    heterogeneous. Nested Logit Specification: The active/reserve dynamic retention model describes individual behavior as a series of choices regarding... for error correlation between the reserve and civilian alternatives, we modify the model to a nested logit form for the reserve or civilian choice... complex job-tenure decisions in circumstances in which current choices affect future

  7. 14. CLOSEUP VIEW OF THE 10TH AND 11TH FLOOR WINDOWS. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. CLOSE-UP VIEW OF THE 10TH AND 11TH FLOOR WINDOWS. WINDOWS HAVE WHITE TERRA COTTA SILLS, HEADS AND MULLIONS. ARCHES ARE OF TERRA COTTA INCLUDING ORNAMENTATION ABOVE THE 11TH FLOOR WINDOWS. CIRCULAR ORNAMENTATIONS BETWEEN ARCHES ARE TERRA COTTA PAINTED IN BRONZE COLOR. LOUVERS ON THE WINDOWS ARE NOT PART OF THE ORIGINAL DESIGN. THIS IS THE FRONT ELEVATION. - Pacific Telephone & Telegraph Company Building, 1519 Franklin Street, Oakland, Alameda County, CA

  8. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  9. Collaborating to Move Research Forward: Proceedings of the 10th Annual Bladder Cancer Think Tank.

    PubMed

    Kamat, Ashish M; Agarwal, Piyush; Bivalacqua, Trinity; Chisolm, Stephanie; Daneshmand, Sia; Doroshow, James H; Efstathiou, Jason A; Galsky, Matthew; Iyer, Gopa; Kassouf, Wassim; Shah, Jay; Taylor, John; Williams, Stephen B; Quale, Diane Zipursky; Rosenberg, Jonathan E

    2016-04-27

    The 10th Annual Bladder Cancer Think Tank was hosted by the Bladder Cancer Advocacy Network and brought together a multidisciplinary group of clinicians, researchers, and industry representatives to advance bladder cancer research efforts. Think Tank expert panels, group discussions, and networking opportunities helped generate ideas and strengthen collaborations between researchers and physicians across disciplines and between institutions. Interactive panel discussions addressed a variety of timely issues: 1) data sharing, privacy, and social media; 2) improving patient navigation through therapy; 3) promising developments in immunotherapy; and 4) moving bladder cancer research from bench to bedside. Lastly, early career researchers presented their bladder cancer studies and had opportunities to network with leading experts.

  10. [Contribution of the 10th International Classification of Diseases to pediatric and adolescent psychiatry].

    PubMed

    Vojtík, V

    1993-12-01

    The 10th revision of the classification of mental and behavioural disorders is, owing to its description of clinical symptoms and diagnostic criteria, more accurate, and it enriches the activities of departments of child and adolescent psychiatry. Diagnostics, therapy, and prevention profit not only from the sections dealing with newly conceived disorders that begin in childhood and adolescence but also from other sections where problems relating to children and adolescents are pointed out. The Czech translation updates the clinical pictures given in our textbook of child psychiatry published in 1963 and thus partly replaces a modern Czech textbook of child and adolescent psychiatry that has not yet been published.

  11. From the corner of E. Mccloskey Ave. and N. 10th ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    From the corner of E. Mccloskey Ave. and N. 10th St., looking west with building 135 (gas station) on the left. Beyond it is building 119 and to the right of 119 is the gable end of the north side of 120. Beyond and perpendicular to building 120 are 118 and 117. - Fitzsimons General Hospital, Bounded by East Colfax to south, Peoria Street to west, Denver City/County & Adams County Line to north, & U.S. Route 255 to east, Aurora, Adams County, CO

  12. Health Policy Basics: Implementation of the International Classification of Disease, 10th Revision.

    PubMed

    Outland, Brian; Newman, Mary M; William, Margo J

    2015-10-06

    The International Classification of Diseases (ICD) standardizes diagnostic codes into meaningful criteria to enable the storage and retrieval of information regarding patient care. Whereas other countries have been using ICD, 10th Revision (ICD-10), for years, the United States will transition from ICD, Ninth Revision, Clinical Modification (ICD-9-CM), to ICD-10, on 1 October 2015. This transition is one of the largest and most technically challenging changes that the medical community has experienced in the past several decades. This article outlines the implications of moving to ICD-10 and recommends resources to facilitate the transition.

  13. Epigenetics in autoimmune disorders: highlights of the 10th Sjögren's syndrome symposium.

    PubMed

    Lu, Qianjin; Renaudineau, Yves; Cha, Seunghee; Ilei, Gabor; Brooks, Wesley H; Selmi, Carlo; Tzioufas, Athanasios; Pers, Jacques-Olivier; Bombardieri, Stefano; Gershwin, M Eric; Gay, Steffen; Youinou, Pierre

    2010-07-01

    During the 10th International Symposium on Sjögren's Syndrome, held in Brest, France, from October 1-3, 2009 (http://www.sjogrensymposium-brest2009.org), the creation of an international epigenetic autoimmune group was proposed to establish gold standards and to launch collaborative studies. During this "epigenetics session", leading experts in the field presented and discussed the most recent developments on this topic in Sjögren's Syndrome research. The "Brest epigenetic task force" was born and has scheduled a meeting in Ljubljana, Slovenia, during the 7th Autoimmunity Congress in May 2010. The following is a report of that session.

  14. EDITORIAL: The 10th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2011) The 10th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2011)

    NASA Astrophysics Data System (ADS)

    Kim, Seung-Woo

    2012-05-01

    Measurement and instrumentation have long played an important role in production engineering, through supporting both the traditional field of manufacturing and the new field of micro/nanotechnology. Papers published in this special feature were selected and updated from those presented at The 10th International Symposium on Measurement Technology and Intelligent Instruments (ISMTII 2011) held at KAIST, Daejeon, South Korea, on 29 June-2 July 2011. ISMTII 2011 was organized by ICMI (The International Committee on Measurements and Instrumentation), Korean Society for Precision Engineering (KSPE), Japan Society for Precision Engineering (JSPE), Chinese Society for Measurement (CSM) and KAIST. The Symposium was also supported by the Korea BK21 Valufacture Institute of Mechanical Engineering at KAIST. A total of 225 papers, including four keynote papers, were presented at ISMTII 2011, covering a wide range of topics, including micro/nanometrology, precision measurement, online & in-process measurement, surface metrology, optical metrology & image processing, biomeasurement, sensor technology, intelligent measurement & instrumentation, uncertainty, traceability & calibration, and signal processing algorithms. The organizing members recommended publication of updated versions of some of the best ISMTII 2011 papers in this special feature of Measurement Science and Technology. As guest editor, I believe that this special feature presents the newest information on advances in measurement technology and intelligent instruments from basic research to applied systems for production engineering. I would like to thank all the authors for their great contributions to this special feature and the referees for their careful reviews of the papers. I would also like to express our thanks and appreciation to the publishing staff of MST for their dedicated efforts that have made this special feature possible.

  15. The Insertion of Local Wisdom into Instructional Materials of Bahasa Indonesia for 10th Grade Students in Senior High School

    ERIC Educational Resources Information Center

    Anggraini, Purwati; Kusniarti, Tuti

    2015-01-01

    This current study aimed at investigating Bahasa Indonesia textbooks with regard to local wisdom issues. The preliminary study was utilized as the basis for developing instructional materials of Bahasa Indonesia that are rich in characters. Bahasa Indonesia instructional materials containing local wisdom not only equip students with broad…

  16. Students Left behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools

    ERIC Educational Resources Information Center

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2010-01-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools; the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policymakers about how to measure high school graduation rates. We use…

  17. Students Left Behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools.

    PubMed

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2010-06-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas's official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates.

  18. Students Left Behind: Measuring 10th to 12th Grade Student Persistence Rates in Texas High Schools

    PubMed Central

    Domina, Thurston; Ghosh-Dastidar, Bonnie; Tienda, Marta

    2012-01-01

    The No Child Left Behind Act requires states to publish high school graduation rates for public schools and the U.S. Department of Education is currently considering a mandate to standardize high school graduation rate reporting. However, no consensus exists among researchers or policy-makers about how to measure high school graduation rates. In this paper, we use longitudinal data tracking a cohort of students at 82 Texas public high schools to assess the accuracy and precision of three widely-used high school graduation rate measures: Texas’s official graduation rates, and two competing estimates based on publicly available enrollment data from the Common Core of Data. Our analyses show that these widely-used approaches yield inaccurate and highly imprecise estimates of high school graduation and persistence rates. We propose several guidelines for using existing graduation and persistence rate data and argue that a national effort to track students as they progress through high school is essential to reconcile conflicting estimates. PMID:23077375

  19. Successes with Reversing the Negative Student Attitudes Developed in Typical Biology Classes for 8th and 10th Grade Students

    ERIC Educational Resources Information Center

    Hacieminoglu, Esme; Ali, Mohamed Moustafa; Oztas, Fulya; Yager, Robert E.

    2016-01-01

    The purpose of this study is to compare changes in attitudes toward the study of biology among students in classes taught by five biology teachers who experienced an Iowa Chautauqua workshop and two non-Chautauqua teachers who had no experience with any professional development program. The results indicated that there are significant…

  20. A Cross-Analysis of the Mathematics Teacher's Activity: An Example in a French 10th-Grade Class

    ERIC Educational Resources Information Center

    Robert, Aline; Rogalski, Janine

    2005-01-01

    The purpose of this paper is to contribute to the debate about how to tackle the issue of "the teacher in the teaching/learning process", and to propose a methodology for analysing the teacher's activity in the classroom, based on concepts used in the fields of the didactics of mathematics as well as in cognitive ergonomics. This…

  1. Latent Profiles of Reading and Language and Their Association with Standardized Reading Outcomes in Kindergarten through 10th Grade

    ERIC Educational Resources Information Center

    Foorman, Barbara R.; Petscher, Yaacov; Stanley, Christopher

    2016-01-01

    The idea of targeting reading instruction to profiles of students' strengths and weaknesses in component skills is central to teaching. However, these profiles are often based on unreliable descriptions of students' oral reading errors, text reading levels, or learning profiles. This research utilized latent profile analysis (LPA) to examine…

  2. New directions in research: report from the 10th International Conference on AIDS.

    PubMed Central

    Berger, P B

    1995-01-01

    Research findings presented at the 10th International Conference on AIDS, held in Yokohama, Japan, in August 1994, indicate that few advances have been made in standard antiretroviral therapy for HIV infection. The perinatal administration of AZT (zidovudine) was reported to reduce transmission of HIV from mother to child, and its use in combination with acyclovir appears to improve survival among patients with advanced disease. Other research has focused on asymptomatic patients with long-standing HIV infection. Their survival may be related to the activity of cell antiviral factor, a cytokine produced by CD8+ cells. In gene therapy research, one approach involved the genetic alteration of target cells to enable them to render the virus harmless. A second approach consisted of enhancing the function of CD8+ cells to allow them to compensate for dysfunctional CD4+ cells. The author believes that gene therapy may offer the greatest hope of an effective treatment for HIV infection. PMID:7780908

  3. 10th World IHEA and ECHE Joint Congress: health economics in the age of longevity.

    PubMed

    Jakovljevic, Mihajlo B; Getzen, Thomas E; Torbica, Aleksandra; Anegawa, Tomofumi

    2014-12-01

    The 10th consecutive World Health Economics conference was organized jointly by the International Health Economics Association and the European Conference on Health Economics and took place at Trinity College, Dublin, Ireland, in July 2014. It attracted broad participation from the global professional community devoted to health economics teaching, research, and policy applications. It provided a forum for lively discussion of hot contemporary issues such as health expenditure projections, reimbursement regulations, health technology assessment, universal insurance coverage, demand and supply of hospital services, prosperity diseases, population aging, and many others. The high-profile debate fostered by this meeting is likely to inspire further methodological advances worldwide and the spread of evidence-based policy practice from the OECD toward emerging markets.

  4. Tuskegee Bioethics Center 10th anniversary presentation: "Commemorating 10 years: ethical perspectives on origin and destiny".

    PubMed

    Prograis, Lawrence J

    2010-08-01

    More than 70 years have passed since the beginning of the Public Health Service syphilis study in Tuskegee, Alabama, and it has been over a decade since President Bill Clinton formally apologized for it and held a ceremony for the Tuskegee study participants. The official launching of the Tuskegee University National Center for Bioethics in Research and Health Care took place two years after President Clinton's apology. How might we fittingly discuss the Center's 10th Anniversary and the topic 'Commemorating 10 Years: Ethical Perspectives on Origin and Destiny'? Over a decade ago, a series of writers, many of them African Americans, wrote a text entitled 'African-American Perspectives on Biomedical Ethics'; their text was partly responsible for a prolonged reflection by others to produce a subsequent work, 'African American Bioethics: Culture, Race and Identity'. What is the relationship between the discipline of bioethics and African American culture? This and related questions are explored in this commentary.

  5. Space Commerce 1994 Forum: The 10th National Space Symposium. Proceedings report

    NASA Astrophysics Data System (ADS)

    Lipskin, Beth Ann; Patterson, Sara; Aragon, Larry; Brescia, David A.; Flannery, Jack; Mossey, Roberty; Regan, Christopher; Steeby, Kurt; Suhr, Stacy; Zimkas, Chuck

    1994-04-01

    The theme of the 10th National Space Symposium was 'New Windows of Opportunity'. These proceedings cover the following: Business Trends in High Tech Commercialization; How to Succeed in Space Technology Business -- Making Dollars and Sense; Obstacles and Opportunities to Success in Technology Commercialization; NASA's Commercial Technology Mission -- a New Way of Doing Business: Policy and Practices; Field Center Practices; Practices in Action -- A New Way: Implementation and Business Opportunities; Space Commerce Review; Windows of Opportunity; the International Space Station; Space Support Forum; Spacelift Update; Competitive Launch Capabilities; Supporting Life on Planet Earth; National Security Space Issues; NASA in the Balance; Earth and Space Observations -- Did We Have Cousins on Mars?; NASA: A New Vision for Science; and Space Technology Hall of Fame.

  6. Collaborating to Move Research Forward: Proceedings of the 10th Annual Bladder Cancer Think Tank

    PubMed Central

    Kamat, Ashish M.; Agarwal, Piyush; Bivalacqua, Trinity; Chisolm, Stephanie; Daneshmand, Sia; Doroshow, James H.; Efstathiou, Jason A.; Galsky, Matthew; Iyer, Gopa; Kassouf, Wassim; Shah, Jay; Taylor, John; Williams, Stephen B.; Quale, Diane Zipursky; Rosenberg, Jonathan E.

    2016-01-01

    The 10th Annual Bladder Cancer Think Tank was hosted by the Bladder Cancer Advocacy Network and brought together a multidisciplinary group of clinicians, researchers, and industry representatives to advance bladder cancer research efforts. Think Tank expert panels, group discussions, and networking opportunities helped generate ideas and strengthen collaborations between researchers and physicians across disciplines and between institutions. Interactive panel discussions addressed a variety of timely issues: 1) data sharing, privacy and social media; 2) improving patient navigation through therapy; 3) promising developments in immunotherapy; and 4) moving bladder cancer research from bench to bedside. Lastly, early career researchers presented their bladder cancer studies and had opportunities to network with leading experts. PMID:27376139

  7. Space Commerce 1994 Forum: The 10th National Space Symposium. Proceedings report

    NASA Technical Reports Server (NTRS)

    Lipskin, Beth Ann (Editor); Patterson, Sara (Editor); Aragon, Larry (Editor); Brescia, David A. (Editor); Flannery, Jack (Editor); Mossey, Roberty (Editor); Regan, Christopher (Editor); Steeby, Kurt (Editor); Suhr, Stacy (Editor); Zimkas, Chuck (Editor)

    1994-01-01

    The theme of the 10th National Space Symposium was 'New Windows of Opportunity'. These proceedings cover the following: Business Trends in High Tech Commercialization; How to Succeed in Space Technology Business -- Making Dollars and Sense; Obstacles and Opportunities to Success in Technology Commercialization NASA's Commercial Technology Mission -- a New Way of Doing Business: Policy and Practices; Field Center Practices; Practices in Action -- A New Way: Implementation and Business Opportunities; Space Commerce Review; Windows of Opportunity; the International Space Station; Space Support Forum; Spacelift Update; Competitive Launch Capabilities; Supporting Life on Planet Earth; National Security Space Issues; NASA in the Balance; Earth and Space Observations -- Did We Have Cousins on Mars?; NASA: A New Vision for Science; and Space Technology Hall of Fame.

  8. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  9. Report on the 10th International Conference of the Asian Clinical Oncology Society (ACOS 2012).

    PubMed

    Kim, Yeul Hong; Yang, Han-Kwang; Kim, Tae Won; Lee, Jung Shin; Seong, Jinsil; Lee, Woo Yong; Ahn, Yong Chan; Lim, Ho Yeong; Won, Jong-Ho; Park, Kyong Hwa; Cho, Kyung Sam

    2013-04-01

    The 10th International Conference of the Asian Clinical Oncology Society (ACOS 2012), in conjunction with the 38th Annual Meeting of the Korean Cancer Association, was held on June 13 to 15, 2012, at the COEX Convention and Exhibition Center in Seoul, Korea. ACOS has a 20-year history starting from the first conference in Osaka, Japan, which was chaired by Prof. Tetsuo Taguchi, and the ACOS conferences have since been held in Asian countries every 2 years. Under the theme of "Work Together to Make a Difference for Cancer Therapy in Asia", the 10th ACOS was prepared to discuss various subjects through a high-quality academic program, exhibition, and social events. The ACOS 2012 Committee was composed of the ACOS Organizing Committee, Honorary Advisors, Local Advisors, and ACOS 2012 Organizing Committee. The comprehensive academic program had a total of 92 sessions (3 Plenary Lectures, 1 Award Lecture, 1 Memorial Lecture, 9 Special Lectures, 15 Symposia, 1 Debate & Summary Session, 1 Case Conference, 19 Educational Lectures, 1 Research & Development Session, 18 Satellite Symposia, 9 Meet the Professors, 14 Oral Presentations), and a total of 292 presentations were delivered throughout the entire program. Among the Free Papers, 462 research papers (110 oral presentations and 352 poster presentations) were selected for presentation. This conference was the largest of all ACOS conferences in its scale, with around 1,500 participants from 30 countries. Furthermore, despite strict new financial policies and requirements governing fundraising alongside global economic stagnation, a total of 14 companies participated as sponsors and an additional 35 companies purchased 76 exhibition booths. Lastly, the conference social events provided attendees with a variety of opportunities to experience and enjoy Korea's rich culture and traditions during the Opening Ceremony, Welcome Reception, Invitee Dinner, Banquet, and Closing Ceremony.
Overall, ACOS 2012 reinforced and promoted

  10. Benchmarking: The New Tool.

    ERIC Educational Resources Information Center

    Stralser, Steven

    1995-01-01

    This article suggests that benchmarking, the process of comparing one's own operation with the very best, can be used to make improvements in colleges and universities. Six steps are outlined: determining what to benchmark, forming a team, discovering who to benchmark, collecting and analyzing data, using the data to redesign one's own operation,…

  11. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  12. The 8th-10th January 2009 snowfalls: a case of Mediterranean warm advection event

    NASA Astrophysics Data System (ADS)

    Aguado, F.; Ayensa, E.; Barriga, M.; Del Hoyo, J.; Fernández, A.; Garrido, N.; Martín, A.; Martín, F.; Roa, I.; Martínez, A.; Pascual, R.

    2009-09-01

    From 8th to 10th of January 2009, significant snowfalls were reported in many areas of the Iberian Peninsula and the Balearic Islands. This event was very important from both the meteorological and the social-impact points of view. The snow affected many zones, especially the regions of Madrid, Castilla & León and Castilla-La Mancha (Spanish central plateau), with persistent and thick solid precipitation. Up to twenty-five centimetres of snow were reported in some places. On 9th of January the snowfalls caused great social and media impact because they took place in the early hours in the Madrid metropolitan area, affecting both air traffic and land transport. The "Madrid-Barajas" airport was closed and the city was paralysed for several hours. A study of this situation appears in the poster. The snowstorm was characterized by the previous irruption of a European continental polar air mass, which subsequently interacted with a wet and warm air mass of Mediterranean origin, all preceded by low-level easterly flows. This type of snowfall is called "warm advection". These winter situations are very efficient from the precipitation point of view, generating significant snowfalls and affecting a lot of areas.

  13. [Report of the 10th Annual Meeting of the Chinese society of Clinical Oncology].

    PubMed

    Cho, William Chi-Shing

    2008-03-01

    The 10th Annual Meeting of the Chinese Society of Clinical Oncology (CSCO) was held on 19-23 September 2007 in Harbin. The theme of this conference was "putting standard multidisciplinary cancer management into practice", and special reports on standard multidisciplinary management of various cancers were presented. Over 3,500 clinical oncologists and scientists participated in the 2007 CSCO Annual Meeting, where more than ten top international experts were invited to exchange valuable experiences with the delegates. The programs consisted of an Education Session, a Satellite Symposium, and a Meet the Professor Session. The latest research results were presented as oral presentations and posters at the congress. Several hotspots were particularly highlighted in this report, including innovative radiotherapy and chemotherapy methods, research on molecular targets, and clinical trials of targeted therapies such as endostatin, volociximab, cetuximab, bevacizumab and temozolomide. The remarkable research results on anti-cancer Chinese medicine, cancer screening, and prognosis were also introduced. This article calls attention to some hot topics in the program that are both new and noteworthy, and it may serve as a highlight of this important international cancer research meeting for clinical oncologists and scientists.

  14. FOREWORD: 10th Anglo-French Physical Acoustics Conference (AFPAC 2011)

    NASA Astrophysics Data System (ADS)

    Lhémery, Alain; Saffari, Nader

    2012-03-01

    The Anglo-French Physical Acoustics Conference (AFPAC) had its 10th annual meeting in Villa Clythia, Fréjus, France, from 19-21 January 2011. This series of meetings is a collaboration between the Physical Acoustics Group (PAG) of the Institute of Physics and the Groupe d'Acoustique Physique, Sous-marine et UltraSonore (GAPSUS) of the Société Française d'Acoustique. The conference has its loyal supporters whom we wish to thank. It is their loyalty that has made this conference a success. AFPAC alternates between the UK and France and its format has been designed to ensure that it remains a friendly meeting of very high scientific quality, offering a broad spectrum of subjects, welcoming young researchers and PhD students and giving them the opportunity to give their first presentations in an 'international' conference, but with limited pressure. For the third consecutive year AFPAC is followed by the publication of its proceedings in the form of 18 peer-reviewed papers, which cover the most recent research developments in the field of Physical Acoustics in the UK and France. Alain Lhémery CEA, France Nader Saffari UCL, United Kingdom

  15. 10th annual meeting of the Safety Pharmacology Society: an overview.

    PubMed

    Cavero, Icilio

    2011-03-01

    The 10th annual meeting of the Safety Pharmacology (SP) Society covered numerous topics of educational and practical research interest. Biopolymers - the theme of the keynote address - were presented as essential components of medical devices, diagnostic tools, biosensors, human tissue engineering and pharmaceutical formulations for optimized drug delivery. Toxicology and SP investigators - the topic of the Distinguished Service Award Lecture - were encouraged to collaborate in the development of SP technologies and protocols applicable to toxicology studies. Pharmaceutical companies, originally organizations bearing all risks for developing their portfolios, are increasingly moving towards fully integrated networks which outsource core activities (including SP studies) to large contract research organizations. Future nonclinical data are now expected to be of such high quality and predictability power that they may obviate the need for certain expensive and time-consuming clinical investigations. In this context, SP is called upon to extend its risk assessment purview to areas which currently are not systematically covered, such as drug-induced QRS interval prolongation, negative emotions and feelings (e.g., depression), and minor chronic cardiovascular and metabolic changes (e.g., as produced by drugs for type 2 diabetes) which can be responsible for delayed morbidity and mortality. The recently approved ICH S9 guidance relaxes the traditional regulatory SP package in order to accelerate the clinical access to anticancer drugs for patients with advanced malignancies. The novel FDA 'Animal Rule' guidance proposes that for clinical candidates with well-understood toxicities, marketing approval may be granted exclusively on efficacy data generated in animal studies as human clinical investigations for these types of drugs are either unfeasible or unethical. 
In conclusion, the core messages of this meeting are that SP should consistently operate according to the 'fit

  16. Report: Combustion Byproducts and Their Health Effects: Summary of the 10th International Congress

    PubMed Central

    Dellinger, Barry; D'Alessio, Antonio; D'Anna, Andrea; Ciajolo, Anna; Gullett, Brian; Henry, Heather; Keener, Mel; Lighty, JoAnn; Lomnicki, Slawomir; Lucas, Donald; Oberdörster, Günter; Pitea, Demetrio; Suk, William; Sarofim, Adel; Smith, Kirk R.; Stoeger, Tobias; Tolbert, Paige; Wyzga, Ron; Zimmermann, Ralf

    2008-01-01

    Abstract The 10th International Congress on Combustion Byproducts and their Health Effects was held in Ischia, Italy, from June 17-20, 2007. It was sponsored by the US NIEHS, NSF, Coalition for Responsible Waste Incineration (CRWI), and Electric Power Research Institute (EPRI). The congress focused on the origin, characterization, and health impacts of combustion-generated fine and ultrafine particles; emissions of mercury and dioxins; and the development/application of novel analytical/diagnostic tools. The consensus of the discussion was that particle-associated organics, metals, and persistent free radicals (PFRs) produced by combustion sources are the likely source of the observed health impacts of airborne PM, rather than simple physical irritation by the particles. Ultrafine particle-induced oxidative stress is a likely progenitor of the observed health impacts, but important biological and chemical details and possible catalytic cycles remain unresolved. Other key conclusions were: (1) in urban settings, 70% of airborne fine particles are a result of combustion emissions and 50% are due to primary emissions from combustion sources; (2) in addition to soot, combustion produces one, possibly two, classes of nanoparticles with mean diameters of ~10 nm and ~1 nm; (3) the most common metrics used to describe particle toxicity, viz. surface area, sulfate concentration, total carbon, and organic carbon, cannot fully explain observed health impacts; (4) metals contained in combustion-generated ultrafine and fine particles mediate formation of toxic air pollutants such as PCDD/F and PFRs; (5) the combination of metal-containing nanoparticles, organic carbon compounds, and PFRs can lead to a cycle generating oxidative stress in exposed organisms. PMID:22476005

  17. Eruption Mechanism of the 10th Century Eruption in Baitoushan Volcano, China/North Korea

    NASA Astrophysics Data System (ADS)

    Shimano, T.; Miyamoto, T.; Nakagawa, M.; Ban, M.; Maeno, F.; Nishimoto, J.; Jien, X.; Taniguchi, H.

    2005-12-01

    Baitoushan volcano, China/North Korea, is one of the most active volcanoes in Northeastern Asia, and its 10th century eruption was the most voluminous eruption in the world in the last 2000 years. The sequence of the eruption, as reconstructed recently, consists mainly of 6 units of deposits (Miyamoto et al., 2004): plinian airfall (unit B), large pyroclastic flow (unit C), plinian airfall with some intra-plinian pyroclastic flows (unit D), sub-plinian airfall (unit E), and large pyroclastic flow (unit F) with base surge (unit G), in ascending order. The magma erupted during the steady eruption in the earlier phase was comendite (units B-C; Phase 1), whereas the magma of the fluctuating eruptions in the later phase was characterized by trachyte to trachyandesite with various amounts of comendite (units D-G; Phase 2). The wide variety in composition and the occurrence of banded pumices strongly indicate mixing or mingling of the two magmas just prior to or during the eruption. The initial water content had been determined for the comendite by melt inclusion analyses (ca. 5.2 wt.%; Horn and Schmincke, 2000). Although the initial water content of the trachytic magma has not been correctly determined yet, the reported water contents of trachytic melt inclusions are lower (3-4 wt.%) than those of the comenditic melt (Horn and Schmincke, 2000). We investigated juvenile materials of the eruption sequentially in terms of vesicularity, H2O content in matrix glass, and textural characteristics. The vesicularity of the pumices is generally high (>0.75) for all units. The residual water contents of the comenditic pumices of Phase 1 are relatively uniform (1.6 wt.%), whereas those of the trachytic scoria of Phase 2 and the gray pumices of Phase 1 are low (ca. 0.7-1.3 wt.%). These facts may indicate that the difference in initial water content, rather than the ascent mechanism of the magma, controlled the steadiness or fluctuation in eruption styles and the mass flux during the eruption.

  18. 77 FR 21773 - Filing Dates for The New Jersey Special Election in The 10th Congressional District

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-11

    ... Filing Dates for The New Jersey Special Election in The 10th Congressional District AGENCY: Federal Election Commission. ACTION: Notice of filing dates for special election. SUMMARY: New Jersey has scheduled... the New Jersey Special Primary and Special General Elections shall file a 12-day Pre-Primary Report...

  19. Proceedings of the International Conference on Mobile Learning 2014. (10th, Madrid, Spain, February 28-March 2, 2014)

    ERIC Educational Resources Information Center

    Sánchez, Inmaculada Arnedillo, Ed.; Isaías, Pedro, Ed.

    2014-01-01

    These proceedings contain the papers of the 10th International Conference on Mobile Learning 2014, which was organised by the International Association for Development of the Information Society, in Madrid, Spain, February 28-March 2, 2014. The Mobile Learning 2014 International Conference seeks to provide a forum for the presentation and…

  20. Graduate Students Lend Their Voices: Reflections on the 10th Seminar in Health and Environmental Education Research

    ERIC Educational Resources Information Center

    Russell, Joshua; White, Peta; Fook, Tanya Chung Tiam; Kayira, Jean; Muller, Susanne; Oakley, Jan

    2010-01-01

    Graduate students were invited by their faculty advisors to attend the 10th Seminar in Health and Environmental Education Research. Afterward, they were encouraged to comment on their experiences, involvement, and positioning. Two main authors developed survey questions and retrieved, analyzed, and synthesized the responses of four other graduate…

  1. Students' Transition Experience in the 10th Year of Schooling: Perceptions That Contribute to Improving the Quality of Schools

    ERIC Educational Resources Information Center

    Torres, Ana Cristina; Mouraz, Ana

    2015-01-01

    The study followed students in their 10th year of schooling that entered a new secondary education school in order to examine their perceptions of their previous schools' work and of its relationship with the difficulties they experience when in the transition. The analysis of 155 completed questionnaires of previous students of nine basic…

  2. 3 CFR 8938 - Proclamation 8938 of March 1, 2013. 10th Anniversary of the United States Department of Homeland...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... of the United States Department of Homeland Security 8938 Proclamation 8938 Presidential Documents Proclamations Proclamation 8938 of March 1, 2013 Proc. 8938 10th Anniversary of the United States Department of Homeland Security By the President of the United States of America A Proclamation Ten years ago, when...

  3. Risk Communication and Public Education in Edmonton, Alberta, Canada on the 10th Anniversary of the "Black Friday" Tornado

    ERIC Educational Resources Information Center

    Blanchard-Boehm, R. Denise; Cook, M. Jeffrey

    2004-01-01

    In July 1997, on the 10th anniversary of the great "Black Friday" Tornado, city officials of Edmonton, the print and broadcast media, agencies dealing in emergency management, and the national weather organisation recounted stories of the 1987, F5 tornado that struck Edmonton on a holiday weekend. The information campaign also presented…

  4. Selected Papers from the International Conference on College Teaching and Learning (10th, Jacksonville, Florida, April 1999).

    ERIC Educational Resources Information Center

    Chambers, Jack A., Ed.

    These 20 papers were selected from those presented at the 10th International Conference on College Teaching and Learning. Papers have the following titles and authors: (1) "Case It! A Project to Integrate Collaborative Case-Based Learning into International Undergraduate Biology Curricula" (Bergland, Klyczek, Lundeberg, Mogen, Johnson); (2) "The…

  5. Operationalizing the Rubric: The Effect of Benchmark Selection on the Assessed Quality of Writing.

    ERIC Educational Resources Information Center

    Popp, Sharon E. Osborn; Ryan, Joseph M.; Thompson, Marilyn S.; Behrens, John T.

    The purposes of this study were to investigate the role of benchmark writing samples in direct assessment of writing and to examine the consequences of differential benchmark selection with a common writing rubric. The influences of discourse and grade level were also examined within the context of differential benchmark selection. Raters scored…

  6. The Nature and Predictive Validity of a Benchmark Assessment Program in an American Indian School District

    ERIC Educational Resources Information Center

    Payne, Beverly J. R.

    2013-01-01

    This mixed methods study explored the nature of a benchmark assessment program and how well the benchmark assessments predicted End-of-Grade (EOG) and End-of-Course (EOC) test scores in an American Indian school district. Five major themes were identified and used to develop a Dimensions of Benchmark Assessment Program Effectiveness model:…

  7. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  8. Benchmarks in Management Training.

    ERIC Educational Resources Information Center

    Paddock, Susan C.

    1997-01-01

    Data were collected from 12 states with Certified Public Manager training programs to establish benchmarks. The 38 benchmarks were in the following areas: program leadership, stability of administrative/financial support, consistent management philosophy, administrative control, participant selection/support, accessibility, application of…

  9. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  10. Fortified Settlements of the 9th and 10th Centuries ad in Central Europe: Structure, Function and Symbolism.

    PubMed

    Herold, Hajnalka

    2012-11-01

    The structure, function(s) and symbolism of early medieval (9th-10th centuries ad) fortified settlements from central Europe, in particular today's Austria, Hungary, Czech Republic and Slovakia, are examined in this paper. It offers an overview of the current state of research together with new insights based on analysis of the site of Gars-Thunau in Lower Austria. Special emphasis is given to the position of the fortified sites in the landscape, to the elements of the built environment and their spatial organisation, as well as to graves within the fortified area. The region under study was situated on the SE border of the Carolingian (and later the Ottonian) Empire, with some of the discussed sites lying in the territory of the 'Great Moravian Empire' in the 9th and 10th centuries. These sites can therefore provide important comparative data for researchers working in other parts of the Carolingian Empire and neighbouring regions.

  11. Evaluating the Ability of Drama-Based Instruction To Influence the Socialization of Tenth Grade English Students Labeled as "Low Ability."

    ERIC Educational Resources Information Center

    Danielson, Trygve R.

    A study investigated the effect of drama-based instruction on the learning of social skills by students labeled as "low ability" in a 10th-grade required English class in a rural high school. Two separate classes of "low ability" 10th-grade English students in Janesville, Wisconsin, were presented with social skills training…

  12. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  13. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals (i.e., concentrations presumed to be nonhazardous to the biota) in environmental media (water, sediment, soil, food, etc.). While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk

  14. Predicting Long-Term College Success through Degree Completion Using ACT[R] Composite Score, ACT Benchmarks, and High School Grade Point Average. ACT Research Report Series, 2012 (5)

    ERIC Educational Resources Information Center

    Radunzel, Justine; Noble, Julie

    2012-01-01

    This study compared the effectiveness of ACT[R] Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours earned), degree completion, and cumulative grade point average (GPA) at 150% of normal time to degree…

  15. Proceedings of the Navy Symposium on Aeroballistics (10th) Held at the Sheraton Motor Inn, Fredericksburg, Virginia, on 15-16-17 July 1975. Volume 2

    DTIC Science & Technology

    1975-07-17

    Technical Report AFATL-TR-73-l II, Air Force Armament Laboratory, May 1973. ... 10th Navy Symposium on Aeroballistics, Vol. 2 ... 13. Miko, R. J., and ... the boiling temperature ... 10th Navy Symposium on Aeroballistics, Vol. 2 ...

  16. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  17. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (Jezebel plutonium critical assembly), and its k-effective values were compared with those of the KENO and MCNP codes.

  18. Benchmarking TENDL-2012

    NASA Astrophysics Data System (ADS)

    van der Marck, S. C.; Koning, A. J.; Rochman, D. A.

    2014-04-01

    The new release of the TENDL nuclear data library, TENDL-2012, was tested by performing many benchmark calculations. Close to 2000 criticality safety benchmark cases were used, as well as many shielding benchmark cases. All the runs could be compared with similar runs based on the nuclear data libraries ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1. Many of the criticality safety results obtained with TENDL-2012 are close to those for the other libraries; in particular, the results for the thermal-spectrum cases with LEU fuel are good. Nevertheless, there is a fair number of cases for which the TENDL-2012 results are not as good as those of the other libraries; notably, a number of fast-spectrum cases with reflectors are not well described. The results for the shielding benchmarks are mostly similar to those for the other libraries. Some isolated cases with differences are identified.

  19. Benchmarking in Foodservice Operations.

    DTIC Science & Technology

    2007-11-02

    51. Lingle JH, Schiemann WA. From balanced scorecard to strategic gauges: Is measurement worth it? Mgt Rev. 1996; 85(3):56-61. 52. Struebing L... studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons (49). Benchmarking was not industrial tourism, a... not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism. Benchmarking was a complete process

  20. Is the 10th and 11th Intercostal Space a Safe Approach for Percutaneous Nephrostomy and Nephrolithotomy?

    SciTech Connect

    Muzrakchi, Ahmed Al; Szmigielski, W.; Omar, Ahmed J.S.; Younes, Nagy M.

    2004-09-15

    The aim of this study was to determine the rate of complications in percutaneous nephrostomy (PCN) and nephrolithotomy (PCNL) performed through the 11th and 10th intercostal spaces using our monitoring technique and to discuss the safety of the procedure. Out of 398 PCNs and PCNLs carried out during a 3-year period, 56 patients had 57 such procedures performed using an intercostal approach. The 11th intercostal route was used in 42 and the 10th in 15 cases. One patient had two separate nephrostomies performed through the 10th and 11th intercostal spaces. The technique utilizes bi-planar fluoroscopy with a combination of a conventional angiographic machine to provide anterior-posterior fluoroscopy and a C-arm mobile fluoroscopy machine to give a lateral view, displayed on two separate monitors. None of the patients had clinically significant thoracic or abdominal complications. Two patients had minor chest complications. Only one developed changes (plate atelectasis, elevation of the hemi-diaphragm) directly related to the nephrostomy (2%). The second patient had bilateral plate atelectasis and unilateral congestive lung changes after PCNL. These changes were not necessarily related to the procedure but rather to general anesthesia during nephrolithotomy. The authors consider PCN or PCNL through the intercostal approach a safe procedure with a negligible complication rate, provided that it is performed under bi-planar fluoroscopy, which allows determination of the skin entry point just below the level of pleural reflection and provides three-dimensional monitoring of advancement of the puncturing needle toward the target entry point.

  1. Analysis and test for space shuttle propellant dynamics (1/10th scale model test results). Volume 1: Technical discussion

    NASA Technical Reports Server (NTRS)

    Berry, R. L.; Tegart, J. R.; Demchak, L. J.

    1979-01-01

    Space shuttle propellant dynamics during ET/Orbiter separation in the RTLS (return to launch site) mission abort sequence were investigated in a test program conducted in the NASA KC-135 "Zero G" aircraft using a 1/10th-scale model of the ET LOX tank. Low-g parabolas were flown, from which thirty tests were selected for evaluation. Data on the nature of low-g propellant reorientation in the ET LOX tank, and measurements of the forces exerted on the tank by the moving propellant, will provide a basis for correlation with an analytical model of the slosh phenomenon.

  2. Cold winters in Poland in the period from 10th century to the first decade of 21st century

    NASA Astrophysics Data System (ADS)

    Limanowka, D.; Cebulak, E.; Pyrc, R.

    2010-09-01

    Extreme weather phenomena, together with their exceptional course and intensity, have always been dangerous for people. In historical documents such phenomena were recorded as basic disasters. The first notes about weather phenomena in Polish lands were made in the 10th century. The research covered extremely cold and snowy winters, which were described in historical documents as extreme meteorological phenomena. Data from the period of instrumental measurements in the 20th century were studied in detail, and the results were referred to the last 500 years. The information obtained gives an approximate picture of extreme winters in historical times in Polish lands. All available multi-proxy data were used.

  3. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It is exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  4. Fortified Settlements of the 9th and 10th Centuries ad in Central Europe: Structure, Function and Symbolism

    PubMed Central

    Herold, Hajnalka

    2012-01-01

    The structure, function(s) and symbolism of early medieval (9th–10th centuries ad) fortified settlements from central Europe, in particular today’s Austria, Hungary, Czech Republic and Slovakia, are examined in this paper. It offers an overview of the current state of research together with new insights based on analysis of the site of Gars-Thunau in Lower Austria. Special emphasis is given to the position of the fortified sites in the landscape, to the elements of the built environment and their spatial organisation, as well as to graves within the fortified area. The region under study was situated on the SE border of the Carolingian (and later the Ottonian) Empire, with some of the discussed sites lying in the territory of the ‘Great Moravian Empire’ in the 9th and 10th centuries. These sites can therefore provide important comparative data for researchers working in other parts of the Carolingian Empire and neighbouring regions. PMID:23564981

  5. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283

  6. Stability and Change in Interests: A Longitudinal Study of Adolescents from Grades 8 through 12

    ERIC Educational Resources Information Center

    Tracey, Terence J. G.; Robbins, Steven B.; Hofsess, Christy D.

    2005-01-01

    The patterns of RIASEC interests and academic skills were assessed longitudinally from a large-scale national database at three time points: 8th grade, 10th grade, and 12th grade. Validation and cross-validation samples of 1000 males and 1000 females in each set were used to test the pattern of these scores over time relative to mean changes,…

  7. Differential Effects on Student Demographic Groups of Using ACT® College Readiness Assessment Composite Score, Act Benchmarks, and High School Grade Point Average for Predicting Long-Term College Success through Degree Completion. ACT Research Report Series, 2013 (5)

    ERIC Educational Resources Information Center

    Radunzel, Justine; Noble, Julie

    2013-01-01

    In this study, we evaluated the differential effects on racial/ethnic, family income, and gender groups of using ACT® College Readiness Assessment Composite score and high school grade point average (HSGPA) for predicting long-term college success. Outcomes included annual progress towards a degree (based on cumulative credit-bearing hours…

  8. Mask Waves Benchmark

    DTIC Science & Technology

    2007-10-01

    24. Measured frequency vs. set frequency for all data ... 25. Benchmark Probe #1 wave amplitude variation ... A-24. Wave amplitude by probe, blower speed, lip setting for 0.768 Hz on the short bank ... frequency and wavemaker bank ... B-1. Coefficient of variation as percentage for all conditions for long bank and bridge

  9. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  10. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.

  11. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

    The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
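The algorithmic pattern the abstract attributes to the MCB (particle creation, tracking, tallying, destruction) can be illustrated with a minimal serial sketch. This is not the MCB code itself; the absorption model and parameters are hypothetical stand-ins:

```python
# Minimal sketch of the Monte Carlo pattern the MCB models:
# particle creation, particle tracking, tallying, and destruction.
# The absorption probability and step limit are hypothetical.
import random

def run_mcb_sketch(n_particles, absorb_prob=0.3, max_steps=50, seed=1):
    random.seed(seed)
    tally = {"absorbed": 0, "escaped": 0}
    for _ in range(n_particles):            # particle creation
        for _ in range(max_steps):          # particle tracking
            if random.random() < absorb_prob:
                tally["absorbed"] += 1      # tally, then destruction
                break
        else:
            tally["escaped"] += 1           # survived all steps
    return tally

t = run_mcb_sketch(1000)
```

In the real benchmark the per-particle work would be distributed over MPI ranks, with particles traded among processors; here everything runs in a single process to keep the pattern visible.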

  12. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  13. HPCS HPCchallenge Benchmark Suite

    DTIC Science & Technology

    2007-11-02

    measured HPCchallenge Benchmark performance on various HPC architectures — from Cray X1s to Beowulf clusters — in the presentation and paper ... using the updated results at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi ... Even a small percentage of random

  14. The Interpretations and Applications of Boethius's Introduction to the Arithmetic II,1 at the End of the 10th Century

    NASA Astrophysics Data System (ADS)

    Otisk, Marek

    This paper deals with comments and glosses on the first chapter of the second book of Boethius's Introduction to Arithmetic from the last quarter of the 10th century. Those texts were written by Gerbert of Aurillac (Scholium ad Boethii Arithmeticam Institutionem l. II, c. 1), Abbo of Fleury (commentary on the Calculus by Victorius of Aquitaine, the so-called De numero, mensura et pondere), Notker of Liège (De superparticularibus), and an anonymous author (De arithmetica Boetii). The main aim of this paper is to show that Boethius's statements in this work about converting numerical sequences to equality can be interpreted in at least two different ways. The paper also discusses the application of this topic in the other liberal arts (astronomy, music, grammar, etc.) and in playing rithmomachia, the medieval philosophers' game.

  15. Optical and microphysical properties of mineral dust and biomass burning aerosol observed over Warsaw on 10th July 2013

    NASA Astrophysics Data System (ADS)

    Janicka, Lucja; Stachlewska, Iwona; Veselovskii, Igor; Baars, Holger

    2016-04-01

    Biomass burning aerosol originating from Canadian forest fires was widely observed over Europe in July 2013. Favorable weather conditions caused a long-term westward flow of smoke from Canada to Western and Central Europe. During this period, the PollyXT lidar of the University of Warsaw took wavelength-dependent measurements in Warsaw. On July 10th, a short event of simultaneous advection of Canadian smoke and Saharan dust at different altitudes was observed over Warsaw. The different origins of the two air masses were indicated by backward trajectories from the HYSPLIT model. Lidar measurements performed at several wavelengths (1064, 532, 355 nm), also using Raman and depolarization channels in the VIS and UV, allowed the physical differences between these two types of aerosol to be distinguished. The optical properties served as input for the retrieval of microphysical properties. Comparisons of the microphysical and optical properties of the observed biomass burning aerosol and mineral dust will be presented.

  16. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  17. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

    The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
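The rank layout described above can be written out as a small sketch; the function names are illustrative, not part of the benchmark itself:

```python
# Sketch of the LinkTest rank layout: num_cores core ranks come first,
# then num_nbors neighbor ranks per core rank, in core-rank order.
# Function names are hypothetical, for illustration only.

def total_ranks(num_cores, num_nbors):
    """Total MPI ranks required for the layout."""
    return num_cores + num_cores * num_nbors

def neighbors_of(core_rank, num_cores, num_nbors):
    """Ranks acting as neighbors for a given core rank (0-indexed)."""
    start = num_cores + core_rank * num_nbors
    return list(range(start, start + num_nbors))

assert total_ranks(8, 4) == 40              # the 8-core, 4-neighbor example
print(neighbors_of(0, 8, 4))                # -> [8, 9, 10, 11]
```

So for the 8-core example, ranks 0-7 sit on the core node, ranks 8-11 are the neighbors of core rank 0, ranks 12-15 of core rank 1, and so on through rank 39.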

  18. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

    The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  19. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

    AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the Boomer AMG solver in the hypre library, a large linear solvers library that is being developed in the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided in the benchmark can build various test problems. The default problem is a Laplace type problem on an unstructured domain with various jumps and an anisotropy in one part.

  20. MPI Multicore Linktest Benchmark

    SciTech Connect

    Schulz, Martin

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  1. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems.
As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.

  2. International Classification of Diseases 10th edition-based disability adjusted life years for measuring of burden of specific injury

    PubMed Central

    Kim, Yu Jin; Shin, Sang Do; Park, Hye Sook; Song, Kyoung Jun; Cho, Jin Sung; Lee, Seung Chul; Kim, Sung Chun; Park, Ju Ok; Ahn, Ki Ok; Park, Yu Mi

    2016-01-01

    Objective We aimed to develop an International Classification of Diseases (ICD) 10th edition injury code-based disability-adjusted life year (DALY) to measure the burden of specific injuries. Methods Three independent panels used novel methods to score disability weights (DWs) of 130 indicator codes sampled from 1,284 ICD injury codes. The DWs were interpolated into the remaining injury codes (n=1,154) to estimate DWs for all ICD injury codes. The reliability of the estimated DWs was evaluated using the test-retest method. We calculated ICD-DALYs for individual injury episodes using the DWs from the Korean National Hospital Discharge Injury Survey (HDIS, n=23,160 of 2004) database and compared them with DALY based on a global burden of disease study (GBD-DALY) regarding validation, correlation, and agreement for 32 injury categories. Results Using 130 ICD 10th edition injury indicator codes, three panels determined the DWs with the highest reliability (person trade-off 1, Spearman r=0.724, 0.788, and 0.875 for the three panel groups). The test-retest results for reliability were excellent (Spearman r=0.932) (P<0.001). The HDIS database revealed injury burden (years) as follows: GBD-DALY (138,548), GBD-years of life disabled (130,481), and GBD-years of life lost (8,117) versus ICD-DALY (262,246), ICD-years of life disabled (255,710), and ICD-years of life lost (6,537), respectively. Spearman’s correlation coefficient of the DALYs between the two methods was 0.759 (P<0.001), and the Bland-Altman test displayed acceptable agreement, with the exception of two categories among the 32 injury groups. Conclusion The ICD-DALY was developed to calculate the burden of injury for all injury codes and was validated with the GBD-DALY. The ICD-DALY was higher than the GBD-DALY but showed acceptable agreement. PMID:28168229
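The burden figures reported above decompose as DALY = YLD (years of life disabled) + YLL (years of life lost), with the YLD term driven by a disability weight. A minimal sketch of that decomposition, using hypothetical values rather than the study's data:

```python
# DALY decomposition sketch: DALY = YLD + YLL, where YLD is scaled by a
# disability weight (DW). Values below are hypothetical, not from the study.

def yld(disability_weight, years_with_disability):
    """Years of life disabled for one episode."""
    return disability_weight * years_with_disability

def daly(yld_years, yll_years):
    """Disability-adjusted life years: YLD plus YLL."""
    return yld_years + yll_years

# Hypothetical injury episode: DW of 0.2 carried for 5 years, no life lost.
episode_daly = daly(yld(0.2, 5.0), 0.0)
```

Summing such per-episode DALYs over all coded injury records is what yields the aggregate ICD-DALY and GBD-DALY totals compared in the abstract.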

  3. The 10th anniversary of the Junior Members and Affiliates of the European Academy of Allergy and Clinical Immunology.

    PubMed

    Skevaki, Chrysanthi L; Maggina, Paraskevi; Santos, Alexandra F; Rodrigues-Alves, Rodrigo; Antolin-Amerigo, Dario; Borrego, Luis Miguel; Bretschneider, Isabell; Butiene, Indre; Couto, Mariana; Fassio, Filippo; Gardner, James; Xatzipsalti, Maria; Hovhannisyan, Lilit; Hox, Valerie; Makrinioti, Heidi; O Neil, Serena E; Pala, Gianni; Rudenko, Michael; Santucci, Annalisa; Seys, Sven; Sokolowska, Milena; Whitaker, Paul; Heffler, Enrico

    2011-12-01

    This year is the 10th anniversary of the European Academy of Allergy and Clinical Immunology (EAACI) Junior Members and Affiliates (JMAs). The aim of this review is to highlight the work and activities of EAACI JMAs. To this end, we have summarized all the initiatives taken by JMAs during the last 10 yr. EAACI JMAs are currently a group of over 2380 clinicians and scientists under the age of 35 yr, who support the continuous education of the Academy's younger members. For the past decade, JMAs have enjoyed a steadily increasing number of benefits, such as free online access to the Academy's journals, the possibility to apply for Fellowships and the Mentorship Program, travel grants to attend scientific meetings, and many more. In addition, JMAs have been involved in task forces, cooperation schemes with other scientific bodies, the organization of JMA-focused sessions during EAACI meetings, and participation in the activities of EAACI communication platforms. EAACI JMA activities represent an ideal example of recruiting, training, and educating young scientists in order for them to thrive as future experts in their field. This model may serve as a prototype for other scientific communities, several of which have already adopted similar policies.

  4. XAFS study of copper and silver nanoparticles in glazes of medieval middle-east lustreware (10th-13th century)

    NASA Astrophysics Data System (ADS)

    Padovani, S.; Puzzovio, D.; Sada, C.; Mazzoldi, P.; Borgia, I.; Sgamellotti, A.; Brunetti, B. G.; Cartechini, L.; D'Acapito, F.; Maurizio, C.; Shokoui, F.; Oliaiy, P.; Rahighi, J.; Lamehi-Rachti, M.; Pantos, E.

    2006-06-01

    It has recently been shown that lustre decoration of medieval and Renaissance pottery consists of silver and copper nanoparticles dispersed in the glassy matrix of the ceramic glaze. Here the findings of an X-ray absorption fine structure (XAFS) study on lustred glazes of shards belonging to 10th and 13th century pottery from the National Museum of Iran are reported. Absorption spectra in the visible range have also been measured in order to investigate the relations between colour and glaze composition. Gold colour is mainly due to Ag nanoparticles, though Ag+, Cu+ and Cu2+ ions can also be dispersed within the glassy matrix, with different ratios. Red colour is mainly due to Cu nanoparticles, although some Ag nanoparticles, Ag+ and Cu+ ions can be present. The achievement of metallic Cu and the absence of Cu2+ indicate a higher reduction of copper in red lustre. These findings are in substantial agreement with previous results on Italian Renaissance pottery. In spite of the large heterogeneity of cases, the presence of copper and silver ions in the glaze confirms that lustre formation is mediated by a copper- and silver-alkali ion exchange, followed by nucleation and growth of metal nanoparticles.

  5. [Thoracopagus symmetricus. On the separation of Siamese twins in the 10th century A. D. by Byzantine physicians].

    PubMed

    Geroulanos, S; Jaggi, F; Wydler, J; Lachat, M; Cakmakci, M

    1993-01-01

    The Byzantine author Leon Diakonos mentions in 974/975 A.D. a pair of "Siamese twins", i.e., a thoracopagus symmetricus. He had seen them personally several times in Asia Minor when they were about 30 years old. This pair is possibly the same that was "successfully" surgically separated, after the death of one of the twins, in the second half of the 10th century in Constantinople. This operation is mentioned by two historiographers, Leon Grammatikos and Theodoros Daphnopates. Although the second twin survived the operation, he died three days later. In spite of its lethal outcome, the operation left a long-lasting impression on the historians of that time and was even mentioned 150 years later by Johannes Skylitzes. Furthermore, the manuscript of Skylitzes, now in the library of Madrid, contains a miniature illuminating this operation. This is likely to be the earliest written report of a separation of Siamese twins, illustrating the high standard of Byzantine medicine of that time.

  6. Genomic variation in a global village: report of the 10th annual Human Genome Variation Meeting 2008.

    PubMed

    Brookes, Anthony J; Chanock, Stephen J; Hudson, Thomas J; Peltonen, Leena; Abecasis, Gonçalo; Kwok, Pui-Yan; Scherer, Stephen W

    2009-07-01

    The Centre for Applied Genomics of the Hospital for Sick Children and the University of Toronto hosted the 10th Human Genome Variation (HGV) Meeting in Toronto, Canada, in October 2008, welcoming about 240 registrants from 34 countries. During the 3 days of plenary workshops, keynote address, and poster sessions, a strong cross-disciplinary trend was evident, integrating expertise from technology and computation, through biology and medicine, to ethics and law. Single nucleotide polymorphisms (SNPs) as well as the larger copy number variants (CNVs) are recognized by ever-improving array and next-generation sequencing technologies, and the data are being incorporated into studies that are increasingly genome-wide as well as global in scope. A greater challenge is to convert data to information, through databases, and to use the information for greater understanding of human variation. In the wake of publications of the first individual genome sequences, an inaugural public forum provided the opportunity to debate whether we are ready for personalized medicine through direct-to-consumer testing. The HGV meetings foster collaboration, and fruits of the interactions from 2008 are anticipated for the 11th annual meeting in September 2009.

  7. Genomic Variation in a Global Village: Report of the 10th Annual Human Genome Variation Meeting 2008

    PubMed Central

    Brookes, Anthony J.; Chanock, Stephen J.; Hudson, Thomas J.; Peltonen, Leena; Abecasis, Gonçalo; Kwok, Pui-Yan; Scherer, Stephen W.

    2013-01-01

    The Centre for Applied Genomics of the Hospital for Sick Children and the University of Toronto hosted the 10th Human Genome Variation (HGV) Meeting in Toronto, Canada, in October 2008, welcoming about 240 registrants from 34 countries. During the 3 days of plenary workshops, keynote address, and poster sessions, a strong cross-disciplinary trend was evident, integrating expertise from technology and computation, through biology and medicine, to ethics and law. Single nucleotide polymorphisms (SNPs) as well as the larger copy number variants (CNVs) are recognized by ever-improving array and next-generation sequencing technologies, and the data are being incorporated into studies that are increasingly genome-wide as well as global in scope. A greater challenge is to convert data to information, through databases, and to use the information for greater understanding of human variation. In the wake of publications of the first individual genome sequences, an inaugural public forum provided the opportunity to debate whether we are ready for personalized medicine through direct-to-consumer testing. The HGV meetings foster collaboration, and fruits of the interactions from 2008 are anticipated for the 11th annual meeting in September 2009. PMID:19384970

  8. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

    Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  9. The Effects of Game-Based Learning and Anticipation of a Test on the Learning Outcomes of 10th Grade Geology Students

    ERIC Educational Resources Information Center

    Chen, Chia-Li Debra; Yeh, Ting-Kuang; Chang, Chun-Yen

    2016-01-01

    This study examines whether a Role Play Game (RPG) with embedded geological contents and students' anticipation of an upcoming posttest significantly affect high school students' achievements of and attitudes toward geology. The participants of the study were comprised of 202 high school students, 103 males and 99 females. The students were…

  10. Criterion-Related Validity of Curriculum-Based Measurement in Writing with Narrative and Expository Prompts Relative to Passage Copying Speed in 10th Grade Students

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Martinez, Rebecca S.; Faust, Dennis; Mitchell, Rachel R.

    2012-01-01

    We investigated the criterion-related validity of four indicators of curriculum-based measurement in writing (WCBM) when using expository versus narrative writing prompts as compared to the validity of passage copying speed. Specifically, we compared criterion-related validity of production-dependent (total words written, correct word sequences),…

  11. Nebraska Vocational Agribusiness Curriculum for City Schools. Career Opportunities in Agribusiness. Basic Skill in Agribusiness. A Curriculum Guide. 10th Grade.

    ERIC Educational Resources Information Center

    Nebraska Univ., Lincoln. Dept. of Agricultural Education.

    Designed for use with high school sophomores, this agribusiness curriculum for city schools contains thirty-one units of instruction in the areas of career opportunities in agribusiness and vocational agribusiness skills. Among the units included are (1) Career Selection, (2) Parliamentary Procedure and Public Speaking, (3) Career Opportunities in…

  12. A Comparison Study of AVID and GEAR UP 10th-Grade Students in Two High Schools in the Rio Grande Valley of Texas

    ERIC Educational Resources Information Center

    Watt, Karen M.; Huerta, Jeffery; Lozano, Aliber

    2007-01-01

    This study examines 4 groups of high school students enrolled in 2 college preparatory programs, AVID and GEAR UP. Differences in student educational aspirations, expectations and anticipations, knowledge of college entrance requirements, knowledge of financial aid, and academic achievement in mathematics were examined. Adelman's (1999)…

  13. Carpenter, tractors and microbes for the development of logical-mathematical thinking - the way 10th graders and pre-service teachers solve thinking challenges

    NASA Astrophysics Data System (ADS)

    Gazit, Avikam

    2012-12-01

    The objective of this case study was to investigate the ability of 10th graders and pre-service teachers to solve logical-mathematical thinking challenges. The challenges do not require mathematical knowledge beyond that of primary school but rather an informed use of the problem representation. The percentage of correct answers given by the 10th graders was higher than that of the pre-service teachers. Unlike the 10th graders, some of whom used various strategies for representing the problem, most of the pre-service teachers' answers were based on a technical algorithm, without using control processes. The obvious conclusion drawn from the findings supports and recommends expanding and enhancing the development of logical-mathematical thinking, both in specific lessons and as an integral part of other lessons in pre-service frameworks.

  14. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
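    The guide's own metric definitions are not reproduced in this abstract, but a minimal sketch of a whole-facility metric of the kind such guides track, power usage effectiveness (PUE, total facility energy divided by IT equipment energy), might look like the following (the function name and the figures are illustrative assumptions, not taken from the guide):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 would mean all energy reaches the IT equipment; real
    data centers run above that, and lower values are better.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A hypothetical facility drawing 1800 MWh/yr overall with 1000 MWh/yr of IT load:
print(pue(1800, 1000))  # 1.8
```

    Tracking such a metric over time, as the guide's step-by-step process suggests, is what turns a one-off audit into a benchmark.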

  15. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

    Current regulations for the design of new fuel cycles for nuclear power installations require a calculational justification performed by certified computer codes. This guarantees that the calculational results obtained will be within the limits of the declared uncertainties indicated in the certificate issued for the corresponding computer code by Gosatomnadzor of the Russian Federation (GAN). A formal justification of the declared uncertainties is the comparison of calculational results obtained by a commercial code with the results of experiments, or of calculational tests computed with a defined uncertainty by certified precision codes such as those of the MCU type. The current level of international cooperation allows enlargement of the bank of experimental and calculational benchmarks acceptable for certification of the commercial codes used to design fuel loadings with MOX fuel. In particular, work is practically finished on forming the list of calculational benchmarks for certification of the TVS-M code as applied to MOX fuel assembly calculations. The results of these activities are presented.

  16. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

    Integral to photovoltaics is the need to provide improved economic viability. To achieve this goal, photovoltaic technology has to be able to harness more light at less cost. A large variety of concentrating photovoltaic concepts has provided cause for pursuit. To obtain a detailed profitability analysis, a flexible evaluation is crucial for benchmarking the cost-performance of this variety of concentrating photovoltaic concepts. To save time and capital, a way to estimate the cost-performance of a complete solar energy system is to use computer aided modeling. In this work a benchmark tool is introduced based on a modular programming concept. The overall implementation is done in MATLAB whereas Advanced Systems Analysis Program (ASAP) is used for ray tracing calculations. This allows for a flexible and extendable structuring of all important modules, namely an advanced source modeling including time and local dependence, and an advanced optical system analysis of various optical designs to obtain an evaluation of the figure of merit. An important figure of merit: the energy yield for a given photovoltaic system at a geographical position over a specific period, can be calculated.

  17. What Do 2nd and 10th Graders Have in Common? Worms and Technology: Using Technology to Collaborate across Boundaries

    ERIC Educational Resources Information Center

    Culver, Patti; Culbert, Angie; McEntyre, Judy; Clifton, Patrick; Herring, Donna F.; Notar, Charles E.

    2009-01-01

    The article is about the collaboration between two classrooms that enabled a second grade class to participate in a high school biology class. Through the use of modern video conferencing equipment, Mrs. Culbert, with the help of the Dalton State College Educational Technology Training Center (ETTC), set up a live, two way video and audio feed of…

  18. 21st Century Curriculum: Does Auto-Grading Writing Actually Work?

    ERIC Educational Resources Information Center

    T.H.E. Journal, 2013

    2013-01-01

    The West Virginia Department of Education's auto-grading initiative dates back to 2004--a time when school districts were making their first forays into automation. The Charleston-based WVDE had instituted a statewide writing assessment in 1984 for students in fourth, seventh, and 10th grades and was looking to expand that program without having…

  19. Cleanroom energy benchmarking results

    SciTech Connect

    Tschudi, William; Xu, Tengfang

    2001-09-01

    A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information to compare their cleanroom's performance over time, or to others. Direct comparison of energy performance by traditional means, such as watts/ft{sup 2}, is not a good indicator with the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency and they identify other non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  20. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  1. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Creating Cultures of Peace: Pedagogical Thought and Practice. Selected Papers from the 10th Triennial World Conference (September 10-15, 2001, Madrid, Spain)

    ERIC Educational Resources Information Center

    Benton, Jean E., Ed.; Swami, Piyush, Ed.

    2007-01-01

    The 10th Triennial World Conference of the World Council for Curriculum and Instruction (WCCI) was held September 10-15, 2001 in Madrid, Spain. The theme of the conference was "Cultures of Peace." Thirty-four papers and presentations are divided into nine sections. Part I, Tributes to the Founders of WCCI, includes: (1) Tribute to Alice…

  3. Carpenter, Tractors and Microbes for the Development of Logical-Mathematical Thinking--The Way 10th Graders and Pre-Service Teachers Solve Thinking Challenges

    ERIC Educational Resources Information Center

    Gazit, Avikam

    2012-01-01

    The objective of this case study was to investigate the ability of 10th graders and pre-service teachers to solve logical-mathematical thinking challenges. The challenges do not require mathematical knowledge beyond that of primary school but rather an informed use of the problem representation. The percentage of correct answers given by the 10th…

  4. Advances in Classification Research. Volume 10. Proceedings of the ASIS SIG/CR Classification Research Workshop (10th, Washington, DC, November 1-5, 1999). ASIST Monograph Series.

    ERIC Educational Resources Information Center

    Albrechtsen, Hanne, Ed.; Mai, Jens-Erik, Ed.

    This volume is a compilation of the papers presented at the 10th ASIS (American Society for Information Science) workshop on classification research. Major themes include the social and cultural informatics of classification and coding systems, subject access and indexing theory, genre analysis and the agency of documents in the ordering of…

  5. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  6. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.

  7. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  8. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively vis-a-vis the rate at which they can process data, and qualitatively by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator which produces and outputs datums at a high rate in a specific format. The second is an analytic which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
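    The generator/analytic split described above can be illustrated with a toy sketch (this is not FireHose code; the datum format and anomaly rule here are invented for illustration):

```python
import random

def generator(n, anomaly_rate=0.01, seed=42):
    """Emit n datums as (key, payload) tuples at whatever rate the consumer
    can pull them; a small random fraction are marked anomalous, standing in
    for the anomalies the FireHose generators plant in their own formats."""
    rng = random.Random(seed)
    for key in range(n):
        yield (key, "ANOM" if rng.random() < anomaly_rate else "OK")

def analytic(stream):
    """Read the stream of datums and report the keys of anomalous ones."""
    return [key for key, payload in stream if payload == "ANOM"]

anomalies = analytic(generator(10_000))
print(len(anomalies))  # roughly 1% of 10,000
```

    In the real suite the generator runs at a fixed high rate and the analytic is judged both on keeping up and on finding the planted anomalies; re-implementing the analytic in one's own framework is the point of the exercise.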

  9. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  10. Grade Span.

    ERIC Educational Resources Information Center

    Renchler, Ron

    2000-01-01

    This issue reviews grade span, or grade configuration. Catherine Paglin and Jennifer Fager's "Grade Configuration: Who Goes Where?" provides an overview of issues and concerns related to grade spans and supplies profiles of eight Northwest schools with varying grade spans. David F. Wihry, Theodore Coladarci, and Curtis Meadow's…

  11. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

    Benchmarking in academic pharmacy and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  12. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  13. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences with standard state estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  14. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, was developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and…
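    As a toy illustration of what such a comparison involves (not Cal-Arch's actual method; the names and numbers are invented), a building's energy use intensity can be ranked against a peer dataset:

```python
def eui(annual_kwh, floor_area_ft2):
    """Energy use intensity: annual energy use per square foot."""
    return annual_kwh / floor_area_ft2

def percentile_rank(value, peer_values):
    """Percentage of peer buildings with lower EUI than this building."""
    below = sum(1 for p in peer_values if p < value)
    return 100.0 * below / len(peer_values)

# A hypothetical 10,000 ft2 building using 150,000 kWh/yr, against five peers:
building = eui(150_000, 10_000)                          # 15.0 kWh/ft2/yr
print(percentile_rank(building, [10, 12, 14, 16, 18]))   # 60.0
```

    A building near the top of its peer distribution is a candidate for an audit; one near the bottom may already be a "best-practice" benchmark for others.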

  15. Data-Intensive Benchmarking Suite

    SciTech Connect

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.

  16. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  17. Report of the 10th Asia-Pacific Federation of Societies for Surgery of the Hand Congress (Organising Chair and Scientific Chair).

    PubMed

    A, Roohi Sharifah; Abdullah, Shalimar

    2016-10-01

    A report on the 10th Asia-Pacific Federation of Societies for the Surgery of the Hand and 6th Asia-Pacific Federation of Societies for Hand Therapists congress is submitted, detailing the number of attendees participating, papers presented, and support received, as well as some of the challenges faced and how best to overcome them, from the point of view of the local conference chair and scientific chair.

  18. Injuries and Physical Fitness Before and After Deployments of the 10th Mountain Division to Afghanistan and the 1st Cavalry Division to Iraq, September 2005 - October 2008

    DTIC Science & Technology

    2008-10-01

    determined using the McNemar Test. The McNemar Test allows comparison of frequency data involving repeated measures on the same individuals.(71) [Tabular content garbled in extraction: injury incidence before and after deployment for the 10th Mountain Division cohort (n = 505 men) and the 1st Cavalry Division cohort (n = 3,242 men), with McNemar Test p-values.]

  19. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.

  20. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance. The performance impact of optimization in the context of our methodology for CPU performance characterization was based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the afore-mentioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.

  1. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.
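    The article's advice to use the median as the all-in benchmark comes down to percentile arithmetic on survey data; a minimal sketch (the survey figures are invented, and real compensation surveys define their percentile methods precisely) might be:

```python
def percentile(values, pct):
    """Percentile by linear interpolation between closest ranks."""
    s = sorted(values)
    if len(s) == 1:
        return float(s[0])
    rank = (pct / 100) * (len(s) - 1)
    lo = int(rank)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (rank - lo)

# Hypothetical all-in compensation survey values (dollars):
survey = [310_000, 355_000, 402_000, 450_000, 520_000]
print(percentile(survey, 50))  # 402000.0 -- the median, i.e. the all-in benchmark
```

    The same function supports the first principle as well: a lower percentile, e.g. `percentile(survey, 40)`, may be the appropriate target for a given physician.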

  2. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  3. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  4. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820
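    As a toy illustration of the error-driven closed-loop idea (not the paper's benchmark suite or its learning rule), the following sketch adapts a bias term to cancel an unknown constant external force acting on a 1-D point mass; all gains and values are assumptions:

```python
# Minimal, hypothetical closed-loop sketch: an error-driven update adapts a
# learned bias term so a fixed PD controller can cancel an unknown constant
# external force on a 1-D point mass.

def simulate(learn_rate, steps=2000, dt=0.01):
    x, v, w = 1.0, 0.0, 0.0          # position error, velocity, learned bias
    f_ext = 2.0                       # unknown external force (assumption)
    kp, kd = 10.0, 5.0                # fixed PD gains (assumptions)
    for _ in range(steps):
        u = -kp * x - kd * v + w      # control output fed to the plant
        w -= learn_rate * x * dt      # error-driven update of the learned term
        v += dt * (u + f_ext)         # closed loop: the plant responds to u
        x += dt * v
    return abs(x)                     # residual tracking error
```

With learning enabled, the residual error shrinks toward zero; without it, the unknown force leaves a steady-state offset of roughly f_ext / kp = 0.2.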

  5. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  6. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  7. National healthcare capital project benchmarking--an owner's perspective.

    PubMed

    Kahn, Noah

    2009-01-01

    Few sectors of the economy have been left unscathed in these economic times. Healthcare construction has been less affected than residential and nonresidential construction sectors, but driven by re-evaluation of healthcare system capital plans, projects are now being put on hold or canceled. The industry is searching for ways to improve the value proposition for project delivery and process controls. In other industries, benchmarking component costs has led to significant, sustainable reductions in costs and cost variations. Kaiser Permanente and the Construction Industry Institute (CII), a research component of the University of Texas at Austin, an industry leader in benchmarking, have joined with several other organizations to work on a national benchmarking and metrics program to gauge the performance of healthcare facility projects. This initiative will capture cost, schedule, delivery method, change, functional, operational, and best practice metrics. This program is the only one of its kind. The CII Web-based interactive reporting system enables a company to view its information and mine industry data. Benchmarking is a tool for continuous improvement that is capable not only of grading outcomes; it can inform all aspects of the healthcare design and construction process and ultimately help moderate the increasing cost of delivering healthcare.

  8. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR reactor technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  9. Engine Benchmarking - Final CRADA Report

    SciTech Connect

    Wallner, Thomas

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  10. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made among the programs' codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  11. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests the efficiency with which a computer executes routines in each language. Available for any computer equipped with a validated Ada compiler and/or a Common Lisp system.

  12. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  13. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  14. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-12-08

    Politics, Elections, and Benchmarks Congressional Research Service 2 Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation... security control over areas inhabited by Kurds, and the Kurds' claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  15. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-01-15

    referendum on whether... Kirkuk (Tamim province) would join the Kurdish... areas inhabited by Kurds, and the Kurds' claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These disputes were aggravated

  16. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-04-28

    Kirkuk (Tamim province) would join the Kurdish region (Article 140)... 18 Maliki: 8; INA: 9; Iraqiyya: 1. Sulaymaniyah 17 Kurdistan Alliance: 8; other Kurds: 9. Kirkuk (Tamim) 12 Iraqiyya: 6; Kurdistan Alliance: 6

  17. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-10-21

    Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation of Islam as "a main source" of... security control over areas inhabited by Kurds, and the Kurds' claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  18. Benchmark 3 - Incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Elford, Michael; Saha, Pradip; Seong, Daeyong; Haque, MD Ziaul; Yoon, Jeong Whan

    2013-12-01

    Benchmark-3 is designed to predict strains, punch load, and deformed profile after spring-back during single-tool incremental sheet forming. AA 7075-O material has been selected. A cone shape is formed to 45 mm depth with an angle of 45°. Problem description, material properties, and simulation reports with experimental data are summarized.

  19. [Congresses of the Croatian Medical Association regarding unpublished proceedings of the 10th congress in Zadar on September 25-28, 1996].

    PubMed

    Drazancić, Ante

    2011-01-01

    The first annual meeting of Croatian physicians, with the characteristics of a congress, was held in 1899 on the 25th anniversary of the Croatian Medical Association. From 1954 to 1996, ten congresses of the Association were held. Reflecting the development of modern medicine, the congresses were devoted to a range of medical questions, including problems of national pathology and the structure and restructuring of health care. The work and content of the congresses were published in proceedings, except for the 8th Congress in 1987 and the 10th in 1996. Reading the main lectures, invited lectures, and free papers conveys the knowledge of that period, and many papers remain instructive even today. The 9th congress is described in detail on the basis of its published proceedings; the 10th is reconstructed from the preserved program, a brief report in the home journal, and ample preserved correspondence. National medical congresses dedicated to technological advancement and to the numerous problems of national pathology remain relevant today: they could help solve many problems of health care, contribute to its improvement, and build consensus on its further development.

  20. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  1. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  2. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  3. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends, identifying situations in which some backends perform more accurately or more quickly. PMID:26539076
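    The backend-agnostic benchmarking idea can be sketched generically (this is not Nengo's actual API; the names and the toy workload are invented): run the same model on several interchangeable backends and record both runtime and deviation from a reference output.

```python
# Generic sketch (hypothetical, not Nengo's API): run one benchmark model on
# several interchangeable backends and collect speed and accuracy metrics.
import time

def run_benchmark(model, backends, reference):
    """Return {backend_name: (runtime_seconds, abs_error_vs_reference)}."""
    results = {}
    for name, simulate in backends.items():
        t0 = time.perf_counter()
        output = simulate(model)
        elapsed = time.perf_counter() - t0
        results[name] = (elapsed, abs(output - reference))
    return results

# Two toy "backends" computing the same quantity with different fidelity.
backends = {
    "exact":  lambda n: sum(1.0 / (i * i) for i in range(1, n + 1)),
    "coarse": lambda n: sum(1.0 / (i * i) for i in range(1, n // 10 + 1)),
}
metrics = run_benchmark(10_000, backends, reference=1.6449340668482264)
```

Because every backend sees the identical model, differences in the collected metrics reflect the backends themselves rather than the workload.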

  4. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  5. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump (GHP) programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  6. Incorporating Ninth-Grade PSAT/NMSQT® Scores into AP Potential™ Predictions for AP® European History and AP World History. Statistical Report 2014-1

    ERIC Educational Resources Information Center

    Zhang, Xiuyuan; Patel, Priyank; Ewing, Maureen

    2015-01-01

    Historically, AP Potential™ correlations and expectancy tables have been based on 10th- and 11th-grade PSAT/NMSQT® examinees and 11th- and 12th-grade AP® examinees for all subjects (Zhang, Patel, & Ewing, 2014; Ewing, Camara, & Millsap, 2006; Camara & Millsap, 1998). However, a large number of students take AP European History and AP…

  7. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    of both application and VHDL code. 5.3.4.1 Lines of Code. The lines of code for each application and VHDL source file shall be reported. This... Developer shall provide source files for the VHDL files used in defining the Virtual Prototype as well as in programming the FPGAs. Benchmark-4... programmable devices running application code written in a high-level source language such as C, except that more detailed models may be required to

  8. MPI Multicore Torus Communication Benchmark

    SciTech Connect

    Schulz, M.

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to achieve optimal mappings; the latter to study the aggregate bandwidths that can be achieved with varying node mappings.
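    For illustration, the static-mapping case relies on each node of a 3-D logical torus having six neighbors, one in each direction along each dimension, with wrap-around at the edges. A minimal sketch of that neighbor structure (the function name and shapes are hypothetical, not part of TorusTest):

```python
# Hypothetical sketch: each node in a 3-D logical torus has six neighbors
# (+1 and -1 along each of the three dimensions, wrapping at the edges).

def torus_neighbors(coord, dims):
    """Return the six neighbor coordinates of `coord` in a torus of shape `dims`."""
    x, y, z = coord
    nx, ny, nz = dims
    return [
        ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
        (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
        (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),
    ]
```

Aggregate bandwidth for a node could then be estimated by summing per-link measurements to each of these six neighbors (the measurement code itself is not shown here).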

  9. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
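    A minimal sketch of the general idea, with invented figures: compute an energy use intensity (kWh per square foot) from each store's own utility data and flag stores well above the portfolio median. The 1.25x threshold is an arbitrary assumption for illustration, not a value from the guideline:

```python
# Illustrative sketch (invented figures): derive a simple benchmark metric,
# energy use intensity (kWh per sq ft), from a chain's own utility data and
# flag stores well above the portfolio median.
from statistics import median

stores = {  # store -> (annual kWh, floor area in sq ft); numbers are made up
    "A": (850_000, 2_400),
    "B": (910_000, 2_500),
    "C": (1_400_000, 2_450),
}

eui = {name: kwh / sqft for name, (kwh, sqft) in stores.items()}
benchmark = median(eui.values())
flagged = [name for name, value in eui.items() if value > 1.25 * benchmark]
```

With these invented numbers, store "C" stands out as needing attention, which is exactly the portfolio-triage question the metric is meant to answer.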

  10. Thermal Performance Benchmarking: Annual Report

    SciTech Connect

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  11. Perceptions of High Achieving African American/Black 10th Graders from a Low Socioeconomic Community Regarding Health Scientists and Desired Careers

    PubMed Central

    Boekeloo, Bradley; Randolph, Suzanne; Timmons-Brown, Stephanie; Wang, Min Qi

    2014-01-01

    Measures are needed to assess youth perceptions about health science careers, to support research aimed at encouraging youth to pursue health science. Although the Indiana Instrument provides an established measure of perceptions regarding nursing and ideal careers, we were interested in learning how high-achieving 10th graders from relatively low socioeconomic areas who identify as Black/African American (Black) perceive health science and ideal careers. The Indiana Instrument was modified, administered to 90 youth of interest, and psychometrically analyzed. Reliable subscales were identified that may facilitate parsimonious, theoretical, and reliable study of youth decision-making regarding health science careers. Such research may help to develop and evaluate strategies for increasing the number of minority health scientists. PMID:25194058

  12. Potential use of biomarkers in acute kidney injury: report and summary of recommendations from the 10th Acute Dialysis Quality Initiative consensus conference.

    PubMed

    Murray, Patrick T; Mehta, Ravindra L; Shaw, Andrew; Ronco, Claudio; Endre, Zoltan; Kellum, John A; Chawla, Lakhmir S; Cruz, Dinna; Ince, Can; Okusa, Mark D

    2014-03-01

    Over the last decade there has been considerable progress in the discovery and development of biomarkers of kidney disease, and several have now been evaluated in different clinical settings. Although there is a growing literature on the performance of various biomarkers in clinical studies, there is limited information on how these biomarkers would be utilized by clinicians to manage patients with acute kidney injury (AKI). Recognizing this gap in knowledge, we convened the 10th Acute Dialysis Quality Initiative meeting to review the literature on biomarkers in AKI and their application in clinical practice. We asked an international group of experts to assess four broad areas for biomarker utilization for AKI: risk assessment, diagnosis, and staging; differential diagnosis; prognosis and management; and novel physiological techniques including imaging. This article provides a summary of the key findings and recommendations of the group, to equip clinicians to effectively use biomarkers in AKI.

  13. Langerhans cell histiocytosis or tuberculosis on a medieval child (Oppidum de la Granède, Millau, France - 10th-11th centuries AD).

    PubMed

    Colombo, Antony; Saint-Pierre, Christophe; Naji, Stephan; Panuel, Michel; Coqueugniot, Hélène; Dutour, Olivier

    2015-06-01

    In 2008, a skeleton of a 1 - 2.5-year-old child radiocarbon dated from the 10th - 11th century AD was discovered on the oppidum of La Granède (Millau, France). It presents multiple cranial osteolytic lesions having punched-out or geographical map-like aspects associated with sequestrum and costal osteitis. A multi 3D digital approach (CT, μCT and virtual reconstruction) enabled us to refine the description and identify the diploic origin of the lytic process. Furthermore, precise observation of the extent of the lesions and associated reorganization of the skeletal micro-structure were possible. From these convergent pieces of evidence, the differential diagnosis led to three possibilities: Langerhans cell histiocytosis, tuberculosis, or Langerhans cell histiocytosis and tuberculosis.

  14. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  15. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133
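    One common way to combine independent estimates of a quantity such as [Fe/H], consistent in spirit with merging results from several analysis methods, is inverse-variance weighting. The sketch below uses invented values, not the paper's actual per-method results:

```python
# Hypothetical sketch: combine [Fe/H] estimates from several analysis methods
# into one value via inverse-variance weighting. Values are invented.

def combine_feh(estimates):
    """estimates: list of ([Fe/H], sigma). Returns (weighted mean, combined sigma)."""
    weights = [1.0 / (s * s) for _, s in estimates]
    total = sum(weights)
    mean = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    return mean, (1.0 / total) ** 0.5

methods = [(-0.10, 0.05), (-0.05, 0.10), (-0.12, 0.04)]  # invented ([Fe/H], sigma)
feh, err = combine_feh(methods)
```

The weighting gives more precise methods a larger say in the combined value, and the combined uncertainty is smaller than any individual sigma.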

  16. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues, and (2) we now have an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes, and (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and provide feedback on procedures, and (2) they welcomed the opportunity to provide feedback on working with NASA.

  17. Tumor Grade

    MedlinePlus

    ... much of the tumor tissue has normal breast (milk) duct structures. Nuclear grade: an evaluation of the ...

  18. [Benchmarking in health care: conclusions and recommendations].

    PubMed

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  19. Benchmarking pathology services: implementing a longitudinal study.

    PubMed

    Gordon, M; Holmes, S; McGrath, K; Neil, A

    1999-05-01

    This paper details the benchmarking process and its application to the activities of pathology laboratories participating in a benchmark pilot study [the Royal College of Pathologists of Australasia (RCPA) Benchmarking Project]. The discussion highlights the primary issues confronted in collecting, processing, analysing and comparing benchmark data. The paper outlines the benefits of engaging in a benchmarking exercise and provides a framework that can be applied across a range of public health settings. This information is then applied to a review of the development of the RCPA Benchmarking Project. Consideration is also given to the nature of the preliminary results of the project and the implications of these results for the ongoing conduct of the study.

  20. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.
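    The loader stress Pynamic emulates can be illustrated with a rough, pure-Python stand-in: generate many trivial modules and time importing them all. This is only a loose sketch under stated assumptions; Pynamic itself builds and links real DLLs, which plain `.py` modules cannot reproduce, so this mimics the import volume but not the dynamic-linker cost.

```python
import importlib
import os
import sys
import tempfile
import time

def time_many_imports(n_modules):
    """Generate n trivial modules in a temp dir and time importing
    them all, as a crude stand-in for a DLL-heavy Python startup."""
    with tempfile.TemporaryDirectory() as d:
        for i in range(n_modules):
            with open(os.path.join(d, f"mod_{i}.py"), "w") as f:
                f.write(f"VALUE = {i}\n")
        sys.path.insert(0, d)
        t0 = time.perf_counter()
        total = 0
        for i in range(n_modules):
            total += importlib.import_module(f"mod_{i}").VALUE
        elapsed = time.perf_counter() - t0
        sys.path.remove(d)
        return elapsed, total

elapsed, total = time_many_imports(50)
```

    Scaling `n_modules` up (Pynamic's configurable knob corresponds roughly to this) exposes how import cost grows with the number of loaded objects.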

  1. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents, and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low environmentally-relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.
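    The BMD idea can be made concrete with a minimal sketch: pick a dose-response model, fix a benchmark response (BMR), and solve for the dose that produces it. The quantal-linear model and the slope value below are illustrative assumptions for this sketch, not part of the EPA guidance itself.

```python
import math

def extra_risk(dose, b):
    """Extra risk under a quantal-linear (one-hit) model
    P(d) = p0 + (1 - p0) * (1 - exp(-b * d)):
    ER(d) = (P(d) - P(0)) / (1 - P(0)) = 1 - exp(-b * d),
    which is independent of the background rate p0."""
    return 1.0 - math.exp(-b * dose)

def benchmark_dose(bmr, b):
    """Dose at which extra risk equals the benchmark response (BMR).
    Solving 1 - exp(-b * d) = bmr gives d = -ln(1 - bmr) / b."""
    return -math.log(1.0 - bmr) / b

# Hypothetical fitted slope b = 0.05 (per mg/kg-day); BMD at 10% extra risk
bmd10 = benchmark_dose(0.10, 0.05)   # a candidate point of departure (POD)
```

    In practice the BMD is obtained by maximum-likelihood fitting over real dose groups, and the POD is usually the lower confidence bound (BMDL) rather than the central estimate.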

  2. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  3. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access). The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and to allow comparison between different ab initio computational methods for the prediction of thermochemical properties.
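    The method-versus-experiment comparison such a database supports often reduces to summary statistics like the mean absolute deviation over a molecule set. A minimal sketch, with invented placeholder values rather than actual database entries:

```python
def mean_absolute_deviation(computed, experimental):
    """Score a computational method against experiment:
    mean |calc - expt| over a benchmark molecule set
    (e.g. enthalpies of formation, in kJ/mol)."""
    assert len(computed) == len(experimental)
    return sum(abs(c - e) for c, e in zip(computed, experimental)) / len(computed)

# Hypothetical enthalpies for three molecules (illustrative numbers only)
mad = mean_absolute_deviation([-74.6, 82.9, -241.5], [-74.5, 83.2, -241.8])
```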

  4. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
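    The patented scheme, a fixed time budget, a scalable task set, and a rating based on how far through the set a machine progresses, can be sketched as follows. The doubling schedule and the integration task are illustrative choices for this sketch, not the patent's specification.

```python
import time

def scalable_benchmark(task, interval_s):
    """Run ever-finer instances of `task` until a fixed time budget
    expires; the rating is the finest resolution level reached
    within the benchmarking interval."""
    deadline = time.monotonic() + interval_s
    resolution = 0
    while True:
        n = 2 ** resolution          # problem size doubles each step
        task(n)
        if time.monotonic() >= deadline:
            break
        resolution += 1
    return resolution                # benchmark rating: degree of progress

# Example scalable task: midpoint-rule integration of x^2 on [0, 1]
def integrate(n):
    h = 1.0 / n
    return sum(((i + 0.5) * h) ** 2 * h for i in range(n))

rating = scalable_benchmark(integrate, interval_s=0.1)
```

    Because every machine gets the same wall-clock interval, faster machines simply reach a finer resolution, which is the inversion of the usual fixed-work, variable-time benchmark design.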

  5. Imaging in the Age of Precision Medicine: Summary of the Proceedings of the 10th Biannual Symposium of the International Society for Strategic Studies in Radiology.

    PubMed

    Herold, Christian J; Lewin, Jonathan S; Wibmer, Andreas G; Thrall, James H; Krestin, Gabriel P; Dixon, Adrian K; Schoenberg, Stefan O; Geckle, Rena J; Muellner, Ada; Hricak, Hedvig

    2016-04-01

    During the past decade, with its breakthroughs in systems biology, precision medicine (PM) has emerged as a novel health-care paradigm. Challenging reductionism and broad-based approaches in medicine, PM is an approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle. It involves integrating information from multiple sources in a holistic manner to achieve a definitive diagnosis, focused treatment, and adequate response assessment. Biomedical imaging and imaging-guided interventions, which provide multiparametric morphologic and functional information and enable focused, minimally invasive treatments, are key elements in the infrastructure needed for PM. The emerging discipline of radiogenomics, which links genotypic information to phenotypic disease manifestations at imaging, should also greatly contribute to patient-tailored care. Because of the growing volume and complexity of imaging data, decision-support algorithms will be required to help physicians apply the most essential patient data for optimal management. These innovations will challenge traditional concepts of health care and business models. Reimbursement policies and quality assurance measures will have to be reconsidered and adapted. In their 10th biannual symposium, which was held in August 2013, the members of the International Society for Strategic Studies in Radiology discussed the opportunities and challenges arising for the imaging community with the transition to PM. This article summarizes the discussions and central messages of the symposium.

  6. Evaluation of elemental status of ancient human bone samples from Northeastern Hungary dated to the 10th century AD by XRF

    NASA Astrophysics Data System (ADS)

    János, I.; Szathmáry, L.; Nádas, E.; Béni, A.; Dinya, Z.; Máthé, E.

    2011-11-01

    The present study is a multielemental analysis of bone samples belonging to skeletal individuals originating from two contemporaneous (10th century AD) cemeteries (Tiszavasvári Nagy-Gyepáros and Nagycserkesz-Nádasibokor sites) in Northeastern Hungary, using the XRF analytical technique. Emitted X-rays were detected in order to determine the elemental composition of the bones and to assess the possible influence of the burial environment on the elemental content of the human skeletal remains. Lumbar vertebral bodies were used for analysis. Applying the ED(P)XRF technique, the concentrations of the following elements were determined: P, Ca, K, Na, Mg, Al, Cl, Mn, Fe, Zn, Br and Sr. The results indicated post mortem mineral exchange between the burial environment (soil) and the bones (e.g. the enhanced levels of Fe and Mn) and pointed to diagenetic alteration processes during burial. However, other elements such as Zn, Sr and Br seemed to have been accumulated during life. On the basis of statistical analysis, no clear separation could be observed between the two excavation sites in their bone elemental concentrations, which indicates similar diagenetic influences and environmental conditions. The enhanced levels of Sr might be connected with past dietary habits, especially the consumption of plant food.

  7. The 10th GCC Closed Forum: rejected data, GCP in bioanalysis, extract stability, BAV, processed batch acceptance, matrix stability, critical reagents, ELN and data integrity and counteracting fraud.

    PubMed

    Cape, Stephanie; Islam, Rafiq; Nehls, Corey; Allinson, John; Safavi, Afshin; Bennett, Patrick; Hulse, James; Beaver, Chris; Khan, Masood; Karnik, Shane; Caturla, Maria Cruz; Lowes, Steve; Iordachescu, Adriana; Silvestro, Luigi; Tayyem, Rabab; Shoup, Ron; Mowery, Stephanie; Keyhani, Anahita; Wakefield, Andrea; Li, Yinghe; Zimmer, Jennifer; Torres, Javier; Couerbe, Philippe; Khadang, Ardeshir; Bourdage, James; Hughes, Nicola; Awaiye, Kayode; Matthews, Brent; Fatmi, Saadya; Johnson, Rhonda; Satterwhite, Christina; Yu, Mathilde; Lin, Jenny; Cojocaru, Laura; Fiscella, Michele; Thomas, Eric; Kurylak, Kai; Kamerud, John; Lin, Zhongping John; Garofolo, Wei; Savoie, Natasha; Buonarati, Mike; Boudreau, Nadine; Williard, Clark; Liu, Yansheng; Warrino, Dominic; Kale, Prashant; Adcock, Neil; Shekar, Radha; O'Connor, Edward; Ritzen, Hanna; Sanchez, Christina; Hayes, Roger; Bouhajib, Mohammed; Savu, Simona Rizea; Stouffer, Bruce; Tabler, Edward; Tu, Jing; Briscoe, Chad; der Strate, Barry van; Rhyne, Paul; Conliffe, Phyllis; DuBey, Ira; Yamashita, Jim; Tang, Daniel; Groeber, Elizabeth; Vija, Jenifer; Malone, Michele; Osman, Mohamed

    2017-03-24

    The 10th Global CRO Council (GCC) Closed Forum was held in Orlando, FL, USA on 18 April 2016. In attendance were decision makers from international CRO member companies offering bioanalytical services. The objective of this meeting was for GCC members to meet and discuss scientific and regulatory issues specific to bioanalysis. The issues discussed at this closed forum included reporting data from failed method validation runs, GCP for clinical sample bioanalysis, extracted sample stability, biomarker assay validation, processed batch acceptance criteria, electronic laboratory notebooks and data integrity, Health Canada's Notice regarding replicates in matrix stability evaluations, critical reagents and regulatory approaches to counteract fraud. In order to obtain the pharma perspectives on some of these topics, the first joint CRO-Pharma Scientific Interchange Meeting was held on 12 November 2016, in Denver, Colorado, USA. The five topics discussed at this Interchange meeting were reporting data from failed method validation runs, GCP for clinical sample bioanalysis, extracted sample stability, processed batch acceptance criteria and electronic laboratory notebooks and data integrity. The conclusions from the discussions of these topics at both meetings are included in this report.

  8. Urban and rural infant-feeding practices and health in early medieval Central Europe (9th-10th Century, Czech Republic).

    PubMed

    Kaupová, Sylva; Herrscher, Estelle; Velemínský, Petr; Cabut, Sandrine; Poláček, Lumír; Brůžek, Jaroslav

    2014-12-01

    In the Central European context, the 9th and 10th centuries are well known for rapid cultural and societal changes concerning the development of the economic and political structures of states as well as the adoption of Christianity. A bioarchaeological study based on a subadult skeletal series was conducted to tackle the impact of these changes on infant and young child feeding practices and, consequently, their health in both urban and rural populations. Data on growth and frequency of nonspecific stress indicators of a subadult group aged 0-6 years were analyzed. A subsample of 41 individuals was selected for nitrogen and carbon isotope analyses, applying an intra-individual sampling strategy (bone vs. tooth). The isotopic results attest to a mosaic of food behaviors. In the urban sample, some children may have been weaned during their second year of life, while some others may have still been consuming breast milk substantially up to 4-5 years of age. By contrast, data from the rural sample show more homogeneity, with a gradual cessation of breastfeeding starting after the age of 2 years. Several factors are suggested which may have been responsible for applied weaning strategies. There is no evidence that observed weaning strategies affected the level of biological stress which the urban subadult population had to face compared with the rural subadult population.

  9. The Royal Book by Haly Abbas from the 10th century: one of the earliest illustrations of the surgical approach to skull fractures.

    PubMed

    Aciduman, Ahmet; Arda, Berna; Kahya, Esin; Belen, Deniz

    2010-12-01

    Haly Abbas was one of the pioneering physicians and surgeons of the Eastern world in the 10th century who influenced the Western world through his monumental work, The Royal Book. The book was first partly translated into Latin by Constantinus Africanus in the 11th century without citing the author's name. Haly Abbas was recognized in Europe after the full translation of The Royal Book by Stephen of Antioch in 1127. The Royal Book has been accepted as an early source of jerrah-names (surgical books) in the Eastern world. The chapters on cranial fractures in Haly Abbas' work include management strategies unique for his period, with essential quotations from Paul of Aegina's work Epitome. Both authors preferred free bone flap craniotomy for cranial fractures. While Paul of Aegina, a Byzantine physician and surgeon, was a connection between ancient traditions and Islamic interpretation, Haly Abbas seems to have played a bridging role between the Roman-Byzantine tradition and the School of Salerno in Europe.

  10. Comparison of Dawn and Dusk Precipitating Electron Energy Populations Shortly After the Initial Shock for the January 10th, 1997 Magnetic Cloud

    NASA Technical Reports Server (NTRS)

    Spann, J.; Germany, G.; Swift, W.; Parks, G.; Brittnacher, M.; Elsen, R.

    1997-01-01

    The observed precipitating electron energy between 0130 UT and 0400 UT on January 10th, 1997, indicates that a more energetic precipitating electron population appears in the auroral oval at 1800-2200 MLT at 0300 UT. This increase in energy occurs after the initial shock of the magnetic cloud reaches the Earth (0114 UT) and after faint but dynamic polar cap precipitation has been cleared out. The more energetic population is observed to remain rather constant in MLT through the onset of auroral activity (0330 UT) and to the end of the Polar spacecraft apogee pass. Data from the Ultraviolet Imager LBH-long and LBH-short images are used to quantify the average energy of the precipitating auroral electrons. The Wind spacecraft, located about 100 RE upstream, monitored the IMF and plasma parameters during the passage of the cloud. The effects of oblique-angle viewing are included in the analysis. Suggestions as to the source of this hot electron population will be presented.

  11. World History, Culture, and Geography: The Modern World. Course Models for the History-Social Science Framework, Grade 10.

    ERIC Educational Resources Information Center

    Prescott, Stephanie, Ed.; And Others

    This resource book is designed to assist teachers in implementing California's history-social science framework at the 10th grade level. The models support implementation at the local level and may be used to plan topics and select resources for professional development and preservice education. This document provides a link between the…

  12. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka Nickel and Aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different (MCNP uses ENDF/B-VI 1.1 while COG uses ENDF/B-VI R7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  13. Reconceptualizing Benchmarks for Residency Training

    PubMed Central

    2017-01-01

    Postgraduate medical education (PGME) is currently transitioning to a competency-based framework. This model clarifies the desired outcome of residency training - competence. However, since the popularization of Ericsson's work on the effect of time and deliberate practice on performance level, his findings have been applied in some areas of residency training. Though this may be grounded in a noble effort to maximize patient well-being, it imposes unrealistic expectations on trainees. This work aims to demonstrate the fundamental flaws of this application and therefore the lack of validity in using Ericsson's work to develop training benchmarks at the postgraduate level as well as expose potential harms in doing so.

  14. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  15. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase with bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  16. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  17. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: improve the management and technical development of software-intensive systems; have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R&D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  18. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  19. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  20. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  1. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communication is required for the spherical harmonic expansions used to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and one with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks: a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment. We perform these
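    The accuracy criterion described above (less than 1% difference from the suggested solution) amounts to a relative-difference check. A minimal sketch; the numerical values below are illustrative, not the published benchmark solutions:

```python
def within_tolerance(code_value, reference_value, tol=0.01):
    """Accuracy-benchmark check: is a code's result within `tol`
    (default 1%) relative difference of the community reference
    solution, in the spirit of Christensen et al. (2001)?"""
    return abs(code_value - reference_value) <= tol * abs(reference_value)

# Hypothetical: a code reports mean kinetic energy 58.6 against a
# reference value of 58.3 (illustrative numbers only)
ok = within_tolerance(58.6, 58.3)
```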

  2. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently, models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  3. Comparison of problematic behaviours of 10th and 11th year Southern English adolescents. Part 2: Current drink, drug and sexual activity of children with smoking parents.

    PubMed

    Cox, Malcolm; Pritchard, Colin

    2007-01-01

    To determine parental and school influences upon the behaviour and attitudes of adolescents of smoking versus non-smoking parents, and of those "liking" and "disliking" school. Utilising a self-administered confidential standardised questionnaire, a representative sample of Southern English 10th and 11th year secondary school pupils was obtained. Current drink, drug and sexual behaviour were explored, and data on adolescents whose parents smoked were extracted and compared against adolescents of non-smoking parents. Pupils reporting "liking school" were compared against those "not liking school", and all results were statistically analysed. There were 17% smoking mothers [SM] and 23% smoking fathers [SF]. The focus is upon students of SF, whose adolescents were significantly more often engaged in substance misuse (38% vs 18%), drinking in pubs (31% vs 15%), binge drinking (32% vs 18%), and under-age sexual activity (27% vs 14%), plus smoking (51% vs 32%), truanting (43% vs 23%), vandalism (32% vs 22%) and stealing (19% vs 11%). SM students had a higher incidence of sexual behaviour (33% vs 13%) and unprotected sex (21% vs 6%). Students of smoking parents were less well informed and had significantly more negative attitudes about social behaviour and responsibility. "Liking school" was associated with significantly lower rates of problematic behaviour, which was predominantly not related to the social background of the pupils. The smoking-father criterion carries a social-class bias; nonetheless, these parents need to be aware of the particular behaviour of their children and their increased risk. SF do not "cause" the behaviour; rather, it reflects something of the nature of the adolescent's relationship to parents, school and society.

  4. Early fetal gender determination using real-time PCR analysis of cell-free fetal DNA during 6th-10th weeks of gestation.

    PubMed

    Khorram Khorshid, Hamid Reza; Zargari, Maryam; Sadeghi, Mohammad Reza; Edallatkhah, Haleh; Shahhosseiny, Mohammad Hassan; Kamali, Koorosh

    2013-05-07

Nowadays, new advances in the use of cell-free fetal DNA (cffDNA) in the maternal plasma of pregnant women have provided the possibility of applying cffDNA in prenatal diagnosis as a non-invasive method. In contrast to invasive methods, whose risks affect both mother and fetus, applying cffDNA has proven to be highly effective with lower risk. One application of prenatal diagnosis is fetal gender determination, which is important for fetuses at risk of sex-linked genetic diseases. In such cases, obtaining this basic information about gender early allows therapeutic decisions to be planned in time and significantly reduces the need for invasive methods. Therefore, in this study the possibility of detecting sequences of the human Y-chromosome in pregnant women was evaluated to identify the gender of fetuses. Peripheral blood samples were obtained from 80 pregnant women with gestational ages between the 6th and 10th weeks, and the fetal DNA was extracted from the plasma. Identification of the SRY, DYS14 and DAZ sequences, which are not present in the maternal genome, was performed using real-time PCR. All results were compared with the actual gender of the newborns to calculate the test accuracy. A sensitivity of 97.3% and a specificity of 97.3% were obtained in fetal gender determination, which is significant in the first trimester of pregnancy. A false positive result was obtained in only one case. Using the non-invasive cffDNA method in the shortest time possible, as well as avoiding invasive tests for early determination of fetal gender, provides the opportunity to decide on and employ early treatment for fetuses at risk of genetic diseases.
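The accuracy figures quoted above reduce to two standard confusion-matrix quantities. A minimal sketch follows; the counts are invented, chosen only so the arithmetic reproduces the reported 97.3% figures (the study's raw tallies are not given here):

```python
# Hypothetical illustration: sensitivity and specificity for a binary test
# such as Y-chromosome detection in maternal plasma. The confusion counts
# below are invented for the sketch, not the study's raw data.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true positives among all actual positives."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of true negatives among all actual negatives."""
    return tn / (tn + fp)

# Example confusion counts (hypothetical):
tp, fn = 36, 1   # male fetuses correctly / incorrectly called
tn, fp = 36, 1   # female fetuses correctly / incorrectly called

print(f"sensitivity = {sensitivity(tp, fn):.3f}")
print(f"specificity = {specificity(tn, fp):.3f}")
```

With these counts both ratios come out to 36/37 ≈ 0.973, matching the abstract's percentages.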

  5. Using Self-Assembled Monolayers to Model Cell Adhesion to the 9th and 10th Type III Domains of Fibronectin†

    PubMed Central

    2009-01-01

Most mammalian cells must adhere to the extracellular matrix (ECM) to maintain proper growth and development. Fibronectin is a predominant ECM protein that engages integrin cell receptors through its Arg-Gly-Asp (RGD) and Pro-His-Ser-Arg-Asn (PHSRN) peptide binding sites. To study the roles these motifs play in cell adhesion, proteins derived from the 9th (containing PHSRN) and 10th (containing RGD) type III fibronectin domains were engineered to be in frame with cutinase, a serine esterase that forms a site-specific, covalent adduct with phosphonate ligands. Self-assembled monolayers (SAMs) that present phosphonate ligands against an inert background of tri(ethylene glycol) groups were used as model substrates to immobilize the cutinase-fibronectin fusion proteins. Baby hamster kidney cells attached efficiently to all protein surfaces, but only spread efficiently on protein monolayers containing the RGD peptide. Cells on RGD-containing protein surfaces also displayed defined focal adhesions and organized cytoskeletal structures compared to cells on PHSRN-presenting surfaces. Cell attachment and spreading were shown to be unaffected by the presence of PHSRN when compared to RGD alone on SAMs presenting higher densities of protein, but PHSRN supported an increased efficiency in cell attachment when presented at low protein densities with RGD. Treatment of suspended cells with soluble RGD or PHSRN peptides revealed that both peptides were able to inhibit the attachment of cells to FN10 surfaces. These results support a model wherein PHSRN and RGD bind competitively to integrins, rather than through a two-point synergistic interaction, and the presence of PHSRN serves to increase the density of ligand on the substrate and therefore enhance the sticking probability of cells during attachment. PMID:20560553

  6. IBC’s 23rd Annual Antibody Engineering, 10th Annual Antibody Therapeutics International Conferences and the 2012 Annual Meeting of The Antibody Society

    PubMed Central

    Klöhn, Peter-Christian; Wuellner, Ulrich; Zizlsperger, Nora; Zhou, Yu; Tavares, Daniel; Berger, Sven; Zettlitz, Kirstin A.; Proetzel, Gabriele; Yong, May; Begent, Richard H.J.; Reichert, Janice M

    2013-01-01

    The 23rd Annual Antibody Engineering, 10th Annual Antibody Therapeutics international conferences, and the 2012 Annual Meeting of The Antibody Society, organized by IBC Life Sciences with contributions from The Antibody Society and two Scientific Advisory Boards, were held December 3–6, 2012 in San Diego, CA. The meeting drew over 800 participants who attended sessions on a wide variety of topics relevant to antibody research and development. As a prelude to the main events, a pre-conference workshop held on December 2, 2012 focused on intellectual property issues that impact antibody engineering. The Antibody Engineering Conference was composed of six sessions held December 3–5, 2012: (1) From Receptor Biology to Therapy; (2) Antibodies in a Complex Environment; (3) Antibody Targeted CNS Therapy: Beyond the Blood Brain Barrier; (4) Deep Sequencing in B Cell Biology and Antibody Libraries; (5) Systems Medicine in the Development of Antibody Therapies/Systematic Validation of Novel Antibody Targets; and (6) Antibody Activity and Animal Models. The Antibody Therapeutics conference comprised four sessions held December 4–5, 2012: (1) Clinical and Preclinical Updates of Antibody-Drug Conjugates; (2) Multifunctional Antibodies and Antibody Combinations: Clinical Focus; (3) Development Status of Immunomodulatory Therapeutic Antibodies; and (4) Modulating the Half-Life of Antibody Therapeutics. The Antibody Society’s special session on applications for recording and sharing data based on GIATE was held on December 5, 2012, and the conferences concluded with two combined sessions on December 5–6, 2012: (1) Development Status of Early Stage Therapeutic Antibodies; and (2) Immunomodulatory Antibodies for Cancer Therapy. PMID:23575266

  7. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  8. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4

    SciTech Connect

    Ellis, RJ

    2001-02-02

    The Task Force on Reactor-Based Plutonium Disposition, now an Expert Group, was set up through the Organization for Economic Cooperation and Development/Nuclear Energy Agency to facilitate technical assessments of burning weapons-grade plutonium mixed-oxide (MOX) fuel in U.S. pressurized-water reactors and Russian VVER nuclear reactors. More than ten countries participated to advance the work of the Task Force in a major initiative, which was a blind benchmark study to compare code benchmark calculations against experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At the Oak Ridge National Laboratory, the HELIOS-1.4 code was used to perform a comprehensive study of pin-cell and core calculations for the VENUS-2 benchmark.

  9. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  10. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

The effective I/O bandwidth benchmark (b_eff_io) covers two goals: (1) to achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines "first write", "rewrite" and "read" access; strided (individual and shared pointers) and segmented collective patterns on one file per application; and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed I/O. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff) that characterizes the message-passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel POSIX I/O.
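As a loose illustration of the pattern-and-buffer-length idea only (this is not b_eff_io itself, which is a parallel MPI-I/O benchmark), a serial sketch that times write and read passes for several buffer lengths and reports an effective bandwidth per length:

```python
# Serial sketch: estimate write/read bandwidth for several buffer lengths.
# File names and sizes are arbitrary choices for the illustration.
import os
import tempfile
import time

def bandwidth_mb_s(buf_len: int, n_bufs: int) -> dict:
    data = b"x" * buf_len
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        t0 = time.perf_counter()
        for _ in range(n_bufs):        # "first write" pass
            f.write(data)
        f.flush()
        os.fsync(f.fileno())           # force data to disk before timing stops
        write_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    with open(path, "rb") as f:        # "read" pass with the same buffer length
        while f.read(buf_len):
            pass
    read_s = time.perf_counter() - t0
    os.remove(path)
    total_mb = buf_len * n_bufs / 1e6
    return {"write_MB_s": total_mb / write_s, "read_MB_s": total_mb / read_s}

for buf_len in (1 << 10, 1 << 16, 1 << 20):   # 1 KiB, 64 KiB, 1 MiB buffers
    print(buf_len, bandwidth_mb_s(buf_len, n_bufs=64))
```

Small buffers typically show much lower effective bandwidth, which is the kind of per-pattern detail the benchmark is designed to expose.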

  11. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.
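The TKO idea can be sketched in a few lines: simulate spreading over a time-ordered contact list, then score each node by how much the outbreak shrinks when that node is knocked out. The toy edge list, deterministic transmission, and one-step infectious period below are simplifying assumptions of this sketch, not the paper's model:

```python
# Sketch of a temporal-knockout (TKO) style score on a toy temporal network.
# Assumptions: every infectious contact transmits; nodes are infectious for
# exactly one time step (SIR-like); contacts are grouped by time step.

def sir_spread(temporal_edges, seed, removed=None):
    """Final number of nodes ever infected; `removed` node is knocked out."""
    infected = {seed} - {removed}
    ever = set(infected)
    for _, contacts in sorted(temporal_edges.items()):   # time-ordered steps
        new = set()
        for u, v in contacts:
            if removed in (u, v):
                continue
            if u in infected and v not in ever:
                new.add(v)
            if v in infected and u not in ever:
                new.add(u)
        ever |= new
        infected = new                                   # recover after one step
    return len(ever)

def tko_score(temporal_edges, seed, node):
    """Reduction in outbreak size when `node` is removed."""
    return (sir_spread(temporal_edges, seed)
            - sir_spread(temporal_edges, seed, removed=node))

# Toy temporal network: {time step: [(u, v), ...]}
edges = {0: [(0, 1)], 1: [(1, 2), (1, 3)], 2: [(3, 4)]}
for n in range(5):
    print(n, tko_score(edges, seed=0, node=n))
```

Node 1 is the key spreader here: removing it cuts the outbreak from five nodes to one, while removing a leaf node changes it by only one.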

  12. Benchmarking pKa prediction

    PubMed Central

    Davies, Matthew N; Toseland, Christopher P; Moss, David S; Flower, Darren R

    2006-01-01

    Background pKa values are a measure of the protonation of ionizable groups in proteins. Ionizable groups are involved in intra-protein, protein-solvent and protein-ligand interactions as well as solubility, protein folding and catalytic activity. The pKa shift of a group from its intrinsic value is determined by the perturbation of the residue by the environment and can be calculated from three-dimensional structural data. Results Here we use a large dataset of experimentally-determined pKas to analyse the performance of different prediction techniques. Our work provides a benchmark of available software implementations: MCCE, MEAD, PROPKA and UHBD. Combinatorial and regression analysis is also used in an attempt to find a consensus approach towards pKa prediction. The tendency of individual programs to over- or underpredict the pKa value is related to the underlying methodology of the individual programs. Conclusion Overall, PROPKA is more accurate than the other three programs. Key to developing accurate predictive software will be a complete sampling of conformations accessible to protein structures. PMID:16749919
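A benchmark of this kind ultimately ranks programs by an error statistic over the experimental set. A minimal sketch using RMSE, with invented predictions and experimental values (the program names are placeholders, not the packages above):

```python
# Rank pKa predictors by root-mean-square error against experiment.
# All numbers below are invented for illustration.
import math

def rmse(predicted, experimental):
    pairs = list(zip(predicted, experimental))
    return math.sqrt(sum((p - e) ** 2 for p, e in pairs) / len(pairs))

experimental = [4.0, 6.5, 10.4, 3.9]          # e.g. Asp, His, Lys, Glu sites
programs = {
    "ProgramA": [4.4, 6.0, 10.9, 3.5],
    "ProgramB": [5.2, 7.8, 9.1, 2.8],
}
for name in sorted(programs, key=lambda k: rmse(programs[k], experimental)):
    print(f"{name}: RMSE = {rmse(programs[name], experimental):.2f}")
```

A consensus approach of the sort the abstract mentions would combine or regress over several such prediction columns before scoring.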

  13. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is time required for sound wave to traverse one end of nozzle to other end).

  14. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations, as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi one dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10(exp -6). The small amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is time required for sound wave to traverse one end of nozzle to other end).

  15. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635

  16. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment while using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed. PMID:27304891

  17. Benchmarking for Bayesian Reinforcement Learning.

    PubMed

    Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the collected rewards while interacting with their environment while using some prior knowledge that is accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem, and provides a new BRL comparison methodology along with the corresponding open source library. In this methodology, a comparison criterion that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions is defined. In order to enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirement of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each of which has two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms and the results are discussed.
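The core of such a comparison methodology, stripped to a sketch: draw many problems from a prior distribution, run each agent on every draw, and compare mean collected reward. The two-armed Bernoulli bandit below stands in for a full MDP and is an assumption of this sketch, not one of the paper's test problems:

```python
# Toy sketch of prior-based benchmarking (not the paper's library):
# sample problems from a prior, score agents by mean collected reward.
import random

def sample_problem(rng):
    """Prior over problems: two arm success probabilities ~ Uniform(0, 1)."""
    return [rng.random(), rng.random()]

def run_agent(arms, epsilon, horizon, rng):
    """Epsilon-greedy agent; returns total reward collected over the horizon."""
    counts, sums, total = [0, 0], [0.0, 0.0], 0.0
    for _ in range(horizon):
        if rng.random() < epsilon or 0 in counts:
            a = rng.randrange(2)                                  # explore
        else:
            a = max((0, 1), key=lambda i: sums[i] / counts[i])    # exploit
        r = 1.0 if rng.random() < arms[a] else 0.0
        counts[a] += 1
        sums[a] += r
        total += r
    return total

def benchmark(epsilon, n_problems=200, horizon=100, seed=0):
    rng = random.Random(seed)
    return sum(run_agent(sample_problem(rng), epsilon, horizon, rng)
               for _ in range(n_problems)) / n_problems

for eps in (0.0, 0.1, 0.5):
    print(f"epsilon={eps}: mean reward = {benchmark(eps):.1f}")
```

The paper's methodology adds what this sketch omits: full MDP priors, an anytime-aware accounting of computation time, and a released library of algorithms and test problems.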

  18. The Federal Forecasters Conference--1999. Papers and Proceedings (10th, Washington, DC, June 24, 1999) and Selected Papers from the International Symposium on Forecasting (19th, Washington, DC, June 27-30, 1999).

    ERIC Educational Resources Information Center

    Gerald, Debra E., Ed.

    The 10th Federal Forecasters Conference provided a forum where 127 forecasters from different federal agencies and other organizations met to discuss various aspects of the conference's theme, "Forecasting in the New Millennium," that could be applied in the United States. A keynote address, "Procedures for Auditing Federal Forecasts" by J. Scott…

  19. The Internet Time Lag: Anticipating the Long-Term Consequences of the Information Revolution. A Report of the Annual Aspen Institute Roundtable on Information Technology (10th, Aspen, Colorado, August 2-5, 2001).

    ERIC Educational Resources Information Center

    Schwartz, Evan I.

    This is a report of the 10th annual Aspen Institute Roundtable on Information Technology (Aspen, Colorado, August 2-5, 2001). Participants were also polled after the events of September 11, and these comments have been integrated into the report. The mission of this report is to take a wide-ranging look at the trends that are defining the next new…

  20. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying exposure and outcome factors in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  1. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
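Applying the two proposed cut points is a one-line classification; a sketch using the thresholds stated in the abstract (<6 s, 6-7.99 s, ≥8 s):

```python
# Classify Timed 25-Foot Walk (T25FW) times into the three benchmark bands
# proposed in the abstract.

def t25fw_benchmark(seconds: float) -> str:
    if seconds < 6.0:
        return "<6 s"
    if seconds < 8.0:
        return "6-7.99 s"
    return ">=8 s"

for t in (4.2, 6.0, 7.99, 8.0, 12.5):
    print(t, "->", t25fw_benchmark(t))
```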

  2. Standing adult human phantoms based on 10th, 50th and 90th mass and height percentiles of male and female Caucasian populations

    NASA Astrophysics Data System (ADS)

    Cassola, V. F.; Milian, F. M.; Kramer, R.; de Oliveira Lira, C. A. B.; Khoury, H. J.

    2011-07-01

Computational anthropomorphic human phantoms are useful tools developed for the calculation of absorbed or equivalent dose to radiosensitive organs and tissues of the human body. The problem is, however, that, strictly speaking, the results can be applied only to a person who has the same anatomy as the phantom, while for a person with different body mass and/or standing height the data could be wrong. In order to improve this situation for many areas in radiological protection, this study developed 18 anthropometric standing adult human phantoms, nine models per gender, as a function of the 10th, 50th and 90th mass and height percentiles of Caucasian populations. The anthropometric target parameters for body mass, standing height and other body measures were extracted from PeopleSize, a well-known software package used in the area of ergonomics. The phantoms were developed based on the assumption of a constant body-mass index for a given mass percentile and for different heights. For a given height, increase or decrease of body mass was considered to reflect mainly the change of subcutaneous adipose tissue mass, i.e. that organ masses were not changed. Organ mass scaling as a function of height was based on information extracted from autopsy data. The methods used here were compared with those used in other studies, anatomically as well as dosimetrically. For external exposure, the results show that equivalent dose decreases with increasing body mass for organs and tissues located below the subcutaneous adipose tissue layer, such as liver, colon, stomach, etc, while for organs located at the surface, such as breasts, testes and skin, the equivalent dose increases or remains constant with increasing body mass due to weak attenuation and more scatter radiation caused by the increasing adipose tissue mass. Changes of standing height have little influence on the equivalent dose to organs and tissues from external exposure. Specific absorbed fractions (SAFs) have also

  3. IBC's 23rd Antibody Engineering and 10th Antibody Therapeutics Conferences and the Annual Meeting of The Antibody Society: December 2-6, 2012, San Diego, CA.

    PubMed

    Marquardt, John; Begent, Richard H J; Chester, Kerry; Huston, James S; Bradbury, Andrew; Scott, Jamie K; Thorpe, Philip E; Veldman, Trudi; Reichert, Janice M; Weiner, Louis M

    2012-01-01

    Now in its 23rd and 10th years, respectively, the Antibody Engineering and Antibody Therapeutics conferences are the Annual Meeting of The Antibody Society. The scientific program covers the full spectrum of challenges in antibody research and development from basic science through clinical development. In this preview of the conferences, the chairs provide their thoughts on sessions that will allow participants to track emerging trends in (1) the development of next-generation immunomodulatory antibodies; (2) the complexity of the environment in which antibodies must function; (3) antibody-targeted central nervous system (CNS) therapies that cross the blood brain barrier; (4) the extension of antibody half-life for improved efficacy and pharmacokinetics (PK)/pharmacodynamics (PD); and (5) the application of next generation DNA sequencing to accelerate antibody research. A pre-conference workshop on Sunday, December 2, 2012 will update participants on recent intellectual property (IP) law changes that affect antibody research, including biosimilar legislation, the America Invents Act and recent court cases. Keynote presentations will be given by Andreas Plückthun (University of Zürich), who will speak on engineering receptor ligands with powerful cellular responses; Gregory Friberg (Amgen Inc.), who will provide clinical updates of bispecific antibodies; James D. Marks (University of California, San Francisco), who will discuss a systems approach to generating tumor targeting antibodies; Dario Neri (Swiss Federal Institute of Technology Zürich), who will speak about delivering immune modulators at the sites of disease; William M. Pardridge (University of California, Los Angeles), who will discuss delivery across the blood-brain barrier; and Peter Senter (Seattle Genetics, Inc.), who will present his vision for the future of antibody-drug conjugates. For more information on these meetings or to register to attend, please visit www

  4. New archeointensity data from French Early Medieval pottery production (6th-10th century AD). Tracing 1500 years of geomagnetic field intensity variations in Western Europe

    NASA Astrophysics Data System (ADS)

    Genevey, Agnès; Gallet, Yves; Jesset, Sébastien; Thébault, Erwan; Bouillon, Jérôme; Lefèvre, Annie; Le Goff, Maxime

    2016-08-01

    Nineteen new archeointensity results were obtained from the analysis of groups of French pottery fragments dated to the Early Middle Ages (6th to 10th centuries AD). They are from several medieval ceramic production sites, excavated mainly in Saran (Central France), and their precise dating was established based on typo-chronological characteristics. Intensity measurements were performed using the Triaxe protocol, which takes into account the effects on the intensity determinations of both thermoremanent magnetization anisotropy and cooling rate. Intensity analyses were also carried out on modern pottery produced at Saran during an experimental firing. The results show very good agreement with the geomagnetic field intensity directly measured inside and around the kiln, thus reasserting the reliability of the Triaxe protocol and the relevance of the quality criteria used. They further demonstrate the potential of the Saran pottery production for archeomagnetism. The new archeointensity results allow a precise and coherent description of the geomagnetic field intensity variations in Western Europe during the Early Medieval period, which was until now poorly documented. They show a significant increase in intensity during the 6th century AD, high intensity values from the 7th to the 9th century, with a minimum of small amplitude at the transition between the 7th and the 8th centuries and finally an important decrease until the beginning of the 11th century. Together with published intensity results available within a radius of 700 km around Paris, the new data were used to compute a master curve of the Western European geomagnetic intensity variations over the past 1500 years. This curve clearly exhibits five intensity maxima: at the transition between the 6th and 7th century AD, at the middle of the 9th century, during the 12th century, in the second part of the 14th century and at the very beginning of the 17th century AD. Some of these peaks are smoothed, or

  5. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  6. Updates to the Integrated Protein-Protein Interaction Benchmarks: Docking Benchmark Version 5 and Affinity Benchmark Version 2.

    PubMed

    Vreven, Thom; Moal, Iain H; Vangone, Anna; Pierce, Brian G; Kastritis, Panagiotis L; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A; Fernandez-Recio, Juan; Bonvin, Alexandre M J J; Weng, Zhiping

    2015-09-25

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top 10 docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall and r=0.72 for the rigid complexes.
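The reported correlations (r=0.52 overall, r=0.72 for rigid complexes) are ordinary Pearson coefficients between predicted affinity scores and experimental binding energies. A minimal sketch of that comparison, using made-up affinity values rather than any actual benchmark data:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical predicted scores vs. experimental binding energies (kcal/mol)
predicted = [-8.1, -9.5, -7.2, -10.3, -6.8]
experimental = [-7.9, -10.1, -6.5, -11.0, -7.5]
print(f"r = {pearson_r(predicted, experimental):.2f}")
```

Values near 1 indicate that the prediction preserves the experimental ranking of binding strengths.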

  7. The skyshine benchmark experiment revisited.

    PubMed

    Terry, Ian R

    2005-01-01

    With the coming renaissance of nuclear power, heralded by new nuclear power plant construction in Finland, the issue of qualifying modern tools for calculation becomes prominent. Among the calculations required may be the determination of radiation levels outside the plant owing to skyshine. For example, knowledge of the degree of accuracy in the calculation of gamma skyshine through the turbine hall roof of a BWR plant is important. Modern survey programs which can calculate skyshine dose rates tend to be qualified only by verification with the results of Monte Carlo calculations. However, in the past, exacting experimental work has been performed in the field for gamma skyshine, notably the benchmark work in 1981 by Shultis and co-workers, which considered not just the open source case but also the effects of placing a concrete roof above the source enclosure. The latter case is a better reflection of reality as safety considerations nearly always require the source to be shielded in some way, usually by substantial walls but only a thinner roof. One of the tools developed since that time, which can both calculate skyshine radiation and accurately model the geometrical set-up of an experiment, is the code RANKERN, which is used by Framatome ANP and other organisations for general shielding design work. The following description concerns the use of this code to re-address the experimental results from 1981. This then provides a realistic gauge to validate, but also to set limits on, the program for future gamma skyshine applications within the applicable licensing procedures for all users of the code.

  8. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  9. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  10. Social benchmarking to improve river ecosystems.

    PubMed

    Cary, John; Pisarski, Anne

    2011-01-01

    To complement physical measures or indices of river health, a social benchmarking instrument has been developed to measure community dispositions and behaviour regarding river health. This instrument seeks to achieve three outcomes. First, to provide a benchmark of the social condition of communities' attitudes, values, understanding and behaviours in relation to river health; second, to provide information for developing management and educational priorities; and third, to provide an assessment of the long-term effectiveness of community education and engagement activities in achieving changes in attitudes, understanding and behaviours in relation to river health. In this paper the development of the social benchmarking instrument is described and results are presented from the first state-wide benchmark study in Victoria, Australia, in which the social dimensions of river health, community behaviours related to rivers, and community understanding of human impacts on rivers were assessed.

  11. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D 2O, H 2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257

  12. Public Relations in Accounting: A Benchmark Study.

    ERIC Educational Resources Information Center

    Pincus, J. David; Pincus, Karen V.

    1987-01-01

    Reports on a national study of one segment of the professional services market: the accounting profession. Benchmark data on CPA firms' attitudes toward and uses of public relations are presented and practical and theoretical/research issues are discussed. (JC)

  13. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  14. Aquatic Life Benchmarks for Pesticide Registration

    EPA Pesticide Factsheets

    Each Aquatic Life Benchmark is based on the most sensitive, scientifically acceptable toxicity endpoint available to EPA for a given taxon (for example, freshwater fish) of all scientifically acceptable toxicity data available to EPA.

  15. Consistency and Magnitude of Differences in Reading Curriculum-Based Measurement Slopes in Benchmark versus Strategic Monitoring

    ERIC Educational Resources Information Center

    Mercer, Sterett H.; Keller-Margulis, Milena A.

    2015-01-01

    Differences in oral reading curriculum-based measurement (R-CBM) slopes based on two commonly used progress monitoring practices in field-based data were compared in this study. Semester-specific R-CBM slopes were calculated for 150 Grade 1 and 2 students who completed benchmark (i.e., 3 R-CBM probes collected 3 times per year) and strategic…

  16. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

    PubMed Central

    Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B.; Ayache, Nicholas; Buendia, Patricia; Collins, D. Louis; Cordier, Nicolas; Corso, Jason J.; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R.; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M.; Jena, Raj; John, Nigel M.; Konukoglu, Ender; Lashkari, Danial; Mariz, José António; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J.; Raviv, Tammy Riklin; Reza, Syed M. S.; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A.; Sousa, Nuno; Subbanna, Nagesh K.; Szekely, Gabor; Taylor, Thomas J.; Thomas, Owen M.; Tustison, Nicholas J.; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2016-01-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients—manually annotated by up to four raters—and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%–85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource. PMID:25494501

  17. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).

    PubMed

    Menze, Bjoern H; Jakab, Andras; Bauer, Stefan; Kalpathy-Cramer, Jayashree; Farahani, Keyvan; Kirby, Justin; Burren, Yuliya; Porz, Nicole; Slotboom, Johannes; Wiest, Roland; Lanczi, Levente; Gerstner, Elizabeth; Weber, Marc-André; Arbel, Tal; Avants, Brian B; Ayache, Nicholas; Buendia, Patricia; Collins, D Louis; Cordier, Nicolas; Corso, Jason J; Criminisi, Antonio; Das, Tilak; Delingette, Hervé; Demiralp, Çağatay; Durst, Christopher R; Dojat, Michel; Doyle, Senan; Festa, Joana; Forbes, Florence; Geremia, Ezequiel; Glocker, Ben; Golland, Polina; Guo, Xiaotao; Hamamci, Andac; Iftekharuddin, Khan M; Jena, Raj; John, Nigel M; Konukoglu, Ender; Lashkari, Danial; Mariz, José Antonió; Meier, Raphael; Pereira, Sérgio; Precup, Doina; Price, Stephen J; Raviv, Tammy Riklin; Reza, Syed M S; Ryan, Michael; Sarikaya, Duygu; Schwartz, Lawrence; Shin, Hoo-Chang; Shotton, Jamie; Silva, Carlos A; Sousa, Nuno; Subbanna, Nagesh K; Szekely, Gabor; Taylor, Thomas J; Thomas, Owen M; Tustison, Nicholas J; Unal, Gozde; Vasseur, Flor; Wintermark, Max; Ye, Dong Hye; Zhao, Liang; Zhao, Binsheng; Zikic, Darko; Prastawa, Marcel; Reyes, Mauricio; Van Leemput, Koen

    2015-10-01

    In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients-manually annotated by up to four raters-and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
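The Dice overlap scores and hierarchical majority-vote fusion described above can be sketched for binary masks. The voxel sets and three-rater example below are illustrative inventions, not BRATS data:

```python
def dice(a, b):
    """Dice overlap between two binary segmentations given as voxel-index sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

def majority_vote(segmentations):
    """Fuse binary segmentations: keep voxels labeled by a strict majority of raters."""
    counts = {}
    for seg in segmentations:
        for voxel in seg:
            counts[voxel] = counts.get(voxel, 0) + 1
    return {v for v, c in counts.items() if c > len(segmentations) / 2}

# Three hypothetical rater masks for one tumor sub-region
raters = [{1, 2, 3}, {2, 3, 4}, {2, 3, 5}]
fused = majority_vote(raters)  # voxels 2 and 3 appear in all three masks
print(dice(fused, {2, 3, 4}))
```

The fused mask tends to score at least as well against any single reference as the individual inputs do, which is the intuition behind the consistently high ranking of the fused segmentations.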

  18. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.
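The proposed metrics, mismatch measures combined with a priori acceptability thresholds and a scoring system, might be sketched as follows. The RMSE-based score and the weighting scheme are illustrative assumptions, not the framework's actual formulas:

```python
from math import sqrt

def rmse(model, obs):
    """Root-mean-square mismatch between model output and a benchmark dataset."""
    return sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def skill_score(model, obs, threshold):
    """Map the mismatch onto [0, 1]: 1 is a perfect match, 0 means the mismatch
    is at or beyond the a priori acceptability threshold."""
    return max(0.0, 1.0 - rmse(model, obs) / threshold)

def combined_score(scores, weights):
    """Combine per-process scores (e.g. carbon, water, energy) into one number."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)
```

Per-process scores at different temporal and spatial scales would be combined with the same weighted-mean pattern.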

  19. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.

  20. The MCNP6 Analytic Criticality Benchmark Suite

    SciTech Connect

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.

  1. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. This specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources needed, and describes what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  2. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  3. Benchmarking Attosecond Physics with Atomic Hydrogen

    DTIC Science & Technology

    2015-05-25

    [Report documentation page residue; recoverable details:] Final report, dates covered 12 Mar 12 – 11 Mar 15. Title: Benchmarking attosecond physics with atomic hydrogen. Contract number FA2386-12-1-4025. Abstract: The research team obtained uniquely reliable reference data on atomic interactions with intense few-cycle laser pulses.

  4. VENUS-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The VENUS critical facility is a zero-power reactor located at SCK-CEN, Mol, Belgium, which for the VENUS-2 experiment utilized a mixed-oxide core with near-weapons-grade plutonium. In addition to the VENUS-2 core, computational variants based on each fuel type of the VENUS-2 core (3.3 wt.% UO2, 4.0 wt.% UO2, and 2.0/2.7 wt.% MOX) were also calculated. The VENUS-2 critical configuration and cell variants have been calculated with MCU-REA, a continuous-energy Monte Carlo code system developed at the Russian Research Center "Kurchatov Institute" and used extensively in the Fissile Materials Disposition Program. The calculations resulted in a keff of 0.99652 ± 0.00025 and relative pin powers within 2% of the experimental values for UO2 pins and 3% for MOX pins.

  5. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  6. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize the machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray-X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
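The prediction step, combining a machine characterization with a program characterization, amounts to a weighted sum of operation counts and per-operation timings. A toy sketch with invented numbers, not measurements from any of the machines listed:

```python
# Hypothetical per-operation timings (seconds) from a machine analyzer,
# and operation counts from a program analyzer.
machine_params = {"flop": 2.0e-8, "mem_ref": 1.5e-8, "branch": 5.0e-9}
program_counts = {"flop": 4.0e9, "mem_ref": 2.5e9, "branch": 1.0e9}

def predict_runtime(machine, program):
    """Predicted run time = sum over operation types of (count x per-op time)."""
    return sum(program[op] * machine.get(op, 0.0) for op in program)

print(f"predicted run time: {predict_runtime(machine_params, program_counts):.1f} s")
```

Swapping in a different machine's parameter vector, with the same program counts, yields a prediction for that machine without rerunning the benchmark.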

  7. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
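The performance metrics such an infrastructure computes over system and gold-standard annotation sets are standard precision, recall, and F1, sketched here in plain Python rather than SPARQL; the mutation identifiers are hypothetical examples:

```python
def prf(gold, predicted):
    """Precision, recall, and F1 of predicted annotations against a gold standard."""
    tp = len(gold & predicted)  # true positives: annotations found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical mutation mentions extracted from one document
gold = {"V600E", "T790M", "L858R"}
predicted = {"V600E", "T790M", "G12D"}
print(prf(gold, predicted))
```

In the RDF setting the same counts would come from SPARQL aggregate queries over annotation triples, which is why no programming is needed to score a system.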

  8. Pollution prevention opportunity assessment benchmarking: Recommendations for Hanford

    SciTech Connect

    Engel, J.A.

    1994-05-01

    Pollution Prevention Opportunity Assessments (P2OAs) are an important first step in any pollution prevention program. While P2OAs have been and are being conducted at Hanford, there exists no standard guidance, training, tracking, or systematic approach to identifying and addressing the most important waste streams. The purpose of this paper, then, is to serve as a guide for the Pollution Prevention group at Westinghouse Hanford in developing and implementing P2OAs at Hanford. By searching the literature and benchmarking other sites and agencies, the best elements from those programs can be incorporated and pitfalls more easily avoided. This search began with the 1988 document that introduced P2OAs (then called Process Waste Assessments, PWAs) by the Environmental Protection Agency. This important document presented the basic framework of P2OA features which appeared in almost all later programs. Major Department of Energy programs were also examined, with particular attention to the Defense Programs P2OA method of a graded approach, as presented at the Kansas City Plant. The graded approach is a system of conducting P2OAs of varying levels of detail depending on the size and importance of the waste stream. Finally, private industry programs were examined briefly. While all the benchmarked programs had excellent features, it was determined that the size and mission of Hanford precluded lifting any one program for use. Thus, a series of recommendations was made, based on the literature review, in order to begin an extensive program of P2OAs at Hanford. These recommendations are in the areas of: facility Pollution Prevention teams, P2OA scope and methodology, guidance documents, training for facilities (and management), technical and informational support, tracking and measuring success, and incentives.

  9. Implementing Guided Reading Strategies with Kindergarten and First Grade Students

    ERIC Educational Resources Information Center

    Abbott, Lindsey; Dornbush, Abby; Giddings, Anne; Thomas, Jennifer

    2012-01-01

    In the action research project report, the teacher researchers found that many kindergarten and first-grade students did not have the reading readiness skills to be reading at their benchmark target. The purpose of the project was to improve the students' overall reading ability. The project ran from September 8 through December 20,…

  10. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  11. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  12. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  13. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  14. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  15. Impacts of Parental Education on Substance Use: Differences among White, African-American, and Hispanic Students in 8th, 10th, and 12th Grades (1999-2008). Monitoring the Future Occasional Paper Series. Paper No. 70

    ERIC Educational Resources Information Center

    Bachman, Jerald G.; O'Malley, Patrick M.; Johnston, Lloyd D.; Schulenberg, John E.

    2010-01-01

    The Monitoring the Future (MTF) project reports annually on levels and trends in self-reported substance use by secondary school students (e.g., Johnston, O'Malley, Bachman, & Schulenberg, 2009). The reports include subgroup comparisons, and these have revealed substantial differences among race/ethnicity groups, as well as some differences…

  16. Mock Tribunal in Action: Mock International Criminal Tribunal for the Former Yugoslavia. 10th Grade Lesson. Schools of California Online Resources for Education (SCORE): Connecting California's Classrooms to the World.

    ERIC Educational Resources Information Center

    Fix, Terrance

    In this lesson, students role-play as members of the International Criminal Tribunal for the former Yugoslavia that will bring to trial "Persons Responsible for Serious Violations of International Humanitarian Law." Students represent the following groups: International Criminal Tribunal; Prosecution; Defense; Serbians; Croatians;…

  17. Kauffman Teen Survey. An Annual Report on Teen Health Behaviors: Use of Alcohol, Tobacco, and Other Drugs among 8th-, 10th-, and 12th-Grade Students in Greater Kansas City, 1991-92 to 2000-01.

    ERIC Educational Resources Information Center

    Ewing Marion Kauffman Foundation, Kansas City, MO.

    The Ewing Marion Kauffman Foundation began surveying Kansas City area teens during the 1984-85 school year. The Kauffman Teen Survey now addresses two sets of issues for teens. Teen Health Behaviors, addressed in this report, have been a focus of the survey since its inception. The report focuses on teen use of alcohol, tobacco, and other drugs in…

  18. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of software-only and GPU-accelerated implementations. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the NAND Flash 40GB parallel disk array, the Fusion-io. The Fusion system specs are as follows: SuperMicro X7
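
    The out-of-core graph benchmark above performs level-set expansion, i.e. repeated expansion of a breadth-first frontier. A minimal in-memory sketch of that kernel (the toy adjacency list is invented; the real benchmark operates out-of-core on scale-free graphs):

    ```python
    # Schematic level-set expansion: each iteration expands the current
    # breadth-first frontier into the next level set of unvisited vertices.
    def level_sets(adj, source):
        """Return the BFS level sets (frontiers) starting from `source`."""
        visited = {source}
        frontier = {source}
        levels = [frontier]
        while frontier:
            nxt = set()
            for u in frontier:
                for v in adj.get(u, ()):
                    if v not in visited:
                        visited.add(v)
                        nxt.add(v)
            if nxt:
                levels.append(nxt)
            frontier = nxt
        return levels

    # Invented toy graph with four levels reachable from vertex 0.
    adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
    levels = level_sets(adj, 0)
    ```

    On scale-free graphs the frontier grows explosively within a few levels, which is what makes the access pattern storage-bound rather than compute-bound.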

  19. The Construction and Validation of an Instrument to Assess Teachers' Opinions of Methods of Teaching Poetry to Tenth Grade Students of Average Ability.

    ERIC Educational Resources Information Center

    Gallo, Donald Robert

    This study attempted to construct an instrument--the Poetry Methods Rating Scale (PMRS)--for assessing 10th-grade teachers' opinions of poetry teaching methods and to validate it by comparing the scores on the PMRS to the teachers' attitudes, personality, performance, and success in the classroom. The PMRS (a 38-item, seven category…

  20. The Turn of the Century. Tenth Grade Lesson. Schools of California Online Resources for Education (SCORE): Connecting California's Classrooms to the World.

    ERIC Educational Resources Information Center

    Bartels, Dede

    In this 10th grade social studies and language arts interdisciplinary unit, students research and report on historical figures from the turn of the 20th century. Students are required to work in pairs to learn about famous and common individuals, including Andrew Carnegie, Samuel Gompers, Susan B. Anthony, Thomas Edison, Theodore Roosevelt, Booker…

  1. easyCBM Beginning Reading Measures: Grades K-1 Alternate Form Reliability and Criterion Validity with the SAT-10. Technical Report #1403

    ERIC Educational Resources Information Center

    Wray, Kraig; Lai, Cheng-Fei; Sáez, Leilani; Alonzo, Julie; Tindal, Gerald

    2013-01-01

    We report the results of an alternate form reliability and criterion validity study of kindergarten and grade 1 (N = 84-199) reading measures from the easyCBM© assessment system and Stanford Early School Achievement Test/Stanford Achievement Test, 10th edition (SESAT/SAT-10) across 5 time points. The alternate form reliabilities ranged from…

  2. Grade Retention: Elementary Teacher Perceptions for Students with and without Disabilities

    ERIC Educational Resources Information Center

    Renaud, Gia

    2010-01-01

    In this era of education accountability, teachers are looking closely at grade level requirements and assessment of student performance. Grade retention is being considered for both students with and without disabilities if they are not meeting end of the year achievement benchmarks. Although research has shown that retention is not the best…

  3. Administrative simplification: change to the compliance date for the International Classification of Diseases, 10th Revision (ICD-10-CM and ICD-10-PCS) medical data code sets. Final rule.

    PubMed

    2014-08-04

    This final rule implements section 212 of the Protecting Access to Medicare Act of 2014 by changing the compliance date for the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) for diagnosis coding, including the Official ICD-10-CM Guidelines for Coding and Reporting, and the International Classification of Diseases, 10th Revision, Procedure Coding System (ICD-10-PCS) for inpatient hospital procedure coding, including the Official ICD-10-PCS Guidelines for Coding and Reporting, from October 1, 2014 to October 1, 2015. It also requires the continued use of the International Classification of Diseases, 9th Revision, Clinical Modification, Volumes 1 and 2 (diagnoses), and 3 (procedures) (ICD-9-CM), including the Official ICD-9-CM Guidelines for Coding and Reporting, through September 30, 2015.

  4. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
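
    Indirect standardization, one of the risk-adjustment methods listed above, is typically reported as a standardized infection ratio (SIR): observed infections divided by the number expected if benchmark stratum-specific rates applied to the local exposure. A sketch with invented strata, device-day counts, and benchmark rates:

    ```python
    # Illustrative indirect standardization. All strata, device-day counts,
    # and benchmark rates below are invented for the example.
    def standardized_infection_ratio(observed, local_device_days, benchmark_rates):
        # expected = sum over strata of local exposure x benchmark rate,
        # where rates are expressed per 1,000 device-days
        expected = sum(local_device_days[s] * benchmark_rates[s] / 1000.0
                       for s in local_device_days)
        return observed / expected

    local_device_days = {"ICU": 4000, "ward": 10000}
    benchmark_rates = {"ICU": 2.5, "ward": 0.5}  # infections per 1,000 device-days
    sir = standardized_infection_ratio(12, local_device_days, benchmark_rates)
    ```

    An SIR below 1 means fewer infections were observed than the benchmark population would predict for the same mix of exposure; crude pooled rates can hide exactly this stratum mix.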

  5. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  6. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.

  7. Benchmarking Image Matching for Surface Description

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Stößel, Wolfgang; Gruber, Michael; Pfeifer, Norbert; Fritsch, Dieter

    2013-04-01

    Semi-Global Matching algorithms have sparked a renaissance in processing stereoscopic data sets for surface reconstruction. This method can provide very dense point clouds with sampling distances close to the Ground Sampling Distance (GSD) of aerial images. EuroSDR, the pan-European organization for Spatial Data Research, has initiated a benchmark for dense image matching. The expected outcomes of this benchmark are assessments of suitability, quality measures for dense surface reconstructions, and run-time aspects. In particular, aerial image blocks of two sites covering two types of landscapes (urban and rural) are analysed. The benchmark's participants provide their results with respect to several criteria. As a follow-up, an overall evaluation is given. Finally, point clouds of rural and urban surfaces delivered by very dense image matching algorithms and software packages are presented and results are compared.

  8. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.

  9. OpenSHMEM Implementation of HPCG Benchmark

    SciTech Connect

    Powers, Sarah S; Imam, Neena

    2016-01-01

    We describe the effort to implement the HPCG benchmark using OpenSHMEM and MPI one-sided communication. Unlike the High Performance LINPACK (HPL) benchmark that places emphasis on large dense matrix computations, the HPCG benchmark is dominated by sparse operations such as sparse matrix-vector product, sparse matrix triangular solve, and long vector operations. The MPI one-sided implementation is developed using the one-sided OpenSHMEM implementation. Preliminary results comparing the original MPI, OpenSHMEM, and MPI one-sided implementations on an SGI cluster, Cray XK7 and Cray XC30 are presented. The results suggest the MPI, OpenSHMEM, and MPI one-sided implementations all obtain similar overall performance but the MPI one-sided implementation seems to slightly increase the run time for multigrid preconditioning in HPCG on the Cray XK7 and Cray XC30.

  10. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Jin, Haoqiang

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.
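
    The update/exchange cycle described above can be caricatured with two 1-D zones: each zone advances independently, then swaps one halo cell with its neighbour. This is only a schematic sketch, with a simple smoothing step standing in for the real LU/BT/SP solvers:

    ```python
    # Schematic NPB-MZ-style cycle: independent per-zone updates followed by
    # boundary (halo) exchange. The smoothing kernel and zone data are invented.
    def smooth(zone):
        # stand-in "solver": each cell averages itself with its neighbors
        return [(zone[max(i - 1, 0)] + zone[i] + zone[min(i + 1, len(zone) - 1)]) / 3.0
                for i in range(len(zone))]

    def exchange(left, right):
        # last cell of `left` and first cell of `right` are halo copies
        left[-1] = right[1]
        right[0] = left[-2]

    left = [1.0, 1.0, 1.0, 0.0]   # final entry is the halo cell
    right = [0.0, 2.0, 2.0, 2.0]  # first entry is the halo cell
    for _ in range(3):
        left, right = smooth(left), smooth(right)  # independent zone updates
        exchange(left, right)                      # boundary value exchange
    ```

    The coarse-grain parallelism comes from the fact that the two `smooth` calls are independent and only `exchange` requires communication.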

  11. Coral benchmarks in the center of biodiversity.

    PubMed

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet, few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ ± 1.7) and generic diversity was 18 (σ ± 0.9) per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover.

  12. Benchmarking criticality safety calculations with subcritical experiments

    SciTech Connect

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments.

  13. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of mono-energetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  14. Outlier Benchmark Systems With Gaia Primaries

    NASA Astrophysics Data System (ADS)

    Marocco, Federico; Pinfield, David J.; Montes, David; Zapatero Osorio, Maria Rosa; Smart, Richard L.; Cook, Neil J.; Caballero, José A.; Jones, Hugh R. A.; Lucas, Phil W.

    2016-07-01

    Benchmark systems are critical to advancing sub-stellar physics. While the known population of benchmarks has increased significantly in recent years, large portions of the age-metallicity parameter space remain unexplored. Gaia will enormously expand the pool of well-characterized primary stars, and our simulations show that we could potentially have access to more than 6000 benchmark systems out to 300 pc, allowing us to whittle down these systems into a large sample with outlier properties that will reveal the nature of ultra-cool dwarfs in rare parameter space. In this contribution we present the preliminary results from our effort to identify and characterize ultra-cool companions to Gaia-imaged stars with unusual values of metallicity. Since these systems are intrinsically rare, we expand the volume probed by targeting faint, low-proper-motion systems.

  15. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  16. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  17. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  18. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  19. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  20. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  1. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  2. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  3. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  4. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  5. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  6. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...

  7. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.

  8. Challenges and Benchmarks in Bioimage Analysis.

    PubMed

    Kozubek, Michal

    2016-01-01

    Similar to the medical imaging community, the bioimaging community has recently realized the need to benchmark various image analysis methods to compare their performance and assess their suitability for specific applications. Challenges sponsored by prestigious conferences have proven to be an effective means of encouraging benchmarking and new algorithm development for a particular type of image data. Bioimage analysis challenges have recently complemented medical image analysis challenges, especially in the case of the International Symposium on Biomedical Imaging (ISBI). This review summarizes recent progress in this respect and describes the general process of designing a bioimage analysis benchmark or challenge, including the proper selection of datasets and evaluation metrics. It also presents examples of specific target applications and biological research tasks that have benefited from these challenges with respect to the performance of automatic image analysis methods that are crucial for the given task. Finally, available benchmarks and challenges in terms of common features, possible classification and implications drawn from the results are analysed.

  9. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…

  10. Algorithm and Architecture Independent Benchmarking with SEAK

    SciTech Connect

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  11. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  12. Seven Benchmarks for Information Technology Investment.

    ERIC Educational Resources Information Center

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  13. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  14. Resection of complex pancreatic injuries: Benchmarking postoperative complications using the Accordion classification

    PubMed Central

    Krige, Jake E; Jonas, Eduard; Thomson, Sandie R; Kotze, Urda K; Setshedi, Mashiko; Navsaria, Pradeep H; Nicol, Andrew J

    2017-01-01

    AIM To benchmark severity of complications using the Accordion Severity Grading System (ASGS) in patients undergoing operation for severe pancreatic injuries. METHODS A prospective institutional database of 461 patients with pancreatic injuries treated from 1990 to 2015 was reviewed. One hundred and thirty patients with AAST grade 3, 4 or 5 pancreatic injuries underwent resection (pancreatoduodenectomy, n = 20, distal pancreatectomy, n = 110), including 30 who had an initial damage control laparotomy (DCL) and later definitive surgery. AAST injury grades, type of pancreatic resection, need for DCL and incidence and ASGS severity of complications were assessed. Uni- and multivariate logistic regression analysis was applied. RESULTS Overall 238 complications occurred in 95 (73%) patients of which 73% were ASGS grades 3-6. Nineteen patients (14.6%) died. Patients more likely to have complications after pancreatic resection were older, had a revised trauma score (RTS) < 7.8, were shocked on admission, had grade 5 injuries of the head and neck of the pancreas with associated vascular and duodenal injuries, required a DCL, received a larger blood transfusion, had a pancreatoduodenectomy (PD) and repeat laparotomies. Applying univariate logistic regression analysis, mechanism of injury, RTS < 7.8, shock on admission, DCL, increasing AAST grade and type of pancreatic resection were significant variables for complications. Multivariate logistic regression analysis however showed that only age and type of pancreatic resection (PD) were significant. CONCLUSION This ASGS-based study benchmarked postoperative morbidity after pancreatic resection for trauma. The detailed outcome analysis provided may serve as a reference for future institutional comparisons.

  15. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic ROC curve sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks.
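    The kind of threshold evaluation described above (an ATP cutoff of 100 relative light units screened against microbial growth of 2.5 cfu/cm²) can be sketched as follows; the helper function and data points are illustrative only, not taken from the paper:

    ```python
    # Sketch: evaluating a candidate ATP benchmark against microbiological
    # results, in the spirit of the study's ROC analysis. All numbers below
    # are illustrative, not the study's data.

    def sensitivity_specificity(atp_values, cfu_values, atp_cutoff=100.0, cfu_cutoff=2.5):
        """Treat microbial growth >= cfu_cutoff (cfu/cm^2) as the 'positive'
        condition and ATP >= atp_cutoff (relative light units) as a positive test."""
        tp = fp = tn = fn = 0
        for atp, cfu in zip(atp_values, cfu_values):
            positive_condition = cfu >= cfu_cutoff
            positive_test = atp >= atp_cutoff
            if positive_condition and positive_test:
                tp += 1
            elif positive_condition:
                fn += 1
            elif positive_test:
                fp += 1
            else:
                tn += 1
        sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
        specificity = tn / (tn + fp) if (tn + fp) else float("nan")
        return sensitivity, specificity

    atp = [40, 250, 90, 500, 120, 60, 30, 310]   # relative light units
    cfu = [1.0, 4.0, 3.0, 6.0, 2.0, 0.5, 1.5, 5.0]  # cfu/cm^2
    sens, spec = sensitivity_specificity(atp, cfu)
    ```

    For these toy data both sensitivity and specificity come out to 0.75; sweeping `atp_cutoff` over a range of values and recording the pair at each cutoff is what traces out the ROC curve.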

  16. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  17. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  18. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  19. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  20. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  1. VENUS-2 MOX Core Benchmark: Results of ORNL Calculations Using HELIOS-1.4 - Revised Report

    SciTech Connect

    Ellis, RJ

    2001-06-01

    The Task Force on Reactor-Based Plutonium Disposition (TFRPD) was formed by the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) to study reactor physics, fuel performance, and fuel cycle issues related to the disposition of weapons-grade (WG) plutonium as mixed-oxide (MOX) reactor fuel. To advance the goals of the TFRPD, 10 countries and 12 institutions participated in a major TFRPD activity: a blind benchmark study to compare code calculations to experimental data for the VENUS-2 MOX core at SCK-CEN in Mol, Belgium. At Oak Ridge National Laboratory, the HELIOS-1.4 code system was used to perform the comprehensive study of pin-cell and MOX core calculations for the VENUS-2 MOX core benchmark study.

  2. EDITORIAL: Selected papers from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010) Selected papers from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010)

    NASA Astrophysics Data System (ADS)

    Reynaerts, Dominiek; Vullers, Ruud

    2011-10-01

    This special section of Journal of Micromechanics and Microengineering features papers selected from the 10th International Workshop on Micro and Nanotechnology for Power Generation and Energy Conversion Applications (PowerMEMS 2010). The workshop was organized in Leuven, Belgium from 30 November to 3 December 2010 by Katholieke Universiteit Leuven and the imec/Holst Centre. This was a special PowerMEMS Workshop, for several reasons. First of all, we celebrated the 10th anniversary of the workshop: the first PowerMEMS meeting was organized in Sendai, Japan in 2000. None of the organizers or participants of this first meeting could have predicted the impact of the workshop over the next decade. The second reason was that, for the first time, the conference organization spanned two countries: Belgium and the Netherlands. Thanks to the advances in information technology, teams from Katholieke Universiteit Leuven (Belgium) and the imec/Holst Centre in Eindhoven (the Netherlands) have been able to work together seamlessly as one team. The objective of the PowerMEMS Workshop is to stimulate innovation in micro and nanotechnology for power generation and energy conversion applications. Its scope ranges from integrated microelectromechanical systems (MEMS) for power generation, dissipation, harvesting, and management, to novel nanostructures and materials for energy-related applications. True to the objective of the PowerMEMS Workshop, the 2010 technical program covered a broad range of energy related research, ranging from the nanometer to the millimeter scale, discussed in 5 invited and 52 oral presentations, and 112 posters. This special section includes 14 papers covering vibration energy harvesters, thermal applications and micro power systems. Finally, we wish to express sincere appreciation to the members of the International Steering Committee, the Technical Program Committee and last but not least the Local Organizing Committee. This special issue was edited in

  3. Introduction to the special issue on the joint meeting of the 19th IEEE International Symposium on the Applications of Ferroelectrics and the 10th European Conference on the Applications of Polar Dielectrics.

    PubMed

    Tsurumi, Takaaki

    2011-09-01

    The joint meeting of the 19th IEEE International Symposium on the Applications of Ferroelectrics and the 10th European Conference on the Applications of Polar Dielectrics took place in Edinburgh from August 9-12, 2010. The conference was attended by 390 delegates from more than 40 different countries. There were 4 plenary speakers, 56 invited speakers, and a further 222 contributed oral presentations in 7 parallel sessions. In addition there were 215 poster presentations. Key topics addressed at the conference included piezoelectric materials, lead-free piezoelectrics, and multiferroics.

  4. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  5. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  6. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  7. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  8. Asterisk Grade Study Report.

    ERIC Educational Resources Information Center

    Kokorsky, Eileen A.

    A study was conducted at Passaic County Community College (PCCC) to investigate the operation of a grading system which utilized an asterisk (*) grade to indicate progress in a course until a letter grade was assigned. The study sought to determine the persistence of students receiving the "*" grade, the incidence of cases of students receiving…

  9. Social Studies, Grade 10, World Studies: Western Civilization--History and Culture. Course of Study and Related Learning Activities, Preliminary Materials. Curriculum Bulletin, 1968-69 Series, No. 5.

    ERIC Educational Resources Information Center

    New York City Board of Education, Brooklyn, NY. Bureau of Curriculum Development.

    This 10th-grade curriculum guide provides an in-depth study of the revolutions which accompanied the rise of modern Europe and shaped European cultural patterns. Eight "themes" with content outlines are developed: (1) the emergence of modern Europe from the Renaissance to the commercial and scientific revolutions, (2) the growth of democracy, (3)…

  10. YOUNG WOMEN IN VIRGINIA, A 10-YEAR FOLLOW-UP STUDY OF GIRLS ENROLLED IN 1954-55 IN THE TENTH GRADE IN VIRGINIA HIGH SCHOOLS. A RESEARCH CONTRIBUTION TO EDUCATIONAL PLANNING, VOL. 49, NO. 1.

    ERIC Educational Resources Information Center

    JORDAN, BETH C.; LOVING, ROSA H.

    THE PURPOSES OF THIS STUDY WERE TO DETERMINE THE NEEDS FOR STRENGTHENING THE HOMEMAKING PROGRAM AND FOR PLANNING PROGRAMS TO PREPARE YOUNG WOMEN FOR OCCUPATIONS USING HOME ECONOMICS SKILLS AND KNOWLEDGE. HOME ECONOMICS TEACHERS COMPLETED DATA SHEETS FOR 2,679 OF THE 20,000 10TH GRADE GIRLS IN VIRGINIA SCHOOLS IN 1954-55. QUESTIONNAIRES WERE SENT…

  11. ASBench: benchmarking sets for allosteric discovery.

    PubMed

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  12. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies.

  13. A new tool for benchmarking cardiovascular fluoroscopes.

    PubMed

    Balter, S; Heupler, F A; Lin, P J; Wondrow, M H

    2001-01-01

    This article reports the status of a new cardiovascular fluoroscopy benchmarking phantom. A joint working group of the Society for Cardiac Angiography and Interventions (SCA&I) and the National Electrical Manufacturers Association (NEMA) developed the phantom. The device was adopted as NEMA standard XR 21-2000, "Characteristics of and Test Procedures for a Phantom to Benchmark Cardiac Fluoroscopic and Photographic Performance," in August 2000. The test ensemble includes imaging field geometry, spatial resolution, low-contrast iodine detectability, working thickness range, visibility of moving targets, and phantom entrance dose. The phantom tests systems under conditions simulating normal clinical use for fluoroscopically guided invasive and interventional procedures. Test procedures rely on trained human observers.

  14. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  15. Parton distribution benchmarking with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Carrazza, Stefano; Del Debbio, Luigi; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C.-P.

    2013-04-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the χ2 for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+αs uncertainty in the gluon fusion Higgs production cross section.
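    The χ2 figure of merit used above to quantify data-theory agreement when the experimental covariance matrix is available has a standard closed form, χ2 = (d − t)ᵀ C⁻¹ (d − t). A minimal sketch, with illustrative numbers rather than real PDF or LHC data:

    ```python
    # Sketch: chi-squared with a full experimental covariance matrix.
    # The data, theory, and covariance values below are illustrative.
    import numpy as np

    def chi2(data, theory, covariance):
        """chi2 = r^T C^{-1} r for residual r = data - theory.
        Solving C x = r avoids forming the explicit inverse."""
        residual = np.asarray(data, float) - np.asarray(theory, float)
        return float(residual @ np.linalg.solve(np.asarray(covariance, float), residual))

    d = [10.2, 8.1, 5.0]   # "measured" values
    t = [10.0, 8.0, 5.3]   # "predicted" values
    # Diagonal covariance: uncorrelated uncertainties of 0.1, 0.2, 0.3.
    C = np.diag([0.1**2, 0.2**2, 0.3**2])
    print(round(chi2(d, t, C), 2))  # → 5.25
    ```

    Dividing by the number of data points gives the χ2 per point (here 1.75), the usual headline number when comparing PDF sets against a data set; correlated uncertainties simply populate the off-diagonal entries of `C`.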

  16. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for neutronics and T-H coupled simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  17. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of existing and newly developed sensitivity analysis methods.
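    A sensitivity coefficient of the kind compared in such benchmarks is the relative change in keff per relative change in a cross section, S = (dk/k)/(dσ/σ). The toy one-group model and finite-difference estimator below are illustrative only, not any of the codes named above:

    ```python
    # Sketch: a keff sensitivity coefficient by direct perturbation of a
    # toy one-group infinite-medium model (hypothetical, for illustration).

    def k_inf(nu_sigma_f, sigma_a):
        """One-group infinite-medium multiplication factor k = nu*Sigma_f / Sigma_a."""
        return nu_sigma_f / sigma_a

    def sensitivity(k_func, params, name, rel_step=1e-6):
        """Central-difference estimate of S = (dk/k) / (dp/p) for parameter `name`."""
        p0 = params[name]
        up = dict(params, **{name: p0 * (1 + rel_step)})
        dn = dict(params, **{name: p0 * (1 - rel_step)})
        k0 = k_func(**params)
        dk = k_func(**up) - k_func(**dn)
        return (dk / k0) / (2 * rel_step)

    params = {"nu_sigma_f": 0.15, "sigma_a": 0.12}
    s_f = sensitivity(k_inf, params, "nu_sigma_f")  # ~ +1 for this model
    s_a = sensitivity(k_inf, params, "sigma_a")     # ~ -1 for this model
    ```

    For this simple model the sensitivities are exactly +1 and −1 (k is linear in one parameter and inversely proportional to the other); real tools such as TSUNAMI compute the analogous coefficients by adjoint-based perturbation theory rather than brute-force reruns.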

  18. Assessing and benchmarking multiphoton microscopes for biologists

    PubMed Central

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  19. Assessing and benchmarking multiphoton microscopes for biologists.

    PubMed

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to its superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  20. BN-600 full MOX core benchmark analysis.

    SciTech Connect

    Kim, Y. I.; Hill, R. N.; Grimm, K.; Rimpault, G.; Newton, T.; Li, Z. H.; Rineiski, A.; Mohanakrishan, P.; Ishikawa, M.; Lee, K. B.; Danilytchev, A.; Stogov, V.; Nuclear Engineering Division; International Atomic Energy Agency; CEA SERCO Assurance; China Inst. of Atomic Energy; Forschungszentrum Karlsruhe; Indira Gandhi Centre for Atomic Research; Japan Nuclear Cycle Development Inst.; Korea Atomic Energy Research Inst.; Inst. of Physics and Power Engineering

    2004-01-01

    As a follow-up to the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of the main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises from uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core configuration of interest (hybrid core vs. fully MOX-fuelled core with a sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally, the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. Heterogeneity does not have any significant effect on the simulation of the transients. The comparison of the transient analysis results showed that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients for understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics, because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  1. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with the friction force formulas are presented.

  2. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements was performed at CELSIUS with the goal of providing the accurate data needed for benchmarking theories and simulations. Some results of a detailed comparison of experimental data with the friction force formulas are presented.

  3. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  4. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    calculated. These give a rough measure of the texture of each ROI. A gray-level co-occurrence matrix (GLCM) contains information about the spatial...sum and difference histograms. The descriptors chosen as features for this benchmark are GLCM entropy and GLCM energy, and are defined in terms of...stressmark, the relationships of pairs of pixels within a randomly generated image are measured. These features quantify the texture of the image
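    The GLCM entropy and energy descriptors named in this snippet can be sketched as follows. This is a hedged illustration, not code from the DTIC benchmark: the test image, the single-pixel horizontal offset, and the four gray levels are all assumptions made for the example.

    ```python
    import numpy as np

    def glcm(image, dx=1, dy=0, levels=4):
        """Build a normalized gray-level co-occurrence matrix for one pixel offset."""
        m = np.zeros((levels, levels), dtype=float)
        h, w = image.shape
        for y in range(h - dy):
            for x in range(w - dx):
                m[image[y, x], image[y + dy, x + dx]] += 1
        return m / m.sum()

    def glcm_energy(p):
        # Energy (angular second moment): sum of squared co-occurrence probabilities.
        return float((p ** 2).sum())

    def glcm_entropy(p):
        # Entropy: -sum p*log2(p) over nonzero entries.
        nz = p[p > 0]
        return float(-(nz * np.log2(nz)).sum())

    # Illustrative 4x4 image with 4 gray levels (0-3).
    img = np.array([[0, 0, 1, 1],
                    [0, 0, 1, 1],
                    [2, 2, 3, 3],
                    [2, 2, 3, 3]])
    p = glcm(img)
    print(glcm_energy(p), glcm_entropy(p))
    ```

    A more uniform texture concentrates probability in few GLCM cells, raising energy and lowering entropy; a noisy ROI does the opposite.
    
    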

  5. Measurement Analysis When Benchmarking Java Card Platforms

    NASA Astrophysics Data System (ADS)

    Paradinas, Pierre; Cordry, Julien; Bouzefrane, Samia

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behaviour of these platforms is becoming crucial. To meet this need, we present in this paper a benchmark framework that enables performance evaluation at the bytecode level. This paper focuses on the validity of our time measurements on smart cards.

  6. Optimal Quantum Control Using Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Kelly, J.; Barends, R.; Campbell, B.; Chen, Y.; Chen, Z.; Chiaro, B.; Dunsworth, A.; Fowler, A. G.; Hoi, I.-C.; Jeffrey, E.; Megrant, A.; Mutus, J.; Neill, C.; O'Malley, P. J. J.; Quintana, C.; Roushan, P.; Sank, D.; Vainsencher, A.; Wenner, J.; White, T. C.; Cleland, A. N.; Martinis, John M.

    2014-06-01

    We present a method for optimizing quantum control in experimental systems, using a subset of randomized benchmarking measurements to rapidly infer error. This is demonstrated to improve single- and two-qubit gates, minimize gate bleedthrough, where a gate mechanism can cause errors on subsequent gates, and identify control crosstalk in superconducting qubits. This method is able to correct parameters so that control errors no longer dominate and is suitable for automated and closed-loop optimization of experimental systems.

  7. A Simplified HTTR Diffusion Theory Benchmark

    SciTech Connect

    Rodolfo M. Ferrer; Abderrafi M. Ougouag; Farzad Rahnema

    2010-10-01

    The Georgia Institute of Technology (GA-Tech) recently developed a transport theory benchmark based closely on the geometry and the features of the HTTR reactor that is operational in Japan. Though simplified, the benchmark retains all the principal physical features of the reactor and thus provides a realistic and challenging test for the codes. The purpose of this paper is twofold. The first goal is an extension of the benchmark to diffusion theory applications by generating the additional data not provided in the GA-Tech prior work. The second goal is to use the benchmark on the HEXPEDITE code available to the INL. The HEXPEDITE code is a Green’s function-based neutron diffusion code in 3D hexagonal-z geometry. The results showed that the HEXPEDITE code accurately reproduces the effective multiplication factor of the reference HELIOS solution. A secondary, but no less important, conclusion is that in the testing against actual HTTR data of a full sequence of codes that would include HEXPEDITE, in the apportioning of inevitable discrepancies between experiment and models, the portion of error attributable to HEXPEDITE would be expected to be modest. If large discrepancies are observed, they would have to be explained by errors in the data fed into HEXPEDITE. Results based on a fully realistic model of the HTTR reactor are presented in a companion paper. The suite of codes used in that paper also includes HEXPEDITE. The results shown here should help that effort in the decision making process for refining the modeling steps in the full sequence of codes.

  8. 42 CFR 440.385 - Delivery of benchmark and benchmark-equivalent coverage through managed care entities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 (2010-10-01) Delivery of benchmark and benchmark-equivalent coverage through managed care entities. 440.385 Section 440.385 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL ASSISTANCE PROGRAMS SERVICES: GENERAL PROVISIONS Benchmark Benefit...

  9. Introduction to the HPC Challenge Benchmark Suite

    SciTech Connect

    Luszczek, Piotr; Dongarra, Jack J.; Koester, David; Rabenseifner,Rolf; Lucas, Bob; Kepner, Jeremy; McCalpin, John; Bailey, David; Takahashi, Daisuke

    2005-04-25

    The HPC Challenge benchmark suite has been released by the DARPA HPCS program to help define the performance boundaries of future Petascale computing systems. HPC Challenge is a suite of tests that examine the performance of HPC architectures using kernels with memory access patterns more challenging than those of the High Performance Linpack (HPL) benchmark used in the Top500 list. Thus, the suite is designed to augment the Top500 list, providing benchmarks that bound the performance of many real applications as a function of memory access characteristics, e.g., spatial and temporal locality, and providing a framework for including additional tests. In particular, the suite is composed of several well-known computational kernels (STREAM, HPL, matrix multiply--DGEMM, parallel matrix transpose--PTRANS, FFT, RandomAccess, and bandwidth/latency tests--b_eff) that attempt to span high and low spatial and temporal locality space. By design, the HPC Challenge tests are scalable with the size of data sets being a function of the largest HPL matrix for the tested system.
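    The flavor of the STREAM kernel mentioned above can be sketched in a few lines. This is not the official STREAM or HPC Challenge code (those are C/Fortran and MPI-based); it is a hedged NumPy illustration of the triad kernel and the bandwidth accounting, with the array size and repeat count chosen arbitrarily.

    ```python
    import time
    import numpy as np

    def stream_triad(n=10_000_000, q=3.0, repeats=5):
        """STREAM-style triad a = b + q*c; returns best-of-N bandwidth in GB/s."""
        b = np.random.rand(n)
        c = np.random.rand(n)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            a = b + q * c                      # the triad kernel itself
            best = min(best, time.perf_counter() - t0)
        bytes_moved = 3 * n * 8                # read b, read c, write a (8-byte doubles)
        return bytes_moved / best / 1e9

    print(f"triad bandwidth ~ {stream_triad(1_000_000):.2f} GB/s")
    ```

    Triad has high spatial but no temporal locality, which is exactly the axis along which the HPCC kernels are arranged.
    
    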

  10. Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...

    EPA Pesticide Factsheets

    This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms.  This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment.  This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
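    The EqP toxic-unit arithmetic described above can be sketched as follows. All chemical concentrations, partition coefficients, and effects concentrations below are invented for illustration; they are not values from the ESB document, and the unit conventions are an assumption of the sketch.

    ```python
    def esb_toxic_units(c_sed, f_oc, k_oc, effect_conc):
        """EqP sketch: c_sed in ug/kg dry sediment, f_oc the organic-carbon
        fraction, k_oc in L/kg OC, effect_conc in ug/L.
        Interstitial-water concentration: C_iw = c_sed / (f_oc * k_oc)."""
        c_iw = c_sed / (f_oc * k_oc)
        return c_iw / effect_conc

    # Illustrative two-chemical mixture: a summed toxic unit > 1 would
    # suggest likely adverse effects on benthic organisms.
    chems = [  # (c_sed, k_oc, effect_conc) -- made-up numbers
        (500.0, 1.0e5, 2.0),
        (120.0, 3.0e4, 5.0),
    ]
    f_oc = 0.02
    total_tu = sum(esb_toxic_units(c, f_oc, k, e) for c, k, e in chems)
    print(total_tu)
    ```

    The division by `f_oc * k_oc` is what makes the benchmark account for varying bioavailability across sediments with different organic-carbon content.
    
    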

  11. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-03-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to reproduce behaviour from established and widely used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  12. A PWR Thorium Pin Cell Burnup Benchmark

    SciTech Connect

    Weaver, Kevan Dean; Zhao, X.; Pilat, E. E; Hejzlar, P.

    2000-05-01

    As part of work to evaluate the potential benefits of using thorium in LWR fuel, a thorium-fueled benchmark comparison was made in this study between state-of-the-art codes, MOCUP (MCNP4B + ORIGEN2), and CASMO-4 for burnup calculations. The MOCUP runs were done individually at MIT and INEEL, using the same model but with some differences in techniques and cross section libraries. Eigenvalues and isotope concentrations were compared on a PWR pin cell model up to high burnup. The eigenvalue comparison as a function of burnup is good: the maximum difference is within 2%, and the average absolute difference is less than 1%. The isotope concentration comparisons are better than a set of MOX fuel benchmarks and comparable to a set of uranium fuel benchmarks reported in the literature. The actinide and fission product data sources used in the MOCUP burnup calculations for a typical thorium fuel are documented. Reasons for code-to-code differences are analyzed and discussed.
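    The kind of code-to-code eigenvalue comparison summarized above can be sketched as follows. The k-effective values below are invented for illustration; they are not MOCUP or CASMO-4 results, only numbers chosen to sit within the difference bounds the abstract quotes.

    ```python
    import numpy as np

    # Hypothetical k-eff vs. burnup from two codes (made-up values).
    burnup = np.array([0, 10, 20, 30, 40, 50, 60])   # MWd/kg, for context
    k_code_a = np.array([1.250, 1.180, 1.115, 1.060, 1.010, 0.968, 0.930])
    k_code_b = np.array([1.245, 1.176, 1.118, 1.065, 1.018, 0.975, 0.941])

    # Percent difference at each burnup step, code A relative to code B.
    pct_diff = 100.0 * np.abs(k_code_a - k_code_b) / k_code_b
    print(f"max |dk/k| = {pct_diff.max():.2f}%, mean = {pct_diff.mean():.2f}%")
    ```

    Reporting both the maximum and the mean absolute difference, as the abstract does, guards against a single late-burnup outlier dominating the comparison.
    
    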

  13. SPICE benchmark for global tomographic methods

    NASA Astrophysics Data System (ADS)

    Qin, Yilong; Capdeville, Yann; Maupin, Valerie; Montagner, Jean-Paul; Lebedev, Sergei; Beucler, Eric

    2008-11-01

    The existing global tomographic methods result in different models due to different parametrization, scale resolution and theoretical approach. To test how current imaging techniques are limited by approximations in theory and by the inadequacy of data quality and coverage, it is necessary to perform a global-scale benchmark to understand the resolving properties of each specific imaging algorithm. In the framework of the Seismic wave Propagation and Imaging in Complex media: a European network (SPICE) project, it was decided to perform a benchmark experiment of global inversion algorithms. First, a preliminary benchmark with a simple isotropic model is carried out to check the feasibility in terms of acquisition geometry and numerical accuracy. Then, to fully validate tomographic schemes with a challenging synthetic data set, we constructed one complex anisotropic global model, which is characterized by 21 elastic constants and includes 3-D heterogeneities in velocity, anisotropy (radial and azimuthal anisotropy), attenuation, density, as well as surface topography and bathymetry. The intermediate-period (>32 s), high fidelity anisotropic modelling was performed by using state-of-the-art anisotropic anelastic modelling code, that is, coupled spectral element method (CSEM), on modern massively parallel computing resources. The benchmark data set consists of 29 events and three-component seismograms are recorded by 256 stations. Because of the limitation of the available computing power, synthetic seismograms have a minimum period of 32 s and a length of 10 500 s. 
The inversion of the benchmark data set demonstrates several well-known problems of classical surface wave tomography, such as the importance of crustal correction to recover the shallow structures, the loss of resolution with depth, the smearing effect, both horizontal and vertical, the inaccuracy of amplitude of isotropic S-wave velocity variation, the difficulty of retrieving the magnitude of azimuthal

  14. Protein-Protein Docking Benchmark Version 3.0

    PubMed Central

    Hwang, Howook; Pierce, Brian; Mintseris, Julian; Janin, Joël; Weng, Zhiping

    2009-01-01

    We present version 3.0 of our publicly available protein-protein docking benchmark. This update includes 40 new test cases, representing a 48% increase from Benchmark 2.0. For all of the new cases, the crystal structures of both binding partners are available. As with Benchmark 2.0, SCOP (Structural Classification of Proteins) was used to remove redundant test cases. The 124 unbound-unbound test cases in Benchmark 3.0 are classified into 88 rigid-body cases, 19 medium difficulty cases, and 17 difficult cases, based on the degree of conformational change at the interface upon complex formation. In addition to providing the community with more test cases for evaluating docking methods, the expansion of Benchmark 3.0 will facilitate the development of new algorithms that require a large number of training examples. Benchmark 3.0 is available to the public at http://zlab.bu.edu/benchmark. PMID:18491384

  15. Relationship between the TCAP and the Pearson Benchmark Assessment in Elementary Students' Reading and Math Performance in a Northeastern Tennessee School District

    ERIC Educational Resources Information Center

    Dugger-Roberts, Cherith A.

    2014-01-01

    The purpose of this quantitative study was to determine if there was a relationship between the TCAP test and Pearson Benchmark assessment in elementary students' reading and language arts and math performance in a northeastern Tennessee school district. This study involved 3rd, 4th, 5th, and 6th grade students. The study focused on the following…

  16. Benchmarking Global Food Safety Performances: The Era of Risk Intelligence.

    PubMed

    Valleé, Jean-Charles Le; Charlebois, Sylvain

    2015-10-01

    Food safety data segmentation and limitations hamper the world's ability to select, build up, monitor, and evaluate food safety performance. Currently, there is no metric that captures the entire food safety system, and performance data are not collected strategically on a global scale. Therefore, food safety benchmarking is essential not only to help monitor ongoing performance but also to inform continued food safety system design, adoption, and implementation toward more efficient and effective food safety preparedness, responsiveness, and accountability. This comparative study identifies and evaluates common elements among global food safety systems. It provides an overall world ranking of food safety performance for 17 Organisation for Economic Co-Operation and Development (OECD) countries, illustrated by 10 indicators organized across three food safety risk governance domains: risk assessment (chemical risks, microbial risks, and national reporting on food consumption), risk management (national food safety capacities, food recalls, food traceability, and radionuclides standards), and risk communication (allergenic risks, labeling, and public trust). Results show all countries have very high food safety standards, but Canada and Ireland, followed by France, earned excellent grades relative to their peers. However, any subsequent global ranking study should consider the development of survey instruments to gather adequate and comparable national evidence on food safety.
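    The domain-weighted ranking described above can be sketched in a few lines. Everything below is invented for illustration: the countries shown, their indicator scores, and the equal domain weights are assumptions of the sketch, not data from the study.

    ```python
    # Hypothetical per-domain scores (0-10) for three of the OECD countries
    # named in the abstract: (risk assessment, risk management, risk communication).
    indicators = {
        "Canada":  (9.1, 8.8, 8.9),
        "Ireland": (9.0, 8.9, 8.8),
        "France":  (8.8, 8.7, 8.6),
    }
    weights = (1/3, 1/3, 1/3)   # equal weighting of the three governance domains

    # Composite score per country, then a descending ranking.
    scores = {c: sum(w * v for w, v in zip(weights, vals))
              for c, vals in indicators.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(ranking)
    ```

    Any real implementation would need the survey instruments the authors call for, since the ranking is only as comparable as the national evidence feeding each indicator.
    
    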

  17. National Performance Benchmarks for Modern Screening Digital Mammography: Update from the Breast Cancer Surveillance Consortium.

    PubMed

    Lehman, Constance D; Arao, Robert F; Sprague, Brian L; Lee, Janie M; Buist, Diana S M; Kerlikowske, Karla; Henderson, Louise M; Onega, Tracy; Tosteson, Anna N A; Rauscher, Garth H; Miglioretti, Diana L

    2016-12-05

    Purpose To establish performance benchmarks for modern screening digital mammography and assess performance trends over time in U.S. community practice. Materials and Methods This HIPAA-compliant, institutional review board-approved study measured the performance of digital screening mammography interpreted by 359 radiologists across 95 facilities in six Breast Cancer Surveillance Consortium (BCSC) registries. The study included 1 682 504 digital screening mammograms performed between 2007 and 2013 in 792 808 women. Performance measures were calculated according to the American College of Radiology Breast Imaging Reporting and Data System, 5th edition, and were compared with published benchmarks by the BCSC, the National Mammography Database, and performance recommendations by expert opinion. Benchmarks were derived from the distribution of performance metrics across radiologists and were presented as 50th (median), 10th, 25th, 75th, and 90th percentiles, with graphic presentations using smoothed curves. Results Mean screening performance measures were as follows: abnormal interpretation rate (AIR), 11.6 (95% confidence interval [CI]: 11.5, 11.6); cancers detected per 1000 screens, or cancer detection rate (CDR), 5.1 (95% CI: 5.0, 5.2); sensitivity, 86.9% (95% CI: 86.3%, 87.6%); specificity, 88.9% (95% CI: 88.8%, 88.9%); false-negative rate per 1000 screens, 0.8 (95% CI: 0.7, 0.8); positive predictive value 1 (PPV1), 4.4% (95% CI: 4.3%, 4.5%); PPV2, 25.6% (95% CI: 25.1%, 26.1%); PPV3, 28.6% (95% CI: 28.0%, 29.3%); cancers stage 0 or 1, 76.9%; minimal cancers, 57.7%; and node-negative invasive cancers, 79.4%. Recommended CDRs were achieved by 92.1% of radiologists in community practice, and 97.1% achieved recommended ranges for sensitivity. Only 59.0% of radiologists achieved recommended AIRs, and only 63.0% achieved recommended levels of specificity. Conclusion The majority of radiologists in the BCSC surpass cancer detection recommendations for screening
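    The audit metrics listed above follow standard definitions and can be computed from a radiologist's confusion-matrix counts. The counts below are invented for illustration (chosen so the results land near the published benchmark values); they are not BCSC data.

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Screening-audit metrics from one reader's counts (standard definitions):
        tp = screen-detected cancers, fp = abnormal screens without cancer,
        fn = missed cancers, tn = normal screens without cancer."""
        n = tp + fp + fn + tn
        return {
            "AIR_pct": 100.0 * (tp + fp) / n,      # abnormal interpretation rate
            "CDR_per_1000": 1000.0 * tp / n,       # cancer detection rate
            "sensitivity_pct": 100.0 * tp / (tp + fn),
            "specificity_pct": 100.0 * tn / (tn + fp),
            "PPV1_pct": 100.0 * tp / (tp + fp),    # PPV of an abnormal screen
        }

    # Hypothetical reader: 10 000 screens.
    m = screening_metrics(tp=51, fp=1100, fn=8, tn=8841)
    print(m)
    ```

    Percentile benchmarks like those in the abstract are then just the distribution of these per-reader metrics across all 359 radiologists.
    
    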

  18. Algebra for All: California’s Eighth-Grade Algebra Initiative as Constrained Curricula

    PubMed Central

    Domina, Thurston; Penner, Andrew M.; Penner, Emily K.; Conley, Annemarie

    2015-01-01

    Background/Context Across the United States, secondary school curricula are intensifying as a growing proportion of students enroll in high-level academic math courses. In many districts, this intensification process occurs as early as eighth grade, where schools are effectively constraining their mathematics curricula by restricting course offerings and placing more students into Algebra I. This paper provides a quantitative single-case research study of policy-driven curricular intensification in one California school district. Research Questions (1) What effect did eighth-grade curricular intensification have on mathematics course enrollment patterns in Towering Pines Unified schools? (2) How did the distribution of prior achievement in Towering Pines math classrooms change as the district constrained the curriculum by universalizing eighth-grade Algebra? (3) Did eighth-grade curricular intensification improve students' mathematics achievement? Setting Towering Pines is an immigrant enclave in the inner-ring suburbs of a major metropolitan area. The district's 10 middle schools together enroll approximately 4,000 eighth graders each year. The district's students are ethnically diverse and largely economically disadvantaged. The study draws upon administrative data describing eighth graders in the district in the 2004–05 through 2007–08 school years. Intervention/Program/Practice During the study period, Towering Pines dramatically intensified middle school students' math curricula: In the 2004–05 school year, 32% of the district's eighth graders enrolled in Algebra or a higher-level mathematics course; by the 2007–08 school year that proportion had increased to 84%. Research Design We use an interrupted time-series design, comparing students' eighth-grade math course enrollments, 10th-grade math course enrollments, and 10th-grade math test scores across the four cohorts, controlling for demographics and

  19. Grading for Understanding--Standards-Based Grading

    ERIC Educational Resources Information Center

    Zimmerman, Todd

    2017-01-01

    Standards-based grading (SBG), sometimes called learning objectives-based assessment (LOBA), is an assessment model that relies on students demonstrating mastery of learning objectives (sometimes referred to as standards). The goal of this grading system is to focus students on mastering learning objectives rather than on accumulating points. I…

  20. A Uranium Bioremediation Reactive Transport Benchmark

    SciTech Connect

    Yabusaki, Steven B.; Sengor, Sevinc; Fang, Yilin

    2015-06-01

    A reactive transport benchmark problem set has been developed based on in situ uranium bio-immobilization experiments that have been performed at a former uranium mill tailings site in Rifle, Colorado, USA. Acetate-amended groundwater stimulates indigenous microorganisms to catalyze the reduction of U(VI) to a sparingly soluble U(IV) mineral. The interplay between the flow, acetate loading periods and rates, microbially-mediated and geochemical reactions leads to dynamic behavior in metal- and sulfate-reducing bacteria, pH, alkalinity, and reactive mineral surfaces. The benchmark is based on an 8.5 m long one-dimensional model domain with constant saturated flow and uniform porosity. The 159-day simulation introduces acetate and bromide through the upgradient boundary in 14-day and 85-day pulses separated by a 10 day interruption. Acetate loading is tripled during the second pulse, which is followed by a 50 day recovery period. Terminal electron accepting processes for goethite, phyllosilicate Fe(III), U(VI), and sulfate are modeled using Monod-type rate laws. Major ion geochemistry modeled includes mineral reactions, as well as aqueous and surface complexation reactions for UO2++, Fe++, and H+. In addition to the dynamics imparted by the transport of the acetate pulses, U(VI) behavior involves the interplay between bioreduction, which is dependent on acetate availability, and speciation-controlled surface complexation, which is dependent on pH, alkalinity and available surface complexation sites. The general difficulty of this benchmark is the large number of reactions (74), multiple rate law formulations, a multisite uranium surface complexation model, and the strong interdependency and sensitivity of the reaction processes. Results are presented for three simulators: HYDROGEOCHEM, PHT3D, and PHREEQC.
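    The Monod-type rate laws mentioned above have a simple form that can be sketched directly. This is a hedged illustration only: the dual-Monod formulation is a common choice for acetate-driven bioreduction, and the rate constant, half-saturation constants, and concentrations below are invented, not the Rifle benchmark's parameters.

    ```python
    def monod_rate(v_max, conc_donor, k_donor, conc_acceptor, k_acceptor):
        """Dual-Monod rate for a terminal electron accepting process:
        rate = v_max * [D]/(K_D + [D]) * [A]/(K_A + [A]),
        with D the electron donor (acetate) and A the acceptor (e.g. U(VI))."""
        return (v_max
                * conc_donor / (k_donor + conc_donor)
                * conc_acceptor / (k_acceptor + conc_acceptor))

    # Illustration: the rate falls off as the acetate pulse is consumed.
    for acetate_mM in (3.0, 1.0, 0.1):
        r = monod_rate(1.0e-5, acetate_mM, 0.4, 0.05, 0.02)
        print(acetate_mM, r)
    ```

    This donor dependence is why the abstract's U(VI) behavior tracks acetate availability: the same rate law saturates at high acetate and collapses during the pulse interruptions.
    
    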

  1. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research. 
Making the results from routine

  2. NAS Parallel Benchmarks. 2.4

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We describe a new problem size, called Class D, for the NAS Parallel Benchmarks (NPB), whose MPI source code implementation is being released as NPB 2.4. A brief rationale is given for how the new class is derived. We also describe the modifications made to the MPI (Message Passing Interface) implementation to allow the new class to be run on systems with 32-bit integers, and with moderate amounts of memory. Finally, we give the verification values for the new problem size.

  3. Benchmarks of Global Clean Energy Manufacturing

    SciTech Connect

    Sandor, Debra; Chung, Donald; Keyser, David; Mann, Margaret; Engel-Cox, Jill

    2017-01-01

    The Clean Energy Manufacturing Analysis Center (CEMAC), sponsored by the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), provides objective analysis and up-to-date data on global supply chains and manufacturing of clean energy technologies. Benchmarks of Global Clean Energy Manufacturing sheds light on several fundamental questions about the global clean technology manufacturing enterprise: How does clean energy technology manufacturing impact national economies? What are the economic opportunities across the manufacturing supply chain? What are the global dynamics of clean energy technology manufacturing?

  4. Benchmarking boiler tube failures - Part 1

    SciTech Connect

    Patrick, J.; Oldani, R.; von Behren, D.

    2005-10-01

    Boiler tube failures continue to be the leading cause of downtime for steam power plants. That should not be a surprise; a typical steam generator has miles of tubes that operate at high temperatures and pressures. Are your experiences comparable to those of your peers? Could you learn something from tube-leak benchmarking data that could improve the operation of your plant? The Electric Utility Cost Group (EUCG) recently completed a boiler-tube failure study that is available only to its members. But Power magazine has been given exclusive access to some of the results, published in this article. 4 figs.

  5. Benchmarking East Tennessee's economic capacity

    SciTech Connect

    1995-04-20

    This presentation comprises viewgraphs delineating major economic factors operating in 15 counties in East Tennessee. The purpose of the information presented is to provide a benchmark analysis of economic conditions for use in guiding economic growth in the region. The emphasis of the presentation is economic infrastructure, which is classified into six categories: human resources, technology, financial resources, physical infrastructure, quality of life, and tax and regulation. Data for analysis of key indicators in each of the categories are presented. Preliminary analyses, in the form of strengths and weaknesses and comparison to reference groups, are given.

  6. Guidebook for Using the Tool BEST Cement: Benchmarking and Energy Savings Tool for the Cement Industry

    SciTech Connect

    Galitsky, Christina; Price, Lynn; Zhou, Nan; Fuqiu , Zhou; Huawen, Xiong; Xuemin, Zeng; Lan, Wang

    2008-07-30

    ) the amount of production of cement by type and grade (in tonnes per year); (6) the electricity generated onsite; (7) the energy used by fuel type; and the amount (in RMB per year) spent on energy. The tool offers the user the opportunity to do a quick assessment or a more detailed assessment--this choice will determine the level of detail of the energy input. The detailed assessment will require energy data for each stage of production, while the quick assessment will require only the total energy used at the entire facility (see Section 6 for more details on quick versus detailed assessments). The benchmarking tool provides two benchmarks--one for Chinese best practices and one for international best practices. Section 2 describes the differences between the two and how each benchmark was calculated. The tool also asks the user for a target input, allowing the user to set goals for the facility.

  7. General Graded Response Model.

    ERIC Educational Resources Information Center

    Samejima, Fumiko

    This paper describes the graded response model. The graded response model represents a family of mathematical models that deal with ordered polytomous categories, such as: (1) letter grading; (2) an attitude survey with "strongly disagree, disagree, agree, and strongly agree" choices; (3) partial credit given in accord with an…
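
    The adjacent-difference structure of such a model can be sketched in Python, assuming the common two-parameter logistic form of the cumulative category response functions; the parameter values below are illustrative, not taken from the paper:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Category probabilities under a graded response model.

    theta: latent trait value
    a: discrimination parameter
    thresholds: ordered difficulty parameters b_1 < ... < b_{K-1}
    Returns a list of K probabilities, one per ordered category.
    """
    # Cumulative probability of responding in category k or higher (2PL form).
    def p_star(b):
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    # Probability of exactly category k = difference of adjacent cumulatives.
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Four ordered categories (e.g. strongly disagree ... strongly agree):
probs = grm_category_probs(theta=0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
```

    Because the cumulative curves are ordered, the adjacent differences are guaranteed non-negative and sum to one.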

  8. Conversations about Grading

    ERIC Educational Resources Information Center

    Gullen, Kristine; Gullen, James; Erickson-Guy, Nickolas

    2012-01-01

    Grades often are determined by the unspoken values and beliefs of an autonomous teacher, but technology is making grading practices more transparent to parents, students, and educators. The ability to view the grade books of teachers who are teaching the same course in the same district is increasingly raising questions and challenges to what were…

  9. [Grading of prostate cancer].

    PubMed

    Kristiansen, G; Roth, W; Helpap, B

    2016-07-01

    The current grading of prostate cancer is based on the classification system of the International Society of Urological Pathology (ISUP) following a consensus conference in Chicago in 2014. The foundations are based on the frequently modified grading system of Gleason. This article presents a brief description of the development to the current ISUP grading system.

  10. The Meaning of Grades.

    ERIC Educational Resources Information Center

    Teixeira, Serna E.

    1996-01-01

    Asserts that students see grades as an indicator of effort unconnected to the content of the course while teachers regard grades as a measure of achievement within a discipline. Discusses some of the current controversies and approaches concerning grades and how they relate to school reform. (MJP)

  11. Learning Through Benchmarking: Developing a Relational, Prospective Approach to Benchmarking ICT in Learning and Teaching

    ERIC Educational Resources Information Center

    Ellis, Robert A.; Moore, Roger R.

    2006-01-01

    This study discusses benchmarking the use of information and communication technologies (ICT) in teaching and learning between two universities with different missions: one an Australian campus-based metropolitan university and the other a British distance-education provider. It argues that the differences notwithstanding, it is possible to…

  12. Development of a California commercial building benchmarking database

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2002-05-17

    Building energy benchmarking is a useful starting point for commercial building owners and operators to target energy savings opportunities. There are a number of tools and methods for benchmarking energy use. Benchmarking based on regional data can provide more relevant information for California buildings than national tools such as Energy Star. This paper discusses issues related to benchmarking commercial building energy use and the development of Cal-Arch, a building energy benchmarking database for California. Currently, Cal-Arch uses existing survey data from California's Commercial End Use Survey (CEUS), a largely underutilized wealth of information collected by California's major utilities. DOE's Commercial Building Energy Consumption Survey (CBECS) is used by a similar tool, Arch, and by a number of other benchmarking tools. Future versions of Arch/Cal-Arch will utilize additional data sources, including modeled data and individual buildings, to expand the database.
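
    As a minimal illustration of survey-based benchmarking of this kind (a sketch, not Cal-Arch's actual method), a building's energy use intensity can be ranked against a peer distribution drawn from survey data; all numbers below are made up:

```python
def benchmark_percentile(eui, peer_euis):
    """Percentile rank of a building's energy use intensity (EUI) within a
    peer distribution; a lower EUI means better energy performance.
    Illustrative only -- not the Cal-Arch methodology."""
    below = sum(1 for x in peer_euis if x < eui)
    return 100.0 * below / len(peer_euis)

# Hypothetical peer EUIs in kBtu/ft2-yr from a regional survey:
peers = [45, 52, 60, 61, 70, 75, 80, 88, 95, 110]
rank = benchmark_percentile(72, peers)  # → 50.0 (half of peers use less)
```

    A regional database like Cal-Arch would supply the peer distribution, filtered by building type and climate zone.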

  13. MARS code developments, benchmarking and applications

    SciTech Connect

    Mokhov, N.V.

    2000-03-24

    Recent developments of the MARS Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electron volt up to 100 TeV are described. The physical model of hadron and lepton interactions with nuclei and atoms has undergone substantial improvements. These include a new nuclear cross section library, a model for soft pion production, a cascade-exciton model, a dual parton model, deuteron-nucleus and neutrino-nucleus interaction models, a detailed description of negative hadron and muon absorption, and a unified treatment of muon and charged hadron electromagnetic interactions with matter. New algorithms have been implemented into the code and benchmarked against experimental data. A new graphical user interface has been developed. The code capabilities to simulate cascades and generate a variety of results in complex systems have been enhanced. The MARS system includes links to the MCNP code for neutron and photon transport below 20 MeV, to the ANSYS code for thermal and stress analyses, and to the STRUCT code for multi-turn particle tracking in large synchrotrons and collider rings. Results of recent benchmarking of the MARS code are presented. Examples of non-trivial code applications are given for the Fermilab Booster and Main Injector, for a 1.5 MW target station, and for a muon storage ring.

  14. Parallel Ada benchmarks for the SVMS

    NASA Technical Reports Server (NTRS)

    Collard, Philippe E.

    1990-01-01

    The use of the parallel processing paradigm to design and develop faster and more reliable computers appears to clearly mark the future of information processing. NASA started the development of such an architecture: the Spaceborne VHSIC Multi-processor System (SVMS). Ada will be one of the languages used to program the SVMS. One of the unique characteristics of Ada is that it supports parallel processing at the language level through its tasking constructs. It is important for the SVMS project team to assess how efficiently the SVMS architecture will be implemented, as well as how efficiently the Ada environment will be ported to the SVMS. AUTOCLASS II, a Bayesian classifier written in Common Lisp, was selected as one of the benchmarks for SVMS configurations. The purpose of the R and D effort was to provide the SVMS project team with a version of AUTOCLASS II, written in Ada, that would make use of Ada tasking constructs as much as possible so as to constitute a suitable benchmark. Additionally, a set of programs was developed to measure Ada tasking efficiency on parallel architectures and to determine the critical parameters influencing tasking efficiency. All this was designed to provide the SVMS project team with a set of suitable tools for the development of the SVMS architecture.

  15. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, at present there is no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported, and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
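
    The core overlap test involved (two regions on the same chromosome overlap when each starts before the other ends) can be sketched in Python; this is an illustrative brute-force version, not the paper's SQL-based RegMap algorithm:

```python
def overlapping_regions(set_a, set_b):
    """Report pairs of genomic regions (chrom, start, end) that overlap.
    Two regions overlap when each one starts before the other ends.
    Illustrative brute-force sweep, not the RegMap SQL algorithm."""
    hits = []
    for chrom, a_start, a_end in sorted(set_a):
        for b_chrom, b_start, b_end in sorted(set_b):
            if b_chrom == chrom and b_start < a_end and a_start < b_end:
                hits.append(((chrom, a_start, a_end), (chrom, b_start, b_end)))
    return hits

# Hypothetical binding-site coordinates:
hits = overlapping_regions([("chr1", 100, 200), ("chr1", 300, 400)],
                           [("chr1", 150, 250), ("chr1", 500, 600)])
```

    A database implementation would express the same two inequalities in a SQL WHERE clause and rely on indexes to avoid the quadratic scan.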

  16. Simple mathematical law benchmarks human confrontations

    NASA Astrophysics Data System (ADS)

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.
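
    A hedged sketch of how a simple power law can be benchmarked against escalating event data follows; the progress-curve form tau_n = tau_1 * n**(-b) for successive inter-event times is an assumption chosen for illustration, not necessarily the exact law derived in the paper:

```python
import math

def fit_progress_curve(intervals):
    """Least-squares fit of log(tau_n) = log(tau_1) - b*log(n) to a
    sequence of successive inter-event times tau_1, tau_2, ...
    Returns (tau_1 estimate, escalation exponent b)."""
    xs = [math.log(n) for n in range(1, len(intervals) + 1)]
    ys = [math.log(t) for t in intervals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return math.exp(intercept), -slope

# Synthetic inter-event times that follow tau_n = 10 * n**-0.7 exactly:
tau1, b = fit_progress_curve([10 * n ** -0.7 for n in range(1, 21)])
```

    Because the synthetic data are exactly log-linear, the fit recovers tau_1 = 10 and b = 0.7; real event data would scatter around such a trend.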

  17. Non-Markovianity in Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Ball, Harrison; Stace, Tom M.; Biercuk, Michael J.

    2015-03-01

    Randomized benchmarking is routinely employed to recover information about the fidelity of a quantum operation by exploiting probabilistic twirling errors over an implementation of the Clifford group. Standard assumptions of Markovianity in the underlying noise environment, however, remain at odds with realistic, correlated noise encountered in real systems. We model single-qubit randomized benchmarking experiments as a sequence of ideal Clifford operations interleaved with stochastic dephasing errors, implemented as unitary rotations about σz. Successive error rotations map to a sequence of random variables whose correlations introduce non-Markovian effects emulating realistic colored-noise environments. The Markovian limit is recovered by turning off all correlations, reducing each error to an independent Gaussian-distributed random variable. We examine the dependence of the statistical distribution of fidelity outcomes on these noise correlations, deriving analytic expressions for probability density functions and related statistics for relevant fidelity metrics. This enables us to characterize and bear out the distinction between the Markovian and non-Markovian cases, with implications for interpretation and handling of experimental data.
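
    The error construction described here can be sketched as follows; the AR(1) correlation structure is an assumption chosen for illustration, not necessarily the correlation model used in the paper:

```python
import math
import random

def error_angles(num, sigma, correlation=0.0, seed=1):
    """Dephasing-error rotation angles about sigma_z for a simulated
    randomized-benchmarking sequence. correlation=0.0 gives independent
    Gaussian errors (the Markovian limit); correlation>0 feeds part of
    the previous angle forward -- an AR(1) stand-in for a correlated,
    colored-noise environment."""
    rng = random.Random(seed)
    angles, prev = [], 0.0
    for _ in range(num):
        innovation = rng.gauss(0.0, sigma)
        # AR(1) update: stationary variance stays sigma**2 for |corr| < 1.
        prev = correlation * prev + math.sqrt(1.0 - correlation ** 2) * innovation
        angles.append(prev)
    return angles

# Independent (Markovian) vs. strongly correlated (non-Markovian) errors:
markov = error_angles(200, 0.05, correlation=0.0)
colored = error_angles(200, 0.05, correlation=0.9)
```

    Across many realizations, the accumulated phase of the correlated sequence has a larger variance than the independent one, which is what broadens the distribution of fidelity outcomes in the non-Markovian case.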

  18. Simple mathematical law benchmarks human confrontations.

    PubMed

    Johnson, Neil F; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-12-10

    Many high-profile societal problems involve an individual or group repeatedly attacking another - from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a 'lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds.

  19. Transparency benchmarking on audio watermarks and steganography

    NASA Astrophysics Data System (ADS)

    Kraetzer, Christian; Dittmann, Jana; Lang, Andreas

    2006-02-01

    The evaluation of transparency plays an important role in the context of watermarking and steganography algorithms. This paper introduces a general definition of the term transparency in the context of steganography, digital watermarking and attack based evaluation of digital watermarking algorithms. For this purpose the term transparency is first considered individually for each of the three application fields (steganography, digital watermarking and watermarking algorithm evaluation). From the three results a general definition for the overall context is derived in a second step. The relevance and applicability of the definition given is evaluated in practice using existing audio watermarking and steganography algorithms (which work in time, frequency and wavelet domain) as well as an attack based evaluation suite for audio watermarking benchmarking - StirMark for Audio (SMBA). For this purpose selected attacks from the SMBA suite are modified by adding transparency enhancing measures using a psychoacoustic model. The transparency and robustness of the evaluated audio watermarking algorithms using the original and modified attacks are compared. The results of this paper show that transparency benchmarking will lead to new information regarding the algorithms under observation and their usage. This information can result in concrete recommendations for modification, like the ones resulting from the tests performed here.

  20. Computational Thermochemistry and Benchmarking of Reliable Methods

    SciTech Connect

    Feller, David F.; Dixon, David A.; Dunning, Thom H.; Dupuis, Michel; McClemore, Doug; Peterson, Kirk A.; Xantheas, Sotiris S.; Bernholdt, David E.; Windus, Theresa L.; Chalasinski, Grzegorz; Fosada, Rubicelia; Olguim, Jorge; Dobbs, Kerwin D.; Frurip, Donald; Stevens, Walter J.; Rondan, Nelson; Chase, Jared M.; Nichols, Jeffrey A.

    2006-06-20

    During the first and second years of the Computational Thermochemistry and Benchmarking of Reliable Methods project, we completed several studies using the parallel computing capabilities of the NWChem software and Molecular Science Computing Facility (MSCF), including large-scale density functional theory (DFT), second-order Moeller-Plesset (MP2) perturbation theory, and CCSD(T) calculations. During the third year, we continued to pursue the computational thermodynamic and benchmarking studies outlined in our proposal. With the issues affecting the robustness of the coupled cluster part of NWChem resolved, we pursued studies of the heats-of-formation of compounds containing 5 to 7 first- and/or second-row elements and approximately 10 to 14 hydrogens. The size of these systems, when combined with the large basis sets (cc-pVQZ and aug-cc-pVQZ) that are necessary for extrapolating to the complete basis set limit, creates a formidable computational challenge, for which NWChem on NWMPP1 is well suited.
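
    As an illustration of the basis-set extrapolation step mentioned here, a standard two-point 1/X^3 formula for correlation energies is sketched below; whether this particular formula matches the project's procedure is an assumption, and the energies are invented:

```python
def cbs_two_point(e_small, x_small, e_large, x_large):
    """Two-point 1/X^3 extrapolation of correlation energies to the
    complete basis set (CBS) limit, where X is the cardinal number of the
    correlation-consistent basis (T=3 for cc-pVTZ, Q=4 for cc-pVQZ).
    A common scheme, assumed here for illustration only."""
    a, b = x_small ** 3, x_large ** 3
    return (b * e_large - a * e_small) / (b - a)

# Invented cc-pVTZ / cc-pVQZ correlation energies in hartree:
e_cbs = cbs_two_point(-0.300, 3, -0.320, 4)
```

    The extrapolated energy lies below the largest-basis value, consistent with correlation energies converging monotonically from above.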

  1. EVA Health and Human Performance Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Abercromby, A. F.; Norcross, J.; Jarvis, S. L.

    2016-01-01

    Multiple HRP Risks and Gaps require detailed characterization of human health and performance during exploration extravehicular activity (EVA) tasks; however, a rigorous and comprehensive methodology for characterizing and comparing the health and human performance implications of current and future EVA spacesuit designs does not exist. This study will identify and implement functional tasks and metrics, both objective and subjective, that are relevant to health and human performance, such as metabolic expenditure, suit fit, discomfort, suited postural stability, cognitive performance, and potentially biochemical responses, for humans working inside different EVA suits doing functional tasks under the appropriate simulated reduced gravity environments. This study will provide health and human performance benchmark data for humans working in current EVA suits (EMU, Mark III, and Z2) as well as shirtsleeves, using a standard set of tasks and metrics with quantified reliability. Results and methodologies developed during this test will provide benchmark data against which future EVA suits and different suit configurations (e.g., varied pressure, mass, CG) may be reliably compared in subsequent tests. Results will also inform fitness-for-duty standards as well as design requirements and operations concepts for future EVA suits and other exploration systems.

  2. Simple mathematical law benchmarks human confrontations

    PubMed Central

    Johnson, Neil F.; Medina, Pablo; Zhao, Guannan; Messinger, Daniel S.; Horgan, John; Gill, Paul; Bohorquez, Juan Camilo; Mattson, Whitney; Gangi, Devon; Qi, Hong; Manrique, Pedro; Velasquez, Nicolas; Morgenstern, Ana; Restrepo, Elvira; Johnson, Nicholas; Spagat, Michael; Zarama, Roberto

    2013-01-01

    Many high-profile societal problems involve an individual or group repeatedly attacking another – from child-parent disputes, sexual violence against women, civil unrest, violent conflicts and acts of terror, to current cyber-attacks on national infrastructure and ultrafast cyber-trades attacking stockholders. There is an urgent need to quantify the likely severity and timing of such future acts, shed light on likely perpetrators, and identify intervention strategies. Here we present a combined analysis of multiple datasets across all these domains which account for >100,000 events, and show that a simple mathematical law can benchmark them all. We derive this benchmark and interpret it, using a minimal mechanistic model grounded by state-of-the-art fieldwork. Our findings provide quantitative predictions concerning future attacks; a tool to help detect common perpetrators and abnormal behaviors; insight into the trajectory of a ‘lone wolf'; identification of a critical threshold for spreading a message or idea among perpetrators; an intervention strategy to erode the most lethal clusters; and more broadly, a quantitative starting point for cross-disciplinary theorizing about human aggression at the individual and group level, in both real and online worlds. PMID:24322528

  3. Improving Mass Balance Modeling of Benchmark Glaciers

    NASA Astrophysics Data System (ADS)

    van Beusekom, A. E.; March, R. S.; O'Neel, S.

    2009-12-01

    The USGS monitors long-term glacier mass balance at three benchmark glaciers in different climate regimes. The coastal and continental glaciers are represented by Wolverine and Gulkana Glaciers in Alaska, respectively. Field measurements began in 1966 and continue. We have reanalyzed the published balance time series with more modern methods and recomputed reference surface and conventional balances. Addition of the most recent data shows a continuing trend of mass loss. We compare the updated balances to the previously accepted balances and discuss differences. Not all balance quantities can be determined from the field measurements. For surface processes, we model missing information with an improved degree-day model. Degree-day models predict ablation from the sum of daily mean temperatures and an empirical degree-day factor. We modernize the traditional degree-day model as well as derive new degree-day factors in an effort to closer match the balance time series and thus better predict the future state of the benchmark glaciers. For subsurface processes, we model the refreezing of meltwater for internal accumulation. We examine the sensitivity of the balance time series to the subsurface process of internal accumulation, with the goal of determining the best way to include internal accumulation into balance estimates.
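
    The degree-day relation described here can be sketched directly: ablation is the product of an empirical factor and the sum of positive daily mean temperatures. The temperatures and degree-day factor below are illustrative, not USGS values:

```python
def positive_degree_day_ablation(daily_mean_temps_c, degree_day_factor):
    """Classic positive degree-day model: ablation is proportional to the
    sum of daily mean temperatures above 0 degrees C (the positive
    degree-day sum). degree_day_factor is an empirical constant in
    mm water equivalent per degree-day."""
    pdd = sum(t for t in daily_mean_temps_c if t > 0)
    return degree_day_factor * pdd

# One illustrative week of temperatures (deg C) and a made-up DDF for ice:
melt = positive_degree_day_ablation([-2.0, 1.0, 3.5, 0.0, 4.5, 2.0, -1.0], 6.0)
# → 66.0 mm water equivalent
```

    The reanalysis described above amounts to refining the degree-day factors (and the temperature inputs) so that modeled ablation closes the gap with the measured balance time series.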

  4. Multisensor benchmark data for riot control

    NASA Astrophysics Data System (ADS)

    Jäger, Uwe; Höpken, Marc; Dürr, Bernhard; Metzler, Jürgen; Willersinn, Dieter

    2008-10-01

    Quick and precise response is essential for riot squads when coping with escalating violence in crowds. Often it is just a single person, known as the leader of the gang, who instigates other people and thus is responsible for excesses. Putting this single person out of action in most cases leads to a de-escalating situation. Fostering de-escalation is one of the main tasks of crowd and riot control. To do so, extensive situation awareness is mandatory for the squads and can be promoted by technical means such as video surveillance using sensor networks. To develop software tools for situation awareness, appropriate input data of well-known quality are needed. Furthermore, the developer must be able to measure algorithm performance and ongoing improvements. Last but not least, after algorithm development has finished and marketing aspects emerge, compliance with specifications must be proved. This paper describes a multisensor benchmark which serves exactly this purpose. We first define the underlying algorithm task. Then we explain details about data acquisition and sensor setup, and finally we give some insight into quality measures of multisensor data. Currently, the multisensor benchmark described in this paper is applied to the development of basic algorithms for situational awareness, e.g. tracking of individuals in a crowd.

  5. Highlights of 10th plasma chemistry meeting

    NASA Technical Reports Server (NTRS)

    Kitamura, K.; Hashimoto, H.; Hozumi, K.

    1981-01-01

    The chemical structure is given of a film formed by plasma polymerization from pyridine monomers. The film has a hydrophilic chemical structure, its molecular weight is 900, and its molecular formula is C55H50N10O3. The electrical characteristics of a plasma polymerized film are described. The film has good insulating properties and was successfully applied as a video disc coating. Its etching resistance makes it possible to use the film as a resist in etching. The characteristics of a plasma polymer formed from monomers containing tetramethyltin are discussed. The polymer is in film form, displays good adhesiveness, is similar to UV film UV 35 in light absorption, and is highly insulating.

  6. 10th Anniversary P.S.

    ScienceCinema

    None

    2016-07-12

    John Adams speaks about the prehistory of the PS, with a slide presentation. Director-General B. Gregory takes the floor. The organizers, led by "Prof. Ocktette"(?), present a very humorous sketch (e.g. on the existence of the quark, etc.).

  7. Trafficking in Persons Report 10th Edition

    DTIC Science & Technology

    2010-06-01

    prohibits most forms of human trafficking through its 2007 Organic Law on the Right of Women to a Violence-Free Life. Article 56 of this law...prosecuted, and more instances of this human rights abuse have been prevented. Countries that once denied the existence of human trafficking now work

  8. 10th International Meeting on Cholinesterases

    DTIC Science & Technology

    2009-10-01

    the cholinesterase field. This trend hopefully will persist at the next meeting, the 11th Meeting on Cholinesterases, which will be held in...BY PYRIDOSTIGMINE BROMIDE OF MARMOSET HEMI-DIAPHRAGM FUNCTION AND ACETYLCHOLINESTERASE ACTIVITY AFTER SOMAN EXPOSURE Yang Gao (Rochester, USA

  9. Part C Updates. 10th Edition

    ERIC Educational Resources Information Center

    Goode, Sue; Lazara, Alex; Danaher, Joan

    2008-01-01

    "Part C Updates" is a compilation of information on various aspects of the Early Intervention Program for Infants and Toddlers with Disabilities (Part C) of the Individuals with Disabilities Education Act (IDEA). This is the tenth volume in a series of compilations, which included two editions of Part H Updates, the former name of the…

  10. 10th Anniversary P.S.

    SciTech Connect

    2005-10-28

    John Adams speaks about the prehistory of the PS, with a slide presentation. Director-General B. Gregory takes the floor. The organizers, led by "Prof. Ocktette"(?), present a very humorous sketch (e.g. on the existence of the quark, etc.).

  11. Children's Budget 2016. 10th Anniversary Edition

    ERIC Educational Resources Information Center

    Monsif, John, Ed.; Gluck, Elliott, Ed.

    2016-01-01

    Federal spending dedicated to children represents just 7.83 percent of the federal budget in fiscal year 2016, and total spending on children's programs has decreased by five percent in the last two years, according to "Children's Budget 2016." The federal government makes more than 200 distinct investments in children. These include…

  12. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  13. Criticality safety benchmark experiments derived from ANL ZPR assemblies.

    SciTech Connect

    Schaefer, R. W.; Lell, R. M.; McKnight, R. D.

    2003-09-01

    Numerous criticality safety benchmarks have been, and continue to be, developed from experiments performed on Argonne National Laboratory's plate-type fast critical assemblies. The nature and scope of assemblies suitable for deriving these benchmarks are discussed. The benchmark derivation process, including full treatment of all significant uncertainties, is explained. Calculational results are presented that support the small uncertainty assigned to the key derivation step in which complex geometric detail is removed.

  14. Assessment: How Do I "Grade" without Grades?

    ERIC Educational Resources Information Center

    Glazer, Susan Mandel

    1993-01-01

    Examines the A through F system of letter grades used in most schools, suggesting reasons why this framework is inadequate. Proposes a new assessment model which has children demonstrate that they can accomplish a given task on their own. (MDM)

  15. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  16. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-01

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  17. Performance Evaluation of Supercomputers using HPCC and IMB Benchmarks

    NASA Technical Reports Server (NTRS)

    Saini, Subhash; Ciotti, Robert; Gunney, Brian T. N.; Spelce, Thomas E.; Koniges, Alice; Dossa, Don; Adamidis, Panagiotis; Rabenseifner, Rolf; Tiyyagura, Sunil R.; Mueller, Matthias; Fatoohi, Rod

    2006-01-01

    The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of processor, memory subsystem and interconnect fabric of five leading supercomputers - SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8. These five systems use five different networks (SGI NUMALINK4, Cray network, Myrinet, InfiniBand, and NEC IXS). The complete set of HPCC benchmarks is run on each of these systems. Additionally, we present Intel MPI Benchmarks (IMB) results to study the performance of 11 MPI communication functions on these systems.

  18. A Field-Based Aquatic Life Benchmark for Conductivity in ...

    EPA Pesticide Factsheets

    EPA announced the availability of the final report, A Field-Based Aquatic Life Benchmark for Conductivity in Central Appalachian Streams. This report describes a method to characterize the relationship between the extirpation (the effective extinction) of invertebrate genera and salinity (measured as conductivity) and from that relationship derives a freshwater aquatic life benchmark. This benchmark of 300 µS/cm may be applied to waters in Appalachian streams that are dominated by calcium and magnesium salts of sulfate and bicarbonate at circum-neutral to mildly alkaline pH. This report provides scientific evidence for a conductivity benchmark in a specific region rather than for the entire United States.

  19. Aiming High and Falling Short: California's Eighth-Grade Algebra-for-All Effort

    ERIC Educational Resources Information Center

    Domina, Thurston; McEachin, Andrew; Penner, Andrew; Penner, Emily

    2015-01-01

    The United States is in the midst of an effort to intensify middle school mathematics curricula by enrolling more 8th graders in Algebra. California is at the forefront of this effort, and in 2008, the state moved to make Algebra the accountability benchmark test for 8th-grade mathematics. This article takes advantage of this unevenly implemented…

  20. Student Performance on the 2013 Assessment Program in Primary Reading (Kindergarten to Grade 2). Memorandum

    ERIC Educational Resources Information Center

    Sanderson, Geoffrey T.

    2013-01-01

    This evaluation report provides details for school year 2013 on the percentage of primary students who met or exceeded grade-level reading benchmarks on the Montgomery County (Maryland) Public Schools' (MCPS) Assessment Program in Primary Reading (AP-PR). The percentage remained above 90% for kindergarten students and showed a small decrease for…

  1. The Development of CBM Vocabulary Measures: Grade 6. Technical Report # 1213

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  2. The Development of CBM Vocabulary Measures: Grade 5. Technical Report #1212

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  3. The Development of CBM Vocabulary Measures: Grade 7. Technical Report #1214

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  4. The Development of CBM Vocabulary Measures: Grade 3. Technical Report #1210

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  5. The Development of CBM Vocabulary Measures: Grade 4. Technical Report #1211

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  6. The Development of CBM Vocabulary Measures: Grade 2. Technical Report #1209

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  7. The Development of CBM Vocabulary Measures: Grade 8. Technical Report #1215

    ERIC Educational Resources Information Center

    Alonzo, Julie; Anderson, Daniel; Park, Bitnara Jasmine; Tindal, Gerald

    2012-01-01

    In this technical report, we describe the development and piloting of a series of vocabulary assessments intended for use with students in grades two through eight. These measures, available as part of easyCBM[TM], an online progress monitoring and benchmark/screening assessment system, were developed in 2010 and administered to approximately 1200…

  8. Why 12th Grade Must Be Redesigned Now--and How

    ERIC Educational Resources Information Center

    Vargas, Joel

    2015-01-01

    This first report in a new series by Jobs For the Future (JFF) provides the rationale for restructuring 12th grade and tying it more tightly to the first year of college through new high school and college partnerships. The paper proposes a new common benchmark of readiness that high schools and colleges can work together to meet to ensure…

  9. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines fall into two levels. Level-one engines are efficient on small to medium-sized data collections but show weaknesses when used for collections of 100 MB or larger. Level-two search engines are recommended for data collections up to and beyond 100 MB.
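
    A benchmarking toolkit of the kind described can be sketched in miniature: time how long an engine takes to index a corpus and to answer a batch of queries. The toy inverted-index engine, corpus sizes, and statistics reported below are illustrative stand-ins, not the engines or toolkit from the study.

```python
import random
import string
import time

def build_index(docs):
    """Build a simple inverted index: term -> set of document ids."""
    index = {}
    for doc_id, text in enumerate(docs):
        for term in text.split():
            index.setdefault(term, set()).add(doc_id)
    return index

def search(index, term):
    """Return the set of document ids containing the term."""
    return index.get(term, set())

def benchmark(docs, queries):
    """Time indexing and querying, returning simple statistics."""
    t0 = time.perf_counter()
    index = build_index(docs)
    index_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for q in queries:
        search(index, q)
    search_time = time.perf_counter() - t0
    return {"index_s": index_time, "search_s": search_time, "terms": len(index)}

# Toy corpus of random five-letter "words".
random.seed(0)
vocab = ["".join(random.choices(string.ascii_lowercase, k=5)) for _ in range(1000)]
docs = [" ".join(random.choices(vocab, k=50)) for _ in range(2000)]
queries = random.choices(vocab, k=10000)

stats = benchmark(docs, queries)
print(stats)
```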

  10. Benchmark cyclic plastic notch strain measurements

    NASA Technical Reports Server (NTRS)

    Sharpe, W. N., Jr.; Ward, M.

    1983-01-01

    Plastic strains at the roots of notched specimens of Inconel 718 subjected to tension-compression cycling at 650 C are reported. These strains were measured with a laser-based technique over a gage length of 0.1 mm and are intended to serve as 'benchmark' data for further development of experimental, analytical, and computational approaches. The specimens were 250 mm by 2.5 mm in the test section with double notches of 4.9 mm radius subjected to axial loading sufficient to cause yielding at the notch root on the tensile portion of the first cycle. The tests were run for 1000 cycles at 10 cpm or until cracks initiated at the notch root. The experimental techniques are described, and then representative data for the various load spectra are presented. All the data for each cycle of every test are available on floppy disks from NASA.

  11. Benchmarking finite- β ITG gyrokinetic simulations

    NASA Astrophysics Data System (ADS)

    Nevins, W. M.; Dimits, A. M.; Candy, J.; Holland, C.; Howard, N.

    2016-10-01

    We report the results of an electromagnetic gyrokinetic-simulation benchmarking study based on a well-diagnosed ion-temperature-gradient (ITG)-turbulence dominated experimental plasma. We compare the 4x3 matrix of transport/transfer quantities for each plasma species; namely the (a) particle flux, Γa, (b) momentum flux, Πa, (c) energy flux, Qa, and (d) anomalous heat exchange, Sa, with each transport coefficient broken down into: (1) electrostatic (δφ) (2) transverse electromagnetic (δA∥) , and (3) compressional electromagnetic, (δB∥) contributions. We compare realization-independent quantities (correlation functions, spectral densities, etc.), which characterize the fluctuating fields from various gyrokinetic simulation codes. Prepared for US DOE by LLNL under Contract DE-AC52-07NA27344 and by GA under Contract DE-FG03-95ER54309. This work was supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.

  12. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits use the field-effect tunability unique to semiconductors to allow complete qubit control with gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  13. Correlation of Parental Involvement on Students' Benchmarks among Urban Fourth-Grade Elementary Students

    ERIC Educational Resources Information Center

    Rutledge, Erika L.

    2013-01-01

    The No Child Left Behind Act of 2001 raised awareness of the educational quality gap between European Americans and minorities, especially in reading. This gap underscores the importance that students master basic reading skills in order to achieve higher levels of reading proficiency. The purpose of this quantitative quasi-experimental study was…

  14. A Simple Alternative to Grading

    ERIC Educational Resources Information Center

    Potts, Glenda

    2010-01-01

    In this article, the author investigates whether an alternative grading system (contract grading) would yield the same final grades as traditional grading (letter grading), and whether or not it would be accepted by students. The author states that this study demonstrated that contract grading was widely, and for the most part, enthusiastically…

  15. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. The book presents…

  16. Benchmarking analogue models of brittle thrust wedges

    NASA Astrophysics Data System (ADS)

    Schreurs, Guido; Buiter, Susanne J. H.; Boutelier, Jennifer; Burberry, Caroline; Callot, Jean-Paul; Cavozzi, Cristian; Cerca, Mariano; Chen, Jian-Hong; Cristallini, Ernesto; Cruden, Alexander R.; Cruz, Leonardo; Daniel, Jean-Marc; Da Poian, Gabriela; Garcia, Victor H.; Gomes, Caroline J. S.; Grall, Céline; Guillot, Yannick; Guzmán, Cecilia; Hidayah, Triyani Nur; Hilley, George; Klinkmüller, Matthias; Koyi, Hemin A.; Lu, Chia-Yu; Maillot, Bertrand; Meriaux, Catherine; Nilfouroushan, Faramarz; Pan, Chang-Chih; Pillot, Daniel; Portillo, Rodrigo; Rosenau, Matthias; Schellart, Wouter P.; Schlische, Roy W.; Take, Andy; Vendeville, Bruno; Vergnaud, Marine; Vettori, Matteo; Wang, Shih-Hsien; Withjack, Martha O.; Yagupsky, Daniel; Yamada, Yasuhiro

    2016-11-01

    We performed a quantitative comparison of brittle thrust wedge experiments to evaluate the variability among analogue models and to appraise the reproducibility and limits of model interpretation. Fifteen analogue modeling laboratories participated in this benchmark initiative. Each laboratory received a shipment of the same type of quartz and corundum sand and all laboratories adhered to a stringent model building protocol and used the same type of foil to cover base and sidewalls of the sandbox. Sieve structure, sifting height, filling rate, and details on off-scraping of excess sand followed prescribed procedures. Our analogue benchmark shows that even for simple plane-strain experiments with prescribed stringent model construction techniques, quantitative model results show variability, most notably for surface slope, thrust spacing and number of forward and backthrusts. One of the sources of the variability in model results is related to slight variations in how sand is deposited in the sandbox. Small changes in sifting height, sifting rate, and scraping will result in slightly heterogeneous material bulk densities, which will affect the mechanical properties of the sand, and will result in lateral and vertical differences in peak and boundary friction angles, as well as cohesion values once the model is constructed. Initial variations in basal friction are inferred to play the most important role in causing model variability. Our comparison shows that the human factor plays a decisive role, and even when one modeler repeats the same experiment, quantitative model results still show variability. Our observations highlight the limits of up-scaling quantitative analogue model results to nature or for making comparisons with numerical models. The frictional behavior of sand is highly sensitive to small variations in material state or experimental set-up, and hence, it will remain difficult to scale quantitative results such as number of thrusts, thrust spacing

  17. Benchmarking homogenization algorithms for monthly data

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M. J.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2012-01-01

    The COST (European Cooperation in Science and Technology) Action ES0601, Advances in Homogenization Methods of Climate Series: An Integrated Approach (HOME), has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random independent break-type inhomogeneities with normally distributed breakpoint sizes were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study. After the deadline at which details of the imposed inhomogeneities were revealed, 22 additional solutions were submitted. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
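
    Two of the performance metrics mentioned, the centered root-mean-square error and the error in linear trend estimates, can be sketched as follows. The synthetic series and the size of the inserted break are invented for illustration and are not taken from the HOME benchmark itself.

```python
import numpy as np

def centered_rmse(estimate, truth):
    """Centered RMSE: error after removing each series' mean,
    so a constant offset does not count against the algorithm."""
    e = estimate - estimate.mean()
    t = truth - truth.mean()
    return float(np.sqrt(np.mean((e - t) ** 2)))

def trend_error(estimate, truth):
    """Difference in fitted linear trend (slope per time step)
    between the homogenized estimate and the true series."""
    x = np.arange(len(truth))
    slope_e = np.polyfit(x, estimate, 1)[0]
    slope_t = np.polyfit(x, truth, 1)[0]
    return float(slope_e - slope_t)

# Synthetic 20-year monthly series: a true trend plus noise, and an
# estimate with a 0.5-degree break the algorithm failed to remove.
rng = np.random.default_rng(42)
truth = 0.002 * np.arange(240) + rng.normal(0, 0.3, 240)
estimate = truth.copy()
estimate[120:] += 0.5   # uncorrected inhomogeneity

print(centered_rmse(estimate, truth))
print(trend_error(estimate, truth))
```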

  18. Benchmarking and testing the "Sea Level Equation"

    NASA Astrophysics Data System (ADS)

    Spada, G.; Barletta, V. R.; Klemann, V.; van der Wal, W.; James, T. S.; Simon, K.; Riva, R. E. M.; Martinec, Z.; Gasperini, P.; Lund, B.; Wolf, D.; Vermeersen, L. L. A.; King, M. A.

    2012-04-01

    The study of the process of Glacial Isostatic Adjustment (GIA) and of the consequent sea level variations is gaining an increasingly important role within the geophysical community. Understanding the response of the Earth to the waxing and waning ice sheets is crucial in various contexts, ranging from the interpretation of modern satellite geodetic measurements to the projections of future sea level trends in response to climate change. All the processes accompanying GIA can be described solving the so-called Sea Level Equation (SLE), an integral equation that accounts for the interactions between the ice sheets, the solid Earth, and the oceans. Modern approaches to the SLE are based on various techniques that range from purely analytical formulations to fully numerical methods. Despite various teams independently investigating GIA, we do not have a suitably large set of agreed numerical results through which the methods may be validated. Following the example of the mantle convection community and our recent successful Benchmark for Post Glacial Rebound codes (Spada et al., 2011, doi: 10.1111/j.1365-246X.2011.04952.x), here we present the results of a benchmark study of independently developed codes designed to solve the SLE. This study has taken place within a collaboration facilitated through the European Cooperation in Science and Technology (COST) Action ES0701. The tests involve predictions of past and current sea level variations, and 3D deformations of the Earth surface. In spite of the significant differences in the numerical methods employed, the test computations performed so far show a satisfactory agreement between the results provided by the participants. The differences found, which can be often attributed to the different numerical algorithms employed within the community, help to constrain the intrinsic errors in model predictions. These are of fundamental importance for a correct interpretation of the geodetic variations observed today, and

  19. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    NASA Astrophysics Data System (ADS)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from wartime Europe. What remained was to convert the wartime machinery to peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components of our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has "flattened" the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  20. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    SciTech Connect

    Bess, John D.; Montierth, Leland; Köberl, Oliver; Snoj, Luka

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  1. 47 CFR 69.108 - Transport rate benchmark.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    47 Telecommunication 3 (2010-10-01): Transport rate benchmark, Section 69.108 ... Computation of Charges § 69.108 Transport rate benchmark. (a) For transport charges computed in accordance... interoffice transmission using the telephone company's DS1 special access rates. (b) Initial transport...

  2. Benchmarking with the BLASST Sessional Staff Standards Framework

    ERIC Educational Resources Information Center

    Luzia, Karina; Harvey, Marina; Parker, Nicola; McCormack, Coralie; Brown, Natalie R.

    2013-01-01

    Benchmarking as a type of knowledge-sharing around good practice within and between institutions is increasingly common in the higher education sector. More recently, benchmarking as a process that can contribute to quality enhancement has been deployed across numerous institutions with a view to systematising frameworks to assure and enhance the…

  3. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  4. What Are the ACT College Readiness Benchmarks? Information Brief

    ERIC Educational Resources Information Center

    ACT, Inc., 2013

    2013-01-01

    The ACT College Readiness Benchmarks are the minimum ACT® college readiness assessment scores required for students to have a high probability of success in credit-bearing college courses--English Composition, social sciences courses, College Algebra, or Biology. This report identifies the College Readiness Benchmarks on the ACT Compass scale…

  5. Presidential Address 1997--Benchmarks for the Next Millennium.

    ERIC Educational Resources Information Center

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  6. Benchmark Evaluation of HTR-PROTEUS Pebble Bed Experimental Program

    DOE PAGES

    Bess, John D.; Montierth, Leland; Köberl, Oliver; ...

    2014-10-09

    Benchmark models were developed to evaluate 11 critical core configurations of the HTR-PROTEUS pebble bed experimental program. Various additional reactor physics measurements were performed as part of this program; currently only a total of 37 absorber rod worth measurements have been evaluated as acceptable benchmark experiments for Cores 4, 9, and 10. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the ²³⁵U enrichment of the fuel, impurities in the moderator pebbles, and the density and impurity content of the radial reflector. Calculations of keff with MCNP5 and ENDF/B-VII.0 neutron nuclear data are greater than the benchmark values but within 1% and also within the 3σ uncertainty, except for Core 4, which is the only randomly packed pebble configuration. Repeated calculations of keff with MCNP6.1 and ENDF/B-VII.1 are lower than the benchmark values and within 1% (~3σ) except for Cores 5 and 9, which calculate lower than the benchmark eigenvalues within 4σ. The primary difference between the two nuclear data libraries is the adjustment of the absorption cross section of graphite. Simulations of the absorber rod worth measurements are within 3σ of the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.

  7. Constructing Benchmarks for Monitoring Purposes: Evidence from South Africa

    ERIC Educational Resources Information Center

    Scherman, Vanessa; Howie, Sarah J.; Bosker, Roel J.

    2011-01-01

    In information-rich environments, schools are often presented with a myriad of data from which decisions need to be made. The use of the information on a classroom level may be facilitated if performance could be described in terms of levels of proficiency or benchmarks. The aim of this article is to explore benchmarks using data from a monitoring…

  8. Teaching Benchmark Strategy for Fifth-Graders in Taiwan

    ERIC Educational Resources Information Center

    Yang, Der-Ching; Lai, M. L.

    2013-01-01

    The key purpose of this study was to examine how the benchmark strategy for comparing fractions was taught to fifth-graders in Taiwan. Twenty-six fifth graders from a public elementary school in south Taiwan were selected to join this study. Results of this case study showed that students made substantial progress in using the benchmark strategy when comparing fractions…
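
    The benchmark strategy itself is simple to state: to compare two fractions, first compare each against a familiar reference value such as 1/2; only if both fall on the same side is a direct comparison needed. A minimal sketch, with the function name and fallback behavior chosen for illustration:

```python
from fractions import Fraction

def compare_with_benchmark(a, b, benchmark=Fraction(1, 2)):
    """Compare fractions a and b via a benchmark value.

    If one fraction is below the benchmark and the other is at or above
    it, the answer follows immediately, with no common denominator needed.
    Returns -1 if a < b, 1 if a > b, 0 if equal.
    """
    if a < benchmark <= b:
        return -1   # a is below the benchmark, b is not: a < b
    if b < benchmark <= a:
        return 1    # b is below the benchmark, a is not: a > b
    # Benchmark is inconclusive; fall back to direct comparison.
    return (a > b) - (a < b)

# 3/8 is less than 1/2 and 5/9 is more than 1/2, so 3/8 < 5/9.
print(compare_with_benchmark(Fraction(3, 8), Fraction(5, 9)))  # -1
```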

  9. Benchmark problems for numerical implementations of phase field models

    SciTech Connect

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
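
    A miniature version of the first kind of benchmark problem, spinodal decomposition, can be sketched with an explicit finite-difference Cahn-Hilliard solver in one dimension. The grid size, time step, and parameter values below are arbitrary illustrative choices, not the CHiMaD/NIST problem specifications.

```python
import numpy as np

def laplacian(f, dx=1.0):
    """Periodic 1-D finite-difference Laplacian."""
    return (np.roll(f, 1) + np.roll(f, -1) - 2.0 * f) / dx**2

def cahn_hilliard_1d(n=128, steps=5000, dt=0.01, kappa=1.0, mobility=1.0, seed=0):
    """Explicit Euler time stepping of the 1-D Cahn-Hilliard equation
        dc/dt = M * Lap(c**3 - c - kappa * Lap(c)),
    starting from small random noise about c = 0 (the spinodal regime)."""
    rng = np.random.default_rng(seed)
    c = 0.01 * rng.standard_normal(n)
    for _ in range(steps):
        mu = c**3 - c - kappa * laplacian(c)   # chemical potential
        c = c + dt * mobility * laplacian(mu)
    return c

c = cahn_hilliard_1d()
# After coarsening, the field separates toward the two minima near c = +1 and c = -1,
# while the spatial mean stays conserved.
print(c.min(), c.max(), c.mean())
```

    A real benchmark problem would additionally prescribe the initial condition, domain, and quantities to report (e.g., free energy versus time) so that different codes can be compared quantitatively.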

  10. Using benchmarking data to determine vascular access device selection.

    PubMed

    Galloway, Margy

    2002-01-01

    Benchmarking data has validated that patients with planned vascular access device (VAD) placement have fewer device placements, less difficulty with device insertions, fewer venipunctures, earlier assessment for placement of central VADs, and shorter hospital stays. This article will discuss VAD program planning, early assessment for VAD selection, and benchmarking of program data used to achieve positive infusion-related outcomes.

  11. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    DTIC Science & Technology

    2016-02-01

    ARL-TR-7585 ● FEB 2016 ● US Army Research Laboratory. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite, by Jamie K Infantolino and Mikayla Malley, Computational and Information Sciences…

  12. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.

  13. Writing Better Goals and Short-Term Objectives or Benchmarks.

    ERIC Educational Resources Information Center

    Lignugaris/Kraft, Benjamin; Marchand-Martella, Nancy; Martella, Ronald C.

    2001-01-01

    This article provides strategies for writing precise goals and short-term objectives or benchmarks as part of individualized education programs (IEPs). Guidelines and examples address: definitions, reasons for clarity and precision, individual parts of goals and objectives, inclusion of time factors in objectives and benchmarks, number of…

  14. The Snowmass points and slopes: Benchmarks for SUSY searches

    SciTech Connect

    M. Battaglia et al.

    2002-03-04

    The "Snowmass Points and Slopes" (SPS) are a set of benchmark points and parameter lines in the MSSM parameter space corresponding to different scenarios in the search for Supersymmetry at present and future experiments. This set of benchmarks was agreed upon at the 2001 "Snowmass Workshop on the Future of Particle Physics" as a consensus based on different existing proposals.

  15. Nomenclatural Benchmarking: The roles of digital typification and telemicroscopy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The process of nomenclatural benchmarking is the examination of type specimens of all available names to ascertain which currently accepted species the specimen bearing the name falls within. We propose a strategy for addressing four challenges for nomenclatural benchmarking. First, there is the mat...

  16. Higher Education Ranking and Leagues Tables: Lessons Learned from Benchmarking

    ERIC Educational Resources Information Center

    Proulx, Roland

    2007-01-01

    The paper intends to contribute to the debate on ranking and league tables by adopting a critical approach to ranking methodologies from the point of view of a university benchmarking exercise. The absence of a strict benchmarking exercise in the ranking process has been, in the opinion of the author, one of the major problems encountered in the…

  17. The Geography Benchmark: A View from the USA.

    ERIC Educational Resources Information Center

    Fournier, Eric J.

    2000-01-01

    Focuses on the benchmark statements of the Quality Assurance Agency for Higher Education in the United Kingdom. Addresses the document "Geography for Life: The National Geography Standards" created within the United States. Believes that the benchmark statement is useful for geographers within the United States. (CMK)

  18. Benchmarks for Psychotherapy Efficacy in Adult Major Depression

    ERIC Educational Resources Information Center

    Minami, Takuya; Wampold, Bruce E.; Serlin, Ronald C.; Kircher, John C.; Brown, George S.

    2007-01-01

    This study estimates pretreatment-posttreatment effect size benchmarks for the treatment of major depression in adults that may be useful in evaluating psychotherapy effectiveness in clinical practice. Treatment efficacy benchmarks for major depression were derived for 3 different types of outcome measures: the Hamilton Rating Scale for Depression…

  19. A new numerical benchmark of a freshwater lens

    NASA Astrophysics Data System (ADS)

    Stoeckl, L.; Walther, M.; Graf, T.

    2016-04-01

    A numerical benchmark for 2-D variable-density flow and solute transport in a freshwater lens is presented. The benchmark is based on results of laboratory experiments conducted by Stoeckl and Houben (2012) using a sand tank on the meter scale. This benchmark describes the formation and degradation of a freshwater lens over time as it can be found under real-world islands. An error analysis gave the appropriate spatial and temporal discretization of 1 mm and 8.64 s, respectively. The calibrated parameter set was obtained using the parameter estimation tool PEST. Comparing density-coupled and density-uncoupled results showed that the freshwater-saltwater interface position is strongly dependent on density differences. A benchmark that adequately represents saltwater intrusion and that includes realistic features of coastal aquifers or freshwater lenses had been lacking. This new benchmark was thus developed and is demonstrated to be suitable to test variable-density groundwater models applied to saltwater intrusion investigations.
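The strong density dependence of the interface position can be illustrated with the classical Ghyben-Herzberg hydrostatic estimate (a back-of-the-envelope relation, not the coupled variable-density model in the benchmark; the densities below are assumed typical values):

```python
def interface_depth(h_watertable_m, rho_f=1000.0, rho_s=1025.0):
    """Ghyben-Herzberg depth (m below sea level) of the
    freshwater-saltwater interface under a water-table mound of
    height h: z = rho_f / (rho_s - rho_f) * h, roughly 40*h here."""
    return rho_f / (rho_s - rho_f) * h_watertable_m

# a 0.5 m water-table mound implies an interface about 20 m deep:
print(interface_depth(0.5))  # 20.0
```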

  20. BENCHMARKING ORTEC ISOTOPIC MEASUREMENTS AND CALCULATIONS

    SciTech Connect

    Dewberry, R.; Sigg, R.; Casella, V.; Bhatt, N.

    2008-09-29

    This report describes a compilation of benchmark tests conducted to probe and demonstrate the extensive utility of the Ortec ISOTOPIC γ-ray analysis computer program. The ISOTOPIC program performs analyses of γ-ray spectra applied to specific acquisition configurations in order to apply finite-geometry correction factors and sample-matrix-container photon absorption correction factors. The analysis program provides an extensive set of preset acquisition configurations to which the user can add relevant parameters in order to build the geometry and absorption correction factors that the program determines from calculus and from nuclear γ-ray absorption and scatter data. The Analytical Development Section field nuclear measurement group of the Savannah River National Laboratory uses the Ortec ISOTOPIC analysis program extensively for analyses of solid waste and process holdup applied to passive γ-ray acquisitions. Frequently the results of these γ-ray acquisitions and analyses are used to determine compliance with facility criticality safety guidelines. Another use of the results is to designate 55-gallon-drum solid waste as qualified TRU waste or as low-level waste. Other examples of the application of the ISOTOPIC analysis technique to passive γ-ray acquisitions include analyses of standard waste box items and unique solid waste configurations. In many passive γ-ray acquisition circumstances the container and sample have sufficient density that the calculated energy-dependent transmission correction factors have intrinsic uncertainties in the range 15%-100%. This is frequently the case when assaying 55-gallon drums of solid waste with masses of up to 400 kg and when assaying solid waste in extensive unique containers. Often an accurate assay of the transuranic content of these containers is not required, but rather a good defensible designation as >100 nCi/g (TRU waste) or <100 nCi/g (low-level solid waste) is required…
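The >100 nCi/g designation mentioned at the end reduces to a one-line decision rule, sketched below with hypothetical numbers (a real assay would also propagate the 15%-100% transmission-correction uncertainties discussed above):

```python
def classify_waste(tru_activity_nci, mass_g, threshold_nci_per_g=100.0):
    """Designate a container as TRU or low-level waste by its
    transuranic specific activity in nCi/g (illustrative sketch)."""
    specific_activity = tru_activity_nci / mass_g
    return "TRU waste" if specific_activity > threshold_nci_per_g else "low-level waste"

# a hypothetical 400 kg drum holding 4.5e7 nCi of transuranics (112.5 nCi/g):
print(classify_waste(4.5e7, 400e3))  # TRU waste
```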

  1. The influence of physique on dose conversion coefficients for idealised external photon exposures: a comparison of doses for Chinese male phantoms with 10th, 50th and 90th percentile anthropometric parameters.

    PubMed

    Lv, Wei; He, Hengda; Liu, Qian

    2017-03-22

    For evaluating radiation risk, the construction of anthropomorphic computational phantoms with a variety of physiques can help reduce the uncertainty that is due to anatomical variation. In our previous work, three deformable Chinese reference male phantoms with 10th, 50th and 90th percentile body mass indexes and body circumference physiques (DCRM-10, DCRM-50 and DCRM-90) were constructed to represent underweight, normal weight and overweight Chinese adult males, respectively. In the present study, the phantoms were updated by correcting the fat percentage to improve the precision of radiological dosimetry evaluations. The organ dose conversion coefficients for each phantom were calculated and compared for four idealized external photon exposures from 15 keV to 10 MeV, using the Monte Carlo method. The dosimetric results for the three DCRM phantoms indicated that variations in physique can cause as much as a 20% difference in the organ dose conversion coefficients. When the photon energy was <50 keV, the discrepancy was greater. The irradiation geometry and organ position can also affect the difference in radiological dosimetry between individuals with different physiques. Hence, it is difficult to predict the conversion coefficients of the phantoms from the anthropometric parameters alone. Nevertheless, the complete organ conversion coefficients presented in this report will be helpful for evaluating the radiation risk for large groups of people with various physiques.

  2. Benchmark Evaluation of Plutonium Nitrate Solution Arrays

    SciTech Connect

    M. A. Marshall; J. D. Bess

    2011-09-01

    In October and November of 1981 thirteen approach-to-critical experiments were performed on a remote split table machine (RSTM) in the Critical Mass Laboratory of Pacific Northwest Laboratory (PNL) in Richland, Washington, using planar arrays of polyethylene bottles filled with plutonium (Pu) nitrate solution. Arrays of up to sixteen bottles were used to measure the critical number of bottles and critical array spacing with a tight-fitting Plexiglas® reflector on all sides of the arrays except the top. Some experiments used Plexiglas shells fitted around each bottle to determine the effect of moderation on criticality. Each bottle contained approximately 2.4 L of Pu(NO3)4 solution with a Pu content of 105 g Pu/L and a free acid molarity H+ of 5.1. The plutonium was of low 240Pu (2.9 wt.%) content. These experiments were performed to fill a gap in experimental data regarding criticality limits for storing and handling arrays of Pu solution in reprocessing facilities. Of the thirteen approach-to-critical experiments, eleven resulted in extrapolations to critical configurations. Four of the approaches were extrapolated to the critical number of bottles; these were not evaluated further due to the large uncertainty associated with the modeling of a fraction of a bottle. The remaining seven approaches were extrapolated to critical array spacing of 3 × 4 and 4 × 4 arrays; these seven critical configurations were evaluated for inclusion as acceptable benchmark experiments in the International Criticality Safety Benchmark Evaluation Project (ICSBEP) Handbook. Detailed and simple models of these configurations were created, and the associated bias of these simplifications was determined to range from 0.00116 to 0.00162 ± 0.00006 Δkeff. Monte Carlo analysis of all models was completed using MCNP5 with ENDF/B-VII.0 neutron cross section libraries. A thorough uncertainty analysis of all critical, geometric, and material parameters was performed using parameter…

  3. Benchmarking real-time HEVC streaming

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2012-06-01

    Work towards the standardisation of High Efficiency Video Coding (HEVC), the next generation video coding scheme, is currently gaining pace. HEVC offers the prospect of a 50% improvement in compression over the current H.264 Advanced Video Coding standard (H.264/AVC). Thus far, work on HEVC has concentrated on improvements to the coding efficiency and has not yet addressed transmission in networks other than to mandate byte stream compliance with Annex B of H.264/AVC. For practical networked HEVC applications a number of essential building blocks have yet to be defined. In this work, we design and prototype a real-time HEVC streaming system and empirically evaluate its performance, in particular we consider the robustness of the current Test Model under Consideration (TMuC HM4.0) for HEVC to packet loss caused by a reduction in available bandwidth both in terms of decoder resilience and degradation in perceptual video quality. A NAL unit packetisation and streaming framework for HEVC encoded video streams is designed, implemented and empirically tested in a number of streaming environments including wired, wireless, single path and multiple path network scenarios. As a first step the HEVC decoder's error resilience is tested under a comprehensive set of packet loss conditions and a simple error concealment method for HEVC is implemented. Similarly to H.264 encoded streams, the size and distribution of NAL units within an HEVC stream and the nature of the NAL unit dependencies influences the packetisation and streaming strategies which may be employed for such streams. The relationships between HEVC encoding mode and the quality of the received video are shown under a wide range of bandwidth constraints. HEVC streaming is evaluated in both single and multipath network configuration scenarios. Through the use of extensive experimentation, we establish a comprehensive set of benchmarks for HEVC streaming in loss prone network environments. 
We show the visual quality…
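Objective degradation in such streaming tests is commonly reported as frame-level PSNR; a minimal sketch of that metric follows (illustrative only, not the paper's evaluation pipeline):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between a reference frame
    and a decoded frame; higher means less distortion."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)

ref = np.zeros((4, 4), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16          # one corrupted pixel: MSE = 256/16 = 16
# psnr(ref, noisy) is then 10*log10(255**2 / 16), about 36.1 dB
```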

  4. The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool

    EPA Pesticide Factsheets

    Provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose and benchmark dose lower bound estimates.
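The model-averaging idea can be sketched with Akaike weights over two hypothetical two-parameter logistic models (the actual model suite, fitting, and confidence-bound machinery of the EPA tool are not reproduced here):

```python
import numpy as np

def aic_weights(aics):
    """Akaike weights used to average candidate dose-response models."""
    d = np.asarray(aics, dtype=float) - min(aics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

def logistic(dose, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * dose)))

def averaged_bmd(max_dose, params, aics, bmr=0.1):
    """Benchmark dose at extra risk `bmr` from the model-averaged
    response curve, located by bisection (hypothetical models)."""
    w = aic_weights(aics)
    p = lambda d: sum(wi * logistic(d, *pr) for wi, pr in zip(w, params))
    p0 = p(0.0)
    extra_risk = lambda d: (p(d) - p0) / (1.0 - p0)
    lo, hi = 0.0, max_dose
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if extra_risk(mid) < bmr else (lo, mid)
    return 0.5 * (lo + hi)

bmd = averaged_bmd(10.0, [(-3.0, 0.5), (-3.0, 0.6)], [100.0, 102.0])
```

The lower-bound (BMDL) estimate the abstract mentions would additionally require a profile-likelihood or bootstrap step around this point estimate.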

  5. Classroom: Efficient Grading

    ERIC Educational Resources Information Center

    Shaw, David D.; Pease, Leonard F., III.

    2014-01-01

    Grading can be accelerated to make time for more effective instruction. This article presents specific time management strategies selected to decrease administrative time required of faculty and teaching assistants, including a multiple answer multiple choice interface for exams, a three-tier grading system for open ended problem solving, and a…

  6. What Is Fifth Grade?

    ERIC Educational Resources Information Center

    O'Brien, Thomas C.; Wallach, Christine

    2006-01-01

    One of the most consistent regularities observers would see in schools is the grouping of children by grade. The authors' work with schoolchildren causes them to ask, what is a grade beyond a group of children at a particular age? In this article, the authors share a glimpse of an activity involving inference and logical necessity that they…

  7. Controlling Grade Inflation

    ERIC Educational Resources Information Center

    Stanoyevitch, Alexander

    2008-01-01

    In this article concerning grade inflation, the author restricts his attention to the college and university level, although many of the tools and ideas developed here should be useful for high schools as well. The author considers the relationships between grades instructors assign and scores they receive on end-of-the semester student…

  8. Beef grading by ultrasound

    NASA Technical Reports Server (NTRS)

    Gammell, P. M.

    1981-01-01

    Reflections in ultrasonic A-scan signatures of beef carcasses indicate USDA grade. Since reflections from within muscle are determined primarily by the fat/muscle interface, the richness of the signals is a direct indication of the degree of marbling and quality. The method replaces subjective sight and feel tests by individual graders and is applicable to grade analysis of live cattle.

  9. Middle Grades Ideas.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1985

    1985-01-01

    Presents a collection of computer-oriented teaching activities for the middle grades. They focus on Logo activities to sharpen visualization skills, use of spreadsheets, various uses of Apple microcomputer paddles, and writing a program from program output. All activities may be adapted for lower or higher grade levels. (JN)

  10. Growing beyond Grades

    ERIC Educational Resources Information Center

    Perchemlides, Natalia; Coutant, Carolyn

    2004-01-01

    Once students are asked to assess their own writing progress, they will begin doing their best to write great prose instead of just earning great grades. Teachers will have to create a grade-free zone, allow students to set their own writing goals, provide a common language such as the Six Traits Model, and offer evaluation and instructional models…

  11. Teaching Middle Grades Science.

    ERIC Educational Resources Information Center

    Georgia State Dept. of Education, Atlanta. Office of Instructional Services.

    Background information and exemplary units for teaching science in Georgia's middle school grades are provided. Discussed in the first section are: (1) the rationale for including science in middle school grades, focusing on science/society/technology, science/social issues, scientific reasoning, and scientific literacy; (2) role of science…

  12. Grain Grading and Handling.

    ERIC Educational Resources Information Center

    Rendleman, Matt; Legacy, James

    This publication provides an introduction to grain grading and handling for adult students in vocational and technical education programs. Organized in five chapters, the booklet provides a brief overview of the jobs performed at a grain elevator and of the techniques used to grade grain. The first chapter introduces the grain industry and…

  13. Making Grades More Meaningful

    ERIC Educational Resources Information Center

    Hochbein, Craig; Pollio, Marty

    2016-01-01

    To expand and improve evidence of grading practices, we seized an opportunity presented by the implementation of standards-based grading practices at 11 high schools in Jefferson County Public Schools in Louisville, Ky. These high-needs schools faced substantial sanctions outlined by recently revised federal and state policies unless they made…

  14. Earth Science, Grade 7.

    ERIC Educational Resources Information Center

    Buffalo Public Schools, NY.

    GRADES OR AGES: Grade 7. SUBJECT MATTER: Earth science. ORGANIZATION AND PHYSICAL APPEARANCE: The introductory material suggests a time schedule for the major units and gives details of the reference materials referred to in the text. The main text is presented in four columns: topical outline, basic understandings, suggested activities and…

  15. Grades out, Badges in

    ERIC Educational Resources Information Center

    Young, Jeffrey R.

    2012-01-01

    Grades are broken. Students grub for them, pick classes where good ones come easily, and otherwise hustle to win the highest scores for the least learning. As a result, college grades are inflated to the point of meaninglessness--especially to employers who want to know which diploma-holder is best qualified for their jobs. An alternative is to…

  16. Third Grade Reading Policies

    ERIC Educational Resources Information Center

    Rose, Stephanie

    2012-01-01

    In 2012, 14 states passed legislation geared toward improving 3rd-grade literacy through identification, intervention, and/or retention initiatives. Today, a total of 32 states and the District of Columbia have policies in statute aimed at improving 3rd-grade reading proficiency. The majority of these states require early assessment and…

  17. Upper Grades Ideas.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1985

    1985-01-01

    Describes computer-oriented teaching activities for the upper grades. They focus on the use of databases in history classes, checking taxes, examining aspects of the joystick button on Atari microcomputers, printing control using Logo, and a Logo program that draws whirling squares. All activities can be adapted for lower grades. (JN)

  18. How to Advance TPC Benchmarks with Dependability Aspects

    NASA Astrophysics Data System (ADS)

    Almeida, Raquel; Poess, Meikel; Nambiar, Raghunath; Patil, Indira; Vieira, Marco

    Transactional systems are the core of the information systems of most organizations. Although there is general acknowledgement that failures in these systems often entail significant impact both on the proceeds and reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still maintain their focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require that performance during those tests be measured. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that nowadays systems should be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses WHY and HOW this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.
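One way to see what a unified dependability metric could look like is to scale raw throughput by measured availability; the formula below is our illustrative assumption, not a TPC definition:

```python
def dependability_adjusted_tpm(tpm, downtime_s, window_s):
    """Toy composite score: transactions per minute scaled by the
    fraction of the measurement window the system was available."""
    availability = 1.0 - downtime_s / window_s
    return tpm * availability

# 120,000 tpm with 900 s of recovery downtime in a 1-hour window:
print(dependability_adjusted_tpm(120000, 900, 3600))  # 90000.0
```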

  19. Performance benchmarking of core optical networking paradigms.

    PubMed

    Drakos, Andreas; Orphanoudakis, Theofanis G; Stavdas, Alexandros

    2012-07-30

    The sustainability of Future Internet critically depends on networking paradigms able to provide optimum and balanced performance over an extended set of efficiency and Quality of Service (QoS) metrics. In this work we benchmark the most established networking modes through appropriate performance metrics for three network topologies. The results demonstrate that the static reservation of WDM channels, as used in IP/WDM schemes, is severely limiting scalability, since it cannot efficiently adapt to the dynamic traffic fluctuations that are frequently observed in today's networks. Optical Burst Switching (OBS) schemes do provide dynamic resource reservation but their performance is compromised due to high burst loss. It is shown that the CANON (Clustered Architecture for Nodes in an Optical Network) architecture exploiting statistical multiplexing over a large scale core optical network and efficient grooming at appropriate granularity levels could be a viable alternative to existing static as well as dynamic wavelength reservation schemes. Through extensive simulation results we quantify performance gains and we show that CANON demonstrates the highest efficiency achieving both targets for statistical multiplexing gains and QoS guarantees.

  20. Some benchmark problems for computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Chapman, C. J.

    2004-02-01

    This paper presents analytical results for high-speed leading-edge noise which may be useful for benchmark testing of computational aeroacoustics codes. The source of the noise is a convected gust striking the leading edge of a wing or fan blade at arbitrary subsonic Mach number; the streamwise shape of the gust is top-hat, Gaussian, or sinusoidal, and the cross-stream shape is top-hat, Gaussian, or uniform. Detailed results are given for all nine combinations of shapes; six combinations give three-dimensional sound fields, and three give two-dimensional fields. The gust shapes depend on numerical parameters, such as frequency, rise time, and width, which may be varied arbitrarily in relation to aeroacoustic code parameters, such as time-step, grid size, and artificial viscosity. Hence it is possible to determine values of code parameters suitable for accurate calculation of a given acoustic feature, e.g., the impulsive sound field produced by a gust with sharp edges, or a full three-dimensional acoustic directivity pattern, or a complicated multi-lobed directivity. Another possibility is to check how accurately a code can determine the far acoustic field from nearfield data; a parameter here would be the distance from the leading edge at which the data are taken.
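The three streamwise gust shapes named in the abstract are simple enough to write down directly (unit amplitudes; the width and wavelength parameterizations are our assumptions, not the paper's exact conventions):

```python
import numpy as np

def top_hat(x, width):
    """Top-hat gust: 1 inside a strip of the given width, else 0."""
    return np.where(np.abs(x) <= width / 2.0, 1.0, 0.0)

def gaussian(x, width):
    """Gaussian gust; `width` plays the role of a standard deviation."""
    return np.exp(-0.5 * (x / width) ** 2)

def sinusoidal(x, wavelength):
    """Single-frequency sinusoidal gust of the given wavelength."""
    return np.sin(2.0 * np.pi * x / wavelength)
```

Sweeping these shape parameters against a code's time step, grid size, and artificial viscosity is exactly the kind of comparison the benchmark envisions.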