Science.gov

Sample records for america research benchmark

  1. Building America Research Benchmark Definition: Updated December 19, 2008

    SciTech Connect

    Hendron, R.

    2008-12-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  2. Building America Research Benchmark Definition, Updated December 2009

    SciTech Connect

    Hendron, Robert; Engebrecht, Cheryn

    2010-01-01

    To track progress toward aggressive multi-year, whole-house energy savings goals of 40%–70% and on-site power production of up to 30%, the U.S. Department of Energy (DOE) Residential Buildings Program and the National Renewable Energy Laboratory (NREL) developed the Building America (BA) Research Benchmark in consultation with the Building America industry teams.

  3. Building America Research Benchmark Definition: Updated December 20, 2007

    SciTech Connect

    Hendron, R.

    2008-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.
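    As an illustration of how savings against a fixed benchmark are expressed, here is a minimal sketch; all energy figures are hypothetical and not taken from the Benchmark definition itself:

```python
# Hypothetical sketch: whole-house energy savings relative to a fixed
# benchmark, as in the Building America approach. Figures illustrative.

def percent_savings(benchmark_kwh: float, design_kwh: float) -> float:
    """Whole-house savings relative to the fixed benchmark, in percent."""
    return 100.0 * (benchmark_kwh - design_kwh) / benchmark_kwh

# A design using 12,000 kWh/yr against a 20,000 kWh/yr benchmark home:
print(percent_savings(20_000, 12_000))  # -> 40.0
```

    Because the benchmark is fixed in time, the denominator never changes, which is what allows multi-year goals such as "40-70% savings" to be stated without a moving target.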

  4. Building America Research Benchmark Definition: Updated August 15, 2007

    SciTech Connect

    Hendron, R.

    2007-09-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  5. Building America Research Benchmark Definition, Updated December 15, 2006

    SciTech Connect

    Hendron, R.

    2007-01-01

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams. The Benchmark is generally consistent with mid-1990s standard practice, as reflected in the Home Energy Rating System (HERS) Technical Guidelines (RESNET 2002), with additional definitions that allow the analyst to evaluate all residential end-uses, an extension of the traditional HERS rating approach that focuses on space conditioning and hot water. Unlike the reference homes used for HERS, EnergyStar, and most energy codes, the Benchmark represents typical construction at a fixed point in time so it can be used as the basis for Building America's multi-year energy savings goals without the complication of chasing a 'moving target'.

  6. Building America Research Benchmark Definition, Updated December 19, 2008

    SciTech Connect

    Hendron, R.

    2008-12-19

    To track progress toward aggressive multi-year whole-house energy savings goals of 40-70% and onsite power production of up to 30%, DOE's Residential Buildings Program and NREL developed the Building America Research Benchmark in consultation with the Building America industry teams.

  7. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance, examining the performance impact of optimization in the context of our abstract-machine-based methodology for CPU performance characterization. Benchmark programs are analyzed in another paper: a machine-independent model of program execution was developed to characterize both machine performance and program execution, and by merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments, as well as smaller efforts supported by this grant, are summarized in more detail in this report.

  8. Benchmarking Competitiveness: Is America's Technological Hegemony Waning?

    NASA Astrophysics Data System (ADS)

    Lubell, Michael S.

    2006-03-01

    For more than half a century, by almost every standard, the United States has been the world's leader in scientific discovery, innovation and technological competitiveness. To a large degree, that dominant position stemmed from the circumstances our nation inherited at the conclusion of World War Two: we were, in effect, the only major nation left standing that did not have to repair serious war damage. And we found ourselves with an extraordinary science and technology base that we had developed for military purposes. We had the laboratories -- industrial, academic and government -- as well as the scientific and engineering personnel -- many of them immigrants who had escaped from war-time Europe. What remained was to convert the wartime machinery into peacetime uses. We adopted private and public policies that accomplished the transition remarkably well, and we have prospered ever since. Our higher education system, our protection of intellectual property rights, our venture capital system, our entrepreneurial culture and our willingness to commit government funds for the support of science and engineering have been key components of our success. But recent competitiveness benchmarks suggest that our dominance is waning rapidly, in part because other nations have begun to emulate our successful model, in part because globalization has "flattened" the world and in part because we have been reluctant to pursue the public policies that are necessary to ensure our leadership. We will examine these benchmarks and explore the policy changes that are needed to keep our nation's science and technology enterprise vibrant and our economic growth on an upward trajectory.

  9. Pharmaceutical Research and Manufacturers of America

    MedlinePlus

    The Pharmaceutical Research and Manufacturers of America, PhRMA, represents the … Pharmaceutical Research and Manufacturers of America®, 950 F Street, …

  10. 2010 Recruiting Benchmarks Survey. Research Brief

    ERIC Educational Resources Information Center

    National Association of Colleges and Employers (NJ1), 2010

    2010-01-01

    The National Association of Colleges and Employers conducted its annual survey of employer members from June 15, 2010 to August 15, 2010, to benchmark data relevant to college recruiting. From a base of 861 employers holding organizational membership, there were 268 responses for a response rate of 31 percent. Following are some of the major…

  11. Allocating scarce financial resources for HIV treatment: benchmarking prices of antiretroviral medicines in Latin America.

    PubMed

    Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Trout, Clinton H; Kaplan, Warren A

    2012-12-01

    Public sector price analyses of antiretroviral (ARV) medicines can provide relevant information to detect ARV procurement procedures that do not obtain competitive market prices. Price benchmarks provide a useful tool for programme managers and policy makers to support such planning and policy measures. The aim of the study was to develop regional and global price benchmarks that can be used to analyse public-sector price variability of ARVs in low- and middle-income countries, using the procurement prices of Latin America and the Caribbean (LAC) countries in 2008 as an example. We used the Global Price Reporting Mechanism (GPRM) database, provided by the World Health Organization (WHO), for 13 LAC countries' ARV procurements to analyse the procurement prices of four first-line and three second-line ARV combinations in 2008. First, a cross-sectional analysis was conducted to compare ARV combination prices. Second, four different price 'benchmarks' were created, and we estimated the additional number of patients who could have been treated in each country if the ARV combinations studied had been purchased at the various reference ('benchmark') prices. Large price variations exist for first- and second-line ARV combinations between countries in the LAC region. Most countries in the LAC region could be treating between 1.17 and 3.8 times more patients if procurement prices were closer to the lowest regional generic price. For all second-line combinations, a price closer to the lowest regional innovator prices or to the global median transaction price for lower-middle-income countries would also result in treating up to nearly five times more patients. Rational allocation of financial resources, informed in part by price benchmarking, together with careful planning by policy makers and programme managers, can assist a country in negotiating lower ARV procurement prices and should form part of a sustainable procurement policy.
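    The study's core benchmarking arithmetic is simple: with a fixed procurement budget, buying at a lower reference price treats proportionally more patients. A minimal sketch, with all prices and budgets hypothetical rather than drawn from the GPRM data:

```python
# Illustrative sketch of the price-benchmarking logic described above.
# All numbers are hypothetical.

def treatment_ratio(actual_price: float, benchmark_price: float) -> float:
    """How many times more patients the same budget covers at the benchmark price."""
    return actual_price / benchmark_price

def patients_treated(budget: float, price_per_patient_year: float) -> int:
    """Whole patient-years of treatment a budget buys at a given price."""
    return int(budget // price_per_patient_year)

# A country paying $500/patient-year where the lowest regional generic
# price is $200/patient-year could treat 2.5x more patients:
print(treatment_ratio(500, 200))           # -> 2.5
print(patients_treated(1_000_000, 500))    # -> 2000
print(patients_treated(1_000_000, 200))    # -> 5000
```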

  12. High Performance Homes That Use 50% Less Energy Than the DOE Building America Benchmark Building

    SciTech Connect

    Christian, J.

    2011-01-01

    This document describes lessons learned from designing, building, and monitoring five affordable, energy-efficient test houses in a single development in the Tennessee Valley Authority (TVA) service area. This work was done through a collaboration of Habitat for Humanity Loudon County, the US Department of Energy (DOE), TVA, and Oak Ridge National Laboratory (ORNL). The houses were designed by a team led by ORNL and were constructed by Habitat's volunteers in Lenoir City, Tennessee. ZEH5, a two-story house and the last of the five test houses to be built, provided an excellent model for conducting research on affordable high-performance houses. The impressively low energy bills for this house have generated considerable interest from builders and homeowners around the country who wanted a similar home design that could be adapted to different climates. Because a design developed without the project constraints of ZEH5 would have more appeal for the mass market, plans for two houses were developed from ZEH5: a one-story design (ZEH6) and a two-story design (ZEH7). This report focuses on ZEH6, identical to ZEH5 except that the geothermal heat pump is replaced with a SEER 16 air source unit (like that used in ZEH4). The report also contains plans for the ZEH6 house. ZEH5 and ZEH6 both use 50% less energy than the DOE Building America protocol for energy-efficient buildings. ZEH5 is a 4-bedroom, 2.5-bath, 2,632 ft2 house with a home energy rating system (HERS) index of 43, which qualifies it for federal energy-efficiency incentives (a HERS rating of 0 is a zero-energy house, and a conventional new house would have a HERS rating of 100). This report is intended to help builders and homeowners build similar high-performance houses. 
Detailed specifications for the envelope and the equipment used in ZEH5 are compared with the Building America Benchmark building, and detailed drawings, specifications, and lessons learned in the construction and analysis of data gleaned from 94
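    Given the HERS scale described above (100 for a conventional new house, 0 for a zero-energy house), a rough linear reading of the index can be sketched as follows; this is a simplification for illustration only, since the official RESNET calculation involves normalized end-use loads:

```python
# Simplified, assumed-linear reading of the HERS index scale described
# in the abstract above; the actual RESNET method is more involved.

def approx_savings_vs_reference(hers_index: float) -> float:
    """Approximate percent energy savings relative to the HERS reference home."""
    return 100.0 - hers_index

# ZEH5's HERS index of 43 implies roughly 57% less energy than the
# conventional reference house under this linear approximation:
print(approx_savings_vs_reference(43))  # -> 57.0
```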

  13. Building America Research-to-Market Plan

    SciTech Connect

    Werling, Eric

    2015-11-01

    This report presents the Building America Research-to-Market Plan (Plan), including the integrated Building America Technology-to-Market Roadmaps (Roadmaps) that will guide Building America’s research, development, and deployment (RD&D) activities over the coming years. The Plan and Roadmaps will be updated as necessary to adapt to research findings and evolving stakeholder needs, and they will reflect input from DOE and stakeholders.

  14. Research Universities and the Future of America

    ERIC Educational Resources Information Center

    Duderstadt, James J.

    2012-01-01

    The crucial importance of the research university as a key asset in achieving economic prosperity and security is widely understood, as evidenced by the efforts that nations around the globe are making to create and sustain institutions of world-class quality. Yet, while America's research universities remain the strongest in the world, they are…

  15. 2013 Building America Research Planning Meeting Summary

    SciTech Connect

    Metzger, C.; Hunt, S.

    2014-02-01

    The Building America Research Planning Meeting was held October 28-30, 2013, in Washington, DC. This meeting provides one opportunity each year for the research teams, national laboratories and Department of Energy (DOE) managers to meet in person to share the most pertinent information and collaboration updates. This report documents the presentations, highlights key program updates, and outlines next steps for the program.

  16. 2013 Building America Research Planning Meeting Summary

    SciTech Connect

    Metzger, C. E.; Hunt, S.

    2014-02-01

    The Building America (BA) Research Planning Meeting was held October 28-30, 2013, in Washington, DC. This meeting provides one opportunity each year for the research teams, national laboratories and Department of Energy (DOE) managers to meet in person to share the most pertinent information and collaboration updates. This report documents the presentations, highlights key program updates, and outlines next steps for the program.

  17. Obsidian provenance research in the Americas.

    PubMed

    Glascock, Michael D

    2002-08-01

    The characterization of archaeological materials to support provenance research has grown rapidly over the past few decades. Volcanic obsidian has several unique properties that make it the ideal archaeological material for studying prehistoric trade and exchange. This Account describes our laboratory's development of a systematic methodology for the characterization of obsidian sources and artifacts from Mesoamerica and other regions of North and South America in support of archaeological research.

  18. Participatory Research in North America; A Perspective on Participatory Research in Latin America; Participatory Research in Southern Europe.

    ERIC Educational Resources Information Center

    Gaventa, John; And Others

    1988-01-01

    The authors present perspectives on the employment of participatory research techniques in three areas: (1) North America (Gaventa); (2) Latin America (de Souza); and (3) Southern Europe (Orefice). Discussion focuses on participatory research strategies for popular groups, purposes and considerations regarding participatory research, and the role…

  19. Mobilisation for public engagement: Benchmarking the practices of research institutes.

    PubMed

    Entradas, Marta; Bauer, Martin M

    2016-03-07

    Studies on scientists' practices of public engagement have pointed to variations between disciplines. If variations at the individual level are reflected at the institutional level, then research institutes in the Social Sciences (and Humanities) should perform more public engagement and be more involved in dialogue with the public. Using a nearly complete sample of research institutes in Portugal in 2014 (n = 234, 61% response rate), we investigate how public engagement varies in intensity, type of activities and target audiences across scientific areas. Three benchmark findings emerge. First, the Social Sciences and the Humanities profile differently in public engagement, highlighting the importance of distinguishing between these two scientific areas, which are often conflated in public engagement studies. Second, the Social Sciences overall perform more public engagement activities, but the Natural Sciences mobilise more effort for public engagement. Third, while the Social Sciences play a greater role in civic public engagement, the Natural Sciences are more likely to perform educational activities. Finally, this study shows that the overall size of research institutes, available public engagement funding and public engagement staffing make a difference in institutes' public engagement.

  20. Building America Residential System Research Results: Achieving 30% Whole House Energy Savings Level in Cold Climates

    SciTech Connect

    Building Industry Research Alliance; Building Science Consortium; Consortium for Advanced Residential Buildings; Florida Solar Energy Center; IBACOS; National Renewable Energy Laboratory

    2006-08-01

    The Building America program conducts the system research required to reduce risks associated with the design and construction of homes that use an average of 30% to 90% less total energy for all residential energy uses than the Building America Research Benchmark, including research on homes that will use zero net energy on an annual basis. To measure the program's progress, annual research milestones have been established for five major climate regions in the United States. The system research activities required to reach each milestone take from 3 to 5 years to complete and include research in individual test houses, studies in pre-production prototypes, and research studies with lead builders that provide early examples that the specified energy savings level can be successfully achieved on a production basis. This report summarizes research results for the 30% energy savings level and demonstrates that lead builders can successfully provide 30% homes in Cold Climates on a cost-neutral basis.

  1. Price control regulation in North America: role of indexing and benchmarking

    SciTech Connect

    Lowry, Mark N.; Getachew, Lullit

    2009-01-15

    Price cap plans in regulation are designed to mimic the pressures of competition. The authors present the index logic that underpins this rationale in North American plans and describe two statistical approaches, indexing and benchmarking, used to estimate the industry-wide or economy-wide productivity trends and input price trends that track the unit cost of the industry. (author)
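    The indexing logic behind such plans is commonly expressed as the "I - X" formula: allowed prices escalate with an inflation measure minus a productivity offset. A minimal sketch, with hypothetical inflation and X-factor values not taken from the article:

```python
# Minimal sketch of the classic "I - X" price cap escalation formula.
# The 2% inflation measure and 0.5% X factor below are hypothetical.

def capped_price(current_price: float, inflation_pct: float, x_factor_pct: float) -> float:
    """Next period's price ceiling under an I - X price cap."""
    return current_price * (1 + (inflation_pct - x_factor_pct) / 100.0)

# With 2% measured inflation and a 0.5% productivity offset, a $100
# regulated price may rise to at most $101.50:
print(round(capped_price(100.0, 2.0, 0.5), 2))  # -> 101.5
```

    Benchmarking then enters as the statistical exercise of estimating the X factor from industry productivity and input price trends, rather than setting it by negotiation alone.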

  2. Dengue Research Opportunities in the Americas

    PubMed Central

    Laughlin, Catherine A.; Morens, David M.; Cassetti, M. Cristina; Costero-Saint Denis, Adriana; San Martin, Jose-Luis; Whitehead, Stephen S.; Fauci, Anthony S.

    2012-01-01

    Dengue is a systemic arthropod-borne viral disease of major global public health importance. At least 2.5 billion people who live in areas of the world where dengue occurs are at risk of developing dengue fever (DF) and its severe complications, dengue hemorrhagic fever (DHF) and dengue shock syndrome (DSS). Repeated reemergences of dengue in sudden explosive epidemics often cause public alarm and seriously stress healthcare systems. The control of dengue is further challenged by the lack of effective therapies, vaccines, and point-of-care diagnostics. Despite years of study, even its pathogenic mechanisms are poorly understood. This article discusses recent advances in dengue research and identifies challenging gaps in research on dengue clinical evaluation, diagnostics, epidemiology, immunology, therapeutics, vaccinology/clinical trials research, vector biology, and vector ecology. Although dengue is a major global tropical pathogen, epidemiologic and disease control considerations in this article emphasize dengue in the Americas. PMID:22782946

  3. Educational Research in Latin America: Review and Perspectives.

    ERIC Educational Resources Information Center

    Akkari, Abdeljalil; Perez, Soledad

    1998-01-01

    Describes the historical context of educational research in Latin America and focuses on the theoretical frameworks applied to educational research in the area. Identifies the primary institutions involved in educational research in Latin America and suggests priorities for future research. (SLD)

  4. Application of research and information to human resources policies: regional goals for the Americas.

    PubMed

    Mandelli, Marcos; Rigoli, Felix

    2015-12-01

    Objective: To report experiences involving the use of research and information systems to support national human resources policies through benchmarking between different countries, with comparisons over time and between similar countries or regions. Method: In 2007, the Pan American Health Organization (PAHO) promoted a set of goals for all the countries in the Americas to improve the situation of health human resources, using a uniform methodology and research process carried out by Observatories of Human Resources. Results: The analysis focused on the progress made in relation to the main challenges in the Southern Cone countries, with a special emphasis on Brazil, noting improvements in the distribution of professionals in the regions. Conclusion: These experiences showed how research and the use of information systems can stimulate the expansion of good practices in the training, retention and development of the health workforce in the Americas.

  5. Cardiovascular disease research in Latin America: A comparative bibliometric analysis

    PubMed Central

    Jahangir, Eiman; Comandé, Daniel; Rubinstein, Adolfo

    2011-01-01

    AIM: To investigate the number of publications in cardiovascular disease (CVD) in Latin America and the Caribbean over the last decade. METHODS: We performed a bibliometric analysis in PubMed from 2001 to 2010 for Latin America and the Caribbean, the United States, Canada, Europe, China, and India. RESULTS: Latin America published 4% of articles compared with 26% from the United States/Canada and 42% from Europe. In CVD, Latin America published 4% of articles vs 23% from the United States/Canada and 40% from Europe. The number of publications in CVD in Latin America increased from 41 in 2001 to 726 in 2010. CONCLUSION: Latin America, while publishing more articles than previously, lags behind developed countries. Further advances in research infrastructure are necessary to develop prevention strategies for this region. PMID:22216374
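    The share figures in a bibliometric comparison like the one above are each region's publication count as a percentage of the pooled total. A small sketch of that calculation, using made-up counts rather than the study's actual PubMed data:

```python
# Illustrative share calculation for a regional bibliometric comparison.
# Counts below are hypothetical, not the study's data.

def publication_shares(counts: dict[str, int]) -> dict[str, float]:
    """Each region's publications as a percent of the pooled total."""
    total = sum(counts.values())
    return {region: round(100.0 * n / total, 1) for region, n in counts.items()}

counts = {"Latin America": 400, "US/Canada": 2600, "Europe": 4200, "Other": 2800}
print(publication_shares(counts))
# -> {'Latin America': 4.0, 'US/Canada': 26.0, 'Europe': 42.0, 'Other': 28.0}
```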

  6. An Analysis of Academic Research Libraries Assessment Data: A Look at Professional Models and Benchmarking Data

    ERIC Educational Resources Information Center

    Lewin, Heather S.; Passonneau, Sarah M.

    2012-01-01

    This research provides the first review of publicly available assessment information found on Association of Research Libraries (ARL) members' websites. After providing an overarching review of benchmarking assessment data, and of professionally recommended assessment models, this paper examines if libraries contextualized their assessment…

  7. Building America Residential System Research Results: Achieving 30% Whole House Energy Savings Level in Marine Climates; January 2006 - December 2006

    SciTech Connect

    Building America Industrialized Housing Partnership; Building Industry Research Alliance; Building Science Consortium; Consortium for Advanced Residential Buildings; Davis Energy Group; IBACOS; National Association of Home Builders Research Center; National Renewable Energy Laboratory

    2006-12-01

    The Building America program conducts the system research required to reduce risks associated with the design and construction of homes that use an average of 30% to 90% less total energy for all residential energy uses than the Building America Research Benchmark, including research on homes that will use zero net energy on an annual basis. To measure the program's progress, annual research milestones have been established for five major climate regions in the United States. The system research activities required to reach each milestone take from 3 to 5 years to complete and include research in individual test houses, studies in pre-production prototypes, and research studies with lead builders that provide early examples that the specified energy savings level can be successfully achieved on a production basis. This report summarizes research results for the 30% energy savings level and demonstrates that lead builders can successfully provide 30% homes in the Marine Climate Region on a cost-neutral basis.

  8. Barriers to conducting clinical research in reproductive medicine: Latin America.

    PubMed

    Zegers-Hochschild, Fernando

    2011-10-01

    Societies in Latin America are not scientifically driven; as a reflection of this and of other cultural and economic realities, the allocation of human and economic resources to research is meager.

  9. Work Readiness Standards and Benchmarks: The Key to Differentiating America's Workforce and Regaining Global Competitiveness

    ERIC Educational Resources Information Center

    Clark, Hope

    2013-01-01

    In this report, ACT presents a definition of "work readiness" along with empirically driven ACT Work Readiness Standards and Benchmarks. The introduction of standards and benchmarks for workplace success provides a more complete picture of the factors that are important in establishing readiness for success throughout a lifetime. While…

  10. Utilising Benchmarking to Inform Decision-Making at the Institutional Level: A Research-Informed Process

    ERIC Educational Resources Information Center

    Booth, Sara

    2013-01-01

    Benchmarking has traditionally been viewed as a way to compare data only; however, its utilisation as a more investigative, research-informed process to add rigor to decision-making processes at the institutional level is gaining momentum in the higher education sector. Indeed, with recent changes in the Australian quality environment from the…

  11. US-LA CRN Clinical Cancer Research in Latin America

    Cancer.gov

    The United States – Latin America Cancer Research Network (US-LA CRN) convened its Annual Meeting in coordination with the Ministry of Health of Chile to discuss the Network's first multilateral clinical research study: Molecular Profiling of Breast Cancer (MPBC).

  12. IAEA coordinated research projects on core physics benchmarks for high temperature gas-cooled reactors

    SciTech Connect

    Methnani, M.

    2006-07-01

    High-temperature Gas-Cooled Reactor (HTGR) designs present special computational challenges related to their core physics characteristics, in particular neutron streaming, double heterogeneities, impurities and the random distribution of coated fuel particles in the graphite matrix. In recent years, two consecutive IAEA Coordinated Research Projects (CRP 1 and CRP 5) have focused on code-to-code and code-to-experiment comparisons of representative benchmarks run by several participating international institutes. While the PROTEUS critical HTR experiments provided the reference test data for CRP-1, the more recent CRP-5 data has been made available by the HTTR, HTR-10 and ASTRA test facilities. Other benchmark cases are being considered for the GT-MHR and PBMR core designs. This paper provides an overview of the scope and sample results of both coordinated research projects. (authors)

  13. Mental health epidemiological research in South America: recent findings

    PubMed Central

    Silva de Lima, Maurício; Garcia de Oliveira Soares, Bernardo; de Jesus Mari, Jair

    2004-01-01

    This paper aims to review the recent mental health epidemiological research conducted in South America. The Latin American and Caribbean (LILACS) database was searched from 1999 to 2003 using a specific strategy for identification of cohort, case-control and cross-sectional population-based studies in South America. The authors screened references and identified relevant studies. Further studies were obtained by contacting local experts in epidemiology. In total, 140 references were identified and 12 studies were selected. Most selected studies explored the prevalence and risk factors for common mental disorders, and several of them used sophisticated methods of sample selection and analysis. There is a need for improving the quality of psychiatric journals in Latin America, and for increasing the distribution of and access to research data. Regionally relevant problems such as violence and substance abuse should be considered in designing future investigations in this area. PMID:16633474

  14. Language Teacher Research in the Americas

    ERIC Educational Resources Information Center

    McGarrell, Hedy M., Ed.

    2007-01-01

    This volume in the Language Teacher Research Series (Thomas S. C. Farrell, series editor) presents research conducted by language teachers at all levels, from high school English teachers to English language teacher educators, reflecting on their practices. The countries represented in this book are Brazil, Canada, Colombia, Costa Rica, Ecuador,…

  15. Classroom Research and Child and Adolescent Development in South America

    ERIC Educational Resources Information Center

    Preiss, David Daniel; Calcagni, Elisa; Grau, Valeska

    2015-01-01

    The article reviews recent classroom research developed in South America related to child and adolescent development. We review work about three themes: ethnicity, school climate and violence, and the learning process. The few studies found on ethnicity and classroom experiences told a story of invisibility, if not exclusion and discrimination.…

  16. Sources of Inequities in Rural America: Implications for Research.

    ERIC Educational Resources Information Center

    Fujimoto, Isao; Zone, Martin

    The paper identifies the basic factors affecting rural development and the social consequences of rural policies and structural changes in agriculture; it also suggests research areas relating some of these factors to what is happening in America's rural communities. Data sources such as congressional hearings, rural sociologists' critiques,…

  17. Social Science Research Serving Rural America.

    ERIC Educational Resources Information Center

    Miron, Mary, Ed.

    This collection of articles provides an overview of some of the recent social science research projects performed by state agricultural experiment stations. The examples highlight social science's contribution to problem-solving in rural business, industry, farming, communities, government, education, and families. The following programs are…

  18. Black raspberry phytochemical research in North America

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Our research group has focused on developing black raspberries with improved disease resistance and phytochemical traits over the last seven years. Recent interest in the rich color of black raspberries, and their historical use as an effective dye, derives from their anthocyanin composition and cont...

  19. Managing for Results in America's Great City Schools. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2012

    2012-01-01

    "Managing for Results in America's Great City Schools, 2012" is presented by the Council of the Great City Schools to its members and the public. The purpose of the project was and is to develop performance measures that can improve the business operations of urban public school districts nationwide. This year's report includes data from 61 of the…

  20. Space, geophysical research related to Latin America - Part 2

    NASA Astrophysics Data System (ADS)

    Mendoza, Blanca; Shea, M. A.

    2016-11-01

    For the last 25 years, the Conferencia Latinoamericana de Geofísica Espacial (COLAGE) has been held every two to three years in one of the Latin American countries to promote scientific exchange among scientists of the region and to encourage continued research that is unique to this area of the world. At the most recent conference, the community recognized that many individuals both within and outside Latin America have contributed greatly to the understanding of the space sciences in this area of the world. It was therefore decided to assemble a special issue, Space and Geophysical Physics Related to Latin America, presenting recent results, with submissions accepted from the worldwide community of scientists involved in research appropriate to Latin America. Because of the large number of submissions, these papers have been printed in two separate issues. The first issue was published in Advances in Space Research, Vol. 57, number 6, and contained 15 papers. This is the second issue and contains 25 additional papers. These papers show the wide variety of research, both theoretical and applied, that is currently being developed in, or related to, the space and geophysical sciences in the subcontinent.

  1. Evolutionary Developmental Biology (Evo-Devo) Research in Latin America.

    PubMed

    Marcellini, Sylvain; González, Favio; Sarrazin, Andres F; Pabón-Mora, Natalia; Benítez, Mariana; Piñeyro-Nelson, Alma; Rezende, Gustavo L; Maldonado, Ernesto; Schneider, Patricia Neiva; Grizante, Mariana B; Da Fonseca, Rodrigo Nunes; Vergara-Silva, Francisco; Suaza-Gaviria, Vanessa; Zumajo-Cardona, Cecilia; Zattara, Eduardo E; Casasa, Sofia; Suárez-Baron, Harold; Brown, Federico D

    2017-01-01

    Famous for its blind cavefish and Darwin's finches, Latin America is home to some of the richest biodiversity hotspots of our planet. The Latin American fauna and flora inspired and captivated naturalists from the nineteenth and twentieth centuries, including such notable pioneers as Fritz Müller, Florentino Ameghino, and Léon Croizat, who made significant contributions to the study of embryology and evolutionary thinking. But what are the historical and present contributions of the Latin American scientific community to Evo-Devo? Here, we provide the first comprehensive overview of the Evo-Devo laboratories based in Latin America and describe current lines of research based on endemic species, focusing on body plans and patterning, systematics, physiology, computational modeling approaches, ecology, and domestication. Literature searches reveal that Evo-Devo in Latin America is still in its early days; while showing encouraging indicators of productivity, it has not stabilized yet, because it relies on few and sparsely distributed laboratories. Coping with the rapid changes in national scientific policies and contributing to solve social and health issues specific to each region are among the main challenges faced by Latin American researchers. The 2015 inaugural meeting of the Pan-American Society for Evolutionary Developmental Biology played a pivotal role in bringing together Latin American researchers eager to initiate and consolidate regional and worldwide collaborative networks. Such networks will undoubtedly advance research on the extremely high genetic and phenotypic biodiversity of Latin America, bound to be an almost infinite source of amazement and fascinating findings for the Evo-Devo community.

  2. [Advancing public mental health research in Latin America].

    PubMed

    Susser, Ezra

    2015-01-01

    This special issue on Mental Health of the Journal of the School of Medicine represents a significant contribution to the advancement of public mental health research and training in Latin America. The editors (as well as the authors) deserve much credit for having conceived and implemented the joint publication of these papers. In this brief introduction, I draw attention to four ways in which their effort is likely to accelerate progress in this field.

  3. Benchmark and Framework for Encouraging Research on Multi-Threaded Testing Tools

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Stoller, Scott D.; Ur, Shmuel

    2003-01-01

    A problem that has been gaining prominence in testing is the search for intermittent bugs. Multi-threaded code is becoming very common, mostly on the server side. As there is no silver-bullet solution, research focuses on a variety of partial solutions. In this paper (invited by PADTAD 2003) we outline a proposed project to facilitate research. The project goals are as follows. The first goal is to create a benchmark that can be used to evaluate different solutions. The benchmark, apart from containing programs with documented bugs, will include other artifacts, such as traces, that are useful for evaluating some of the technologies. The second goal is to create a set of tools with open APIs that can be used to check ideas without building a large system. For example, an instrumentor will be available that could be used to test temporal noise-making heuristics. The third goal is to create a focus for the research in this area around which a community of people who try to solve similar problems with different techniques could congregate.

  4. Reducing maternal mortality: better monitoring, indicators and benchmarks needed to improve emergency obstetric care. Research summary for policymakers.

    PubMed

    Collender, Guy; Gabrysch, Sabine; Campbell, Oona M R

    2012-06-01

    Several limitations of emergency obstetric care (EmOC) indicators and benchmarks are analysed in this short paper, which synthesises recent research on this topic. A comparison between Sri Lanka and Zambia is used to highlight the inconsistencies and shortcomings in current methods of monitoring EmOC. Recommendations are made to improve the usefulness and accuracy of EmOC indicators and benchmarks in the future.

  5. Using benchmarking research to locate agency best practices for African American clients.

    PubMed

    Kondrat, Mary Ellen; Greene, Gilbert J; Winbush, Greta B

    2002-07-01

    Using a collective case study design with benchmarking features, the research reported here sought to locate differences in agency practices between public mental health agencies in which African American clients were doing comparatively better on specific proxy outcomes related to community tenure, and agencies with less success on those same variables. A panel of experts from the Ohio Department of Mental Health matched four agencies on per capita spending, percentage of African American clients, and urban-intensive setting. The panel also differentiated agencies on the basis of racial group comparisons for a number of proxy variables related to successful community tenure. Two agencies had a record of success with this client group (benchmark agencies), and two were less successful based on the selected criteria (comparison agencies). Findings indicated that when service elements explicitly related to culture were similar across study sites, the characteristics that did appear to make a difference were aspects of organizational culture. Implications for administration practice and further research are discussed.

  6. The Safety Attitudes Questionnaire: psychometric properties, benchmarking data, and emerging research

    PubMed Central

    Sexton, John B; Helmreich, Robert L; Neilands, Torsten B; Rowan, Kathy; Vella, Keryn; Boyden, James; Roberts, Peter R; Thomas, Eric J

    2006-01-01

    Background There is widespread interest in measuring healthcare provider attitudes about issues relevant to patient safety (often called safety climate or safety culture). Here we report the psychometric properties, establish benchmarking data, and discuss emerging areas of research with the University of Texas Safety Attitudes Questionnaire. Methods Six cross-sectional surveys of health care providers (n = 10,843) in 203 clinical areas (including critical care units, operating rooms, inpatient settings, and ambulatory clinics) in three countries (USA, UK, New Zealand). Multilevel factor analyses yielded results at the clinical area level and the respondent nested within clinical area level. We report scale reliability, floor/ceiling effects, item factor loadings, inter-factor correlations, and percentage of respondents who agree with each item and scale. Results A six factor model of provider attitudes fit to the data at both the clinical area and respondent nested within clinical area levels. The factors were: Teamwork Climate, Safety Climate, Perceptions of Management, Job Satisfaction, Working Conditions, and Stress Recognition. Scale reliability was 0.9. Provider attitudes varied greatly both within and among organizations. Results are presented to allow benchmarking among organizations and emerging research is discussed. Conclusion The Safety Attitudes Questionnaire demonstrated good psychometric properties. Healthcare organizations can use the survey to measure caregiver attitudes about six patient safety-related domains, to compare themselves with other organizations, to prompt interventions to improve safety attitudes and to measure the effectiveness of these interventions. PMID:16584553

  7. Building America Residential System Research Results: Achieving 30% Whole House Energy Savings Level in Mixed-Humid Climates; January 2006 - December 2006

    SciTech Connect

    Building America Industrialized Housing Partnership; Building Industry Research Alliance; Building Science Consortium; Consortium for Advanced Residential Buildings; Davis Energy Group; IBACOS; National Association of Home Builders Research Center; National Renewable Energy Laboratory

    2006-12-01

    The Building America program conducts the system research required to reduce risks associated with the design and construction of homes that use an average of 30% to 90% less total energy for all residential energy uses than the Building America Research Benchmark, including research on homes that will use zero net energy on an annual basis. To measure the program's progress, annual research milestones have been established for five major climate regions in the United States. The system research activities required to reach each milestone take from 3 to 5 years to complete and include research in individual test houses, studies in pre-production prototypes, and research studies with lead builders that provide early examples that the specified energy savings level can be successfully achieved on a production basis. This report summarizes research results for the 30% energy savings level and demonstrates that lead builders can successfully provide 30% homes in the Mixed-Humid Climate Region on a cost-neutral basis.

  8. Benchmarking and Its Relevance to the Library and Information Sector. Interim Findings of "Best Practice Benchmarking in the Library and Information Sector," a British Library Research and Development Department Project.

    ERIC Educational Resources Information Center

    Kinnell, Margaret; Garrod, Penny

    This British Library Research and Development Department study assesses current activities and attitudes toward quality management in library and information services (LIS) in the academic sector as well as the commercial/industrial sector. Definitions and types of benchmarking are described, and the relevance of benchmarking to LIS is evaluated.…

  9. Research, management, and status of the osprey in North America

    USGS Publications Warehouse

    Henny, C.J.; Chancellor, R.D.

    1977-01-01

    Osprey populations were studied throughout North America during the last decade as a result of dramatic declines reported along the North Atlantic Coast in the 1950s and early 1960s. Researchers used banding, localized studies, aerial surveys, and pesticide analyses to identify factors influencing regional populations. Declining populations showed extremely poor production, contamination by environmental pollutants (including DDT and its metabolites, dieldrin, and polychlorinated biphenyls) and thin-shelled eggs. Following the reduced use and eventual ban of DDT and dieldrin, productivity began to improve. Improvement in affected populations, mainly those along the Atlantic Coast and in the Great Lakes region, began in the late 1960s and is continuing in the 1970s. Most populations in the South Atlantic region, in Western North America, and in Florida and the Gulf of California appeared to be producing at normal or near-normal rates in the late 1960s and early 1970s. Although some of the most severely affected populations are still not producing at normal rates, the pattern of improvement and an increase in management activities, including provision of nesting platforms and establishment of Osprey management zones, allow cautious optimism about the future of the species in North America. With its low recruitment potential, however, recovery will be slow.

  10. Medical Universities Educational and Research Online Services: Benchmarking Universities’ Website Towards E-Government

    PubMed Central

    Farzandipour, Mehrdad; Meidani, Zahra

    2014-01-01

    Background: Websites, as one of the initial steps towards e-government adoption, facilitate the delivery of online, customer-oriented services. In this study we investigated the role of the websites of medical universities in providing educational and research services, following the e-government maturity model in the Iranian universities. Methods: This descriptive, cross-sectional study was conducted through content analysis and benchmarking of the websites in 2012. The research population included all 37 medical university websites. Delivery of educational and research services through these university websites, including information, interaction, transaction, and integration, was investigated using a checklist. The data were then analyzed by means of descriptive statistics using SPSS software. Results: The level of educational and research services provided by the websites of type I and type II medical universities was rated medium, at 1.99 and 1.89, respectively. All the universities gained a mean score of 1 out of 3 in terms of integration of educational and research services. Conclusions: Results of the study indicated that Iranian universities have passed the information and interaction stages, but they have not made much progress in the transaction and integration stages. The failure to adopt e-government in Iranian medical universities, where limiting factors such as users’ e-literacy, access to the internet, and ICT infrastructure are not as crucial as in other organizations, suggests that e-government realization goes beyond technical challenges. PMID:25132713

  11. Benchmarking the scientific output of industrial wastewater research in Arab world by utilizing bibliometric techniques.

    PubMed

    Zyoud, Shaher H; Al-Rawajfeh, Aiman E; Shaheen, Hafez Q; Fuchs-Hanusch, Daniela

    2016-05-01

    Rapid population growth, a worsening climate, and severe freshwater scarcity are global challenges. In Arab countries, where water resources are becoming increasingly scarce, the recycling of industrial wastewater could improve the efficiency of freshwater use. Benchmarking the scientific output of industrial wastewater research in the Arab world could help shape and improve future research activities. This study assesses the scientific output of industrial wastewater research in the Arab world. A total of 2032 documents related to industrial wastewater were retrieved from 152 journals indexed in the Scopus databases; this represents 3.6% of the global research output. The h-index of the retrieved documents was 70. The total number of citations, at the time of data analysis, was 34,296, with an average of 16.88 citations per document. Egypt, with 655 publications (32.2%), was ranked first among the Arab countries, followed by Saudi Arabia with 300 (14.7%) and Tunisia with 297 (14.6%). Egypt also had the highest h-index and shared with Saudi Arabia the first place in collaboration with other countries. A total of 715 documents (35.2%) representing collaborations between Arab and non-Arab countries, involving 66 countries, were identified. Arab researchers collaborated mostly with researchers from France (239 documents; 11.7%), followed by the USA (127; 6.2%). The most active journal was Desalination (126 documents; 6.2%), and the most productive institution was the National Research Center, Egypt (169; 8.3%), followed by King Abdul-Aziz University, Saudi Arabia (75; 3.7%). Environmental Science was the most prevalent field of interest (930 documents; 45.8%). Despite the promising indicators, there is a need to close the research gap between the Arab world and other nations. Optimizing investments and developing regional expertise are key factors in promoting scientific research.
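
    The h-index reported in this record (70 for the retrieved document set) is the largest number h such that at least h documents have each received at least h citations. A minimal sketch of that computation, using an illustrative citation list rather than the actual Scopus data:

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    h = 0
    # Rank papers from most to least cited; h grows while the paper at
    # rank r still has at least r citations.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Illustrative data: 4 papers have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

    The same ranking logic, applied to the 2032-document citation list, would yield the reported value of 70.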

  12. Research on Child and Adolescent Development and Public Policy in Latin America

    ERIC Educational Resources Information Center

    Narea, Marigen

    2016-01-01

    This commentary discusses the implications of child and adolescent development research for public policy in Latin America. As illustrated by the articles in this special issue, even though research on child and adolescent development in Latin America is making significant progress, more research is still needed. Developmental research in the…

  13. Automotive Mg Research and Development in North America

    SciTech Connect

    Carpenter, Joseph A.; Jackman, Jennifer; Li, Naiyi; Osborne, Richard J.; Powell, Bob R.; Sklad, Philip S

    2006-01-01

    Expanding world economic prosperity and the probable peaking of conventional petroleum production in the coming decades require efforts to increase the efficiency of, and to develop alternatives to, the petroleum-based fuels used in automotive transportation. North America has been aggressively pursuing both approaches for over ten years. Mainly as a result of lower prices due to global sourcing, magnesium has recently emerged as a serious candidate for lightweighting, and thus increasing the fuel efficiency of, automotive transportation. Automotive vehicles produced in North America currently use more Mg than vehicles produced elsewhere in the world, but the amounts per vehicle are very small in comparison to other materials such as steel, aluminum, and plastics. The reasons, besides price, are primarily the less-developed state of technology for Mg in automotive applications and the vehicle manufacturers' lack of familiarity with the material. This paper reviews some publicly known recent, present, and future North American research and development activities in Mg for automotive applications.

  14. The United States of America and scientific research.

    PubMed

    Hather, Gregory J; Haynes, Winston; Higdon, Roger; Kolker, Natali; Stewart, Elizabeth A; Arzberger, Peter; Chain, Patrick; Field, Dawn; Franza, B Robert; Lin, Biaoyang; Meyer, Folker; Ozdemir, Vural; Smith, Charles V; van Belle, Gerald; Wooley, John; Kolker, Eugene

    2010-08-16

    To gauge the current commitment to scientific research in the United States of America (US), we compared federal research funding (FRF) with the US gross domestic product (GDP) and industry research spending during the past six decades. In order to address the recent globalization of scientific research, we also focused on four key indicators of research activities: research and development (R&D) funding, total science and engineering doctoral degrees, patents, and scientific publications. We compared these indicators across three major population and economic regions: the US, the European Union (EU) and the People's Republic of China (China) over the past decade. We discovered a number of interesting trends with direct relevance for science policy. The level of US FRF has varied between 0.2% and 0.6% of the GDP during the last six decades. Since the 1960s, the US FRF contribution has fallen from twice that of industrial research funding to roughly equal. Also, in the last two decades, the portion of the US government R&D spending devoted to research has increased. Although well below the US and the EU in overall funding, the current growth rate for R&D funding in China greatly exceeds that of both. Finally, the EU currently produces more science and engineering doctoral graduates and scientific publications than the US in absolute terms, but not per capita. This study's aim is to facilitate a serious discussion of key questions by the research community and federal policy makers. In particular, our results raise two questions with respect to: a) the increasing globalization of science: "What role is the US playing now, and what role will it play in the future of international science?"; and b) the ability to produce beneficial innovations for society: "How will the US continue to foster its strengths?"

  15. The United States of America and Scientific Research

    PubMed Central

    Hather, Gregory J.; Haynes, Winston; Higdon, Roger; Kolker, Natali; Stewart, Elizabeth A.; Arzberger, Peter; Chain, Patrick; Field, Dawn; Franza, B. Robert; Lin, Biaoyang; Meyer, Folker; Ozdemir, Vural; Smith, Charles V.; van Belle, Gerald; Wooley, John; Kolker, Eugene

    2010-01-01

    To gauge the current commitment to scientific research in the United States of America (US), we compared federal research funding (FRF) with the US gross domestic product (GDP) and industry research spending during the past six decades. In order to address the recent globalization of scientific research, we also focused on four key indicators of research activities: research and development (R&D) funding, total science and engineering doctoral degrees, patents, and scientific publications. We compared these indicators across three major population and economic regions: the US, the European Union (EU) and the People's Republic of China (China) over the past decade. We discovered a number of interesting trends with direct relevance for science policy. The level of US FRF has varied between 0.2% and 0.6% of the GDP during the last six decades. Since the 1960s, the US FRF contribution has fallen from twice that of industrial research funding to roughly equal. Also, in the last two decades, the portion of the US government R&D spending devoted to research has increased. Although well below the US and the EU in overall funding, the current growth rate for R&D funding in China greatly exceeds that of both. Finally, the EU currently produces more science and engineering doctoral graduates and scientific publications than the US in absolute terms, but not per capita. This study's aim is to facilitate a serious discussion of key questions by the research community and federal policy makers. In particular, our results raise two questions with respect to: a) the increasing globalization of science: “What role is the US playing now, and what role will it play in the future of international science?”; and b) the ability to produce beneficial innovations for society: “How will the US continue to foster its strengths?” PMID:20808949

  16. [Benchmarks for interdisciplinary health and social sciences research: contributions of a research seminar].

    PubMed

    Kivits, Joëlle; Fournier, Cécile; Mino, Jean-Christophe; Frattini, Marie-Odile; Winance, Myriam; Lefève, Céline; Robelet, Magali

    2013-01-01

    This article reflects on an interdisciplinary seminar initiated by philosophy and sociology researchers and public health professionals. The objective of the seminar was to explore the mechanisms involved in setting up and conducting interdisciplinary research, by investigating the practical modalities of articulating health research with human and social sciences research in order to more clearly understand the conditions, tensions, and contributions of collaborative research. These questions were discussed on the basis of a detailed analysis of four recent or current research projects. The case studies identified four typical epistemological or methodological issues faced by researchers in the fields of health and the human and social sciences: institutional conditions and their effects on research; deconstruction of the research object; the researcher's commitment in his/her field; and the articulation of research methods. Three prerequisites for interdisciplinary research in the social and human sciences and in health were identified: mutual questioning of research positions and fields of study; awareness of the tensions related to institutional positions and disciplinary affiliations; and joint elaboration and exchange between various types of knowledge to ensure an interdisciplinary approach throughout the research process.

  17. Classroom research and child and adolescent development in South America.

    PubMed

    Preiss, David Daniel; Calcagni, Elisa; Grau, Valeska

    2015-01-01

    The article reviews recent classroom research developed in South America related to child and adolescent development. We review work about three themes: ethnicity, school climate and violence, and the learning process. The few studies found on ethnicity and classroom experiences told a story of invisibility, if not exclusion and discrimination. Research on violence suggests that, although there are variations within countries, school climate is an area of concern. Intervention work, still limited, is necessary considering the incidence of violence in the classrooms. Research on learning showed that most classrooms adhere to a very conventional pedagogy. There is a need to advance on international comparisons across all themes. Similarly, there is a need to go beyond the description of classroom dynamics to test educational interventions that may shed light on ways to improve educational performance, to decrease school violence, and to promote diversity within the classroom. Notwithstanding its limitations, the research here reviewed provides clear evidence of the relevant role that classroom experiences play in human development. In addition to their essential role in schooling, classrooms are the settings where processes related to peer relations, identity formation, and socioemotional development unfold.

  18. One Health training, research, and outreach in North America

    PubMed Central

    Stroud, Cheryl; Kaplan, Bruce; Logan, Jenae E.

    2016-01-01

    Background The One Health (OH) concept, formerly referred to as ‘One Medicine’ in the later part of the 20th century, has gained exceptional popularity in the early 21st century, and numerous academic and non-academic institutions have developed One Health programs. Objectives To summarize One Health training, research, and outreach activities originating in North America. Methods We used data from extensive electronic records maintained by the One Health Commission (OHC) (www.onehealthcommission.org/) and the One Health Initiative (www.onehealthinitiative.com/) and from web-based searches, combined with the corporate knowledge of the authors and their professional contacts. Finally, a call was released to members of the OHC's Global One Health Community listserv, asking that they populate a Google document with information on One Health training, research, and outreach activities in North American academic and non-academic institutions. Results A current snapshot of North American One Health training, research, and outreach activities as of August 2016 has evolved. Conclusions It is clear that the One Health concept has gained considerable recognition during the first decade of the 21st century, with numerous current training and research activities carried out among North American academic, non-academic, government, corporate, and non-profit entities. PMID:27906120

  19. Montreal Archive of Sleep Studies: an open-access resource for instrument benchmarking and exploratory research.

    PubMed

    O'Reilly, Christian; Gosselin, Nadia; Carrier, Julie; Nielsen, Tore

    2014-12-01

    Manual processing of sleep recordings is extremely time-consuming. Efforts to automate this process have shown promising results, but automatic systems are generally evaluated on private databases, not allowing accurate cross-validation with other systems. In lacking a common benchmark, the relative performances of different systems are not compared easily and advances are compromised. To address this fundamental methodological impediment to sleep study, we propose an open-access database of polysomnographic biosignals. To build this database, whole-night recordings from 200 participants [97 males (aged 42.9 ± 19.8 years) and 103 females (aged 38.3 ± 18.9 years); age range: 18-76 years] were pooled from eight different research protocols performed in three different hospital-based sleep laboratories. All recordings feature a sampling frequency of 256 Hz and an electroencephalography (EEG) montage of 4-20 channels plus standard electro-oculography (EOG), electromyography (EMG), electrocardiography (ECG) and respiratory signals. Access to the database can be obtained through the Montreal Archive of Sleep Studies (MASS) website (http://www.ceams-carsm.ca/en/MASS), and requires only affiliation with a research institution and prior approval by the applicant's local ethical review board. Providing the research community with access to this free and open sleep database is expected to facilitate the development and cross-validation of sleep analysis automation systems. It is also expected that such a shared resource will be a catalyst for cross-centre collaborations on difficult topics such as improving inter-rater agreement on sleep stage scoring.

  20. Building America Systems Integration Research Annual Report. FY 2012

    SciTech Connect

    Gestwick, Michael

    2013-05-01

    This Building America FY2012 Annual Report includes an overview of Building America Program activities and the work completed by the National Renewable Energy Laboratory and the Building America industry consortia (the Building America teams). The annual report summarizes major technical accomplishments and progress towards the U.S. Department of Energy Building Technologies Program's multi-year goal of developing the systems innovations that enable risk-free, cost-effective, reliable, and durable efficiency solutions that reduce energy use by 30%-50% in both new and existing homes.

  1. Building America Systems Integration Research Annual Report: FY 2012

    SciTech Connect

    Gestwick, M.

    2013-05-01

    This document is the Building America FY2012 Annual Report, which includes an overview of Building America Program activities and the work completed by the National Renewable Energy Laboratory and the Building America industry consortia (the Building America teams). The annual report summarizes major technical accomplishments and progress towards the U.S. Department of Energy Building Technologies Program's multi-year goal of developing the systems innovations that enable risk-free, cost-effective, reliable, and durable efficiency solutions that reduce energy use by 30%-50% in both new and existing homes.

  2. Towards Sustainability -- Green Building, Sustainability Objectives, and Building America Whole House Systems Research

    SciTech Connect

    None, None

    2008-02-01

    This paper discusses Building America whole-house systems research within the broad effort to reduce or eliminate the environmental impact of building and provides specific recommendations for future Building America research based on Building Science Corporation’s experience with several recent projects involving green home building programs.

  3. The transition on North America from the warm humid Pliocene to the glaciated Quaternary traced by eolian dust deposition at a benchmark North Atlantic Ocean drill site

    NASA Astrophysics Data System (ADS)

    Lang, David C.; Bailey, Ian; Wilson, Paul A.; Beer, Christopher J.; Bolton, Clara T.; Friedrich, Oliver; Newsam, Cherry; Spencer, Megan R.; Gutjahr, Marcus; Foster, Gavin L.; Cooper, Matthew J.; Milton, J. Andrew

    2014-06-01

    We present Plio-Pleistocene records of sediment color, %CaCO3, foraminifer fragmentation, benthic carbon isotopes (δ13C) and radiogenic isotopes (Sr, Nd, Pb) of the terrigenous component from IODP Site U1313, a reoccupation of benchmark subtropical North Atlantic Ocean DSDP Site 607. We show that (inter)glacial cycles in sediment color and %CaCO3 pre-date major northern hemisphere glaciation and are unambiguously and consistently correlated to benthic oxygen isotopes back to 3.3 million years ago (Ma) and intermittently so probably back to the Miocene/Pliocene boundary. We show these lithological cycles to be driven by enhanced glacial fluxes of terrigenous material (eolian dust), not carbonate dissolution (the classic interpretation). Our radiogenic isotope data indicate a North American source for this dust (˜3.3-2.4 Ma) in keeping with the interpreted source of terrestrial plant wax-derived biomarkers deposited at Site U1313. Yet our data indicate a mid latitude provenance regardless of (inter)glacial state, a finding that is inconsistent with the biomarker-inferred importance of glaciogenic mechanisms of dust production and transport. Moreover, we find that the relation between the biomarker and lithogenic components of dust accumulation is distinctly non-linear. Both records show a jump in glacial rates of accumulation from Marine Isotope Stage, MIS, G6 (2.72 Ma) onwards but the amplitude of this signal is about 3-8 times greater for biomarkers than for dust and particularly extreme during MIS 100 (2.52 Ma). We conclude that North America shifted abruptly to a distinctly more arid glacial regime from MIS G6, but major shifts in glacial North American vegetation biomes and regional wind fields (exacerbated by the growth of a large Laurentide Ice Sheet during MIS 100) likely explain amplification of this signal in the biomarker records. 
Our findings are consistent with wetter-than-modern reconstructions of North American continental climate under the warm high

  4. Toward Establishing a Realistic Benchmark for Airframe Noise Research: Issues and Challenges

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.

    2010-01-01

    The availability of realistic benchmark configurations is essential to enable the validation of current Computational Aeroacoustics (CAA) methodologies and to further the development of new ideas and concepts that will foster the technologies of the next generation of CAA tools. The selection of a real-world configuration, the subsequent design and fabrication of an appropriate model for testing, and the acquisition of a suitably comprehensive aeroacoustic database are critical steps that demand great care and attention. In this paper, a brief account of the nose landing-gear configuration, being proposed jointly by NASA and the Gulfstream Aerospace Company as an airframe noise benchmark, is provided. The underlying thought processes and the resulting building-block steps that were taken during the development of this benchmark case are given. Resolution of critical, yet conflicting issues is discussed: the desire to maintain geometric fidelity versus model modifications required to accommodate instrumentation; balancing model scale size versus Reynolds number effects; and time, cost, and facility availability versus important parameters like surface finish and installation effects. The decisions taken during the experimental phase of a study can significantly affect the ability of a CAA calculation to reproduce the prevalent flow conditions and associated measurements. For the nose landing gear, the most critical of such issues are highlighted and the compromises made to resolve them are discussed. The results of these compromises will be summarized by examining the positive attributes and shortcomings of this particular benchmark case.

  5. Bibliometric analysis of Oropouche research: impact on the surveillance of emerging arboviruses in Latin America

    PubMed Central

    Culquichicón, Carlos; Cardona-Ospina, Jaime A.; Patiño-Barbosa, Andrés M.; Rodriguez-Morales, Alfonso J.

    2017-01-01

    Given the emergence and reemergence of viral diseases, particularly in Latin America, we would like to provide an analysis of the patterns of research and publication on Oropouche virus (OROV). We also discuss the implications of recent epidemics in certain areas of South America, and how more clinical and epidemiological information regarding OROV is urgently needed. PMID:28357048

  6. Policy for Research and Innovation in Latin America

    NASA Astrophysics Data System (ADS)

    Aguirre-Bastos, Carlos

    2010-02-01

    Latin America (LAC) is renewing efforts to build up research and innovation (R&I) capacities, guided by policies that consider the need to transform the traditional science system into a more dynamic entity. Policies have permitted the generation of new spaces to develop science, strengthen scientific communities, improve university-enterprise linkages, establish common agendas between public and private sectors, earmark special budgets, build new infrastructure, and improve the number and quality of scientific publications. In spite of much progress, LAC lags far behind developed countries: its universities rank lower than their international counterparts, the number of researchers is small, and funding is below an appropriate threshold. Some countries have innovated in a few economic sectors, while others remain technologically underdeveloped, and much of the countries' innovative capacities remain untapped. Policies are believed to still have little influence on social and economic development, and there is dissatisfaction in the academic and entrepreneurial sectors with their quality and relevance, or with the political will of governments to execute them. On the other hand, in the past decades the complexity of innovation systems has increased considerably, and this has yet to be taken fully into account in LAC policy definitions. The situation calls for decision makers to shape new framework conditions for R&I so that both processes co-evolve and are stimulated and guided toward solutions to the major problems of society. Considering the main features of complex systems, self-organization, emergence and non-linearity, R&I policy measures need to be seen as interventions in such a system, since the traditional leverage effects used for policy decisions in the past are increasingly obsolete. Policies must now use ``weak coordination mechanisms,'' foresight, mission statements, and visions. It is obvious that due to nonlinearities in the system, adaptive

  7. Research update on H5Nx HPAI in the Americas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A recent outbreak of highly pathogenic avian influenza occurred in the United States in late 2014. The lineage of the virus can be traced back to Chinese strains from 1996. The virus has moved from poultry into wild birds, and is thought to have spread into the Americas during the breedi...

  8. Building America System Research Results. Innovations for High Performance Homes

    SciTech Connect

    none,

    2006-05-01

    This report provides a summary of key lessons learned from the first 10 years of the Building America program and also includes a summary of the future challenges that must be met to reach the program's long-term performance goals.

  9. School Readiness Research in Latin America: Findings and Challenges

    ERIC Educational Resources Information Center

    Strasser, Katherine; Rolla, Andrea; Romero-Contreras, Silvia

    2016-01-01

    Educational results in Latin America (LA) are well below those of developed countries. One factor that influences how well children do at school is school readiness. In this article, we review studies conducted in LA on the readiness skills of preschool children. We begin by discussing contextual factors that affect what is expected of children…

  10. Stem cell research in Latin America: update, challenges and opportunities in a priority research area.

    PubMed

    Palma, Verónica; Pitossi, Fernando J; Rehen, Stevens K; Touriño, Cristina; Velasco, Iván

    2015-01-01

    Stem cell research is attracting wide attention as a promising and fast-growing field in Latin America, as it is worldwide. Many countries in the region have defined Regenerative Medicine as a research priority and a focus of investment. This field generates not only opportunities but also regulatory, technical and operative challenges. In this review, scientists from Uruguay, Mexico, Chile, Brazil and Argentina provide their views on stem cell research in each of their countries. Despite country-specific characteristics, all countries share several issues, such as regulatory challenges. Key initiatives of each country to promote stem cell research are also discussed. In conclusion, it is clear that regional integration should be emphasized more strongly and international collaboration promoted.

  11. Managing for Results in America's Great City Schools 2014: Results from Fiscal Year 2012-13. A Report of the Performance Measurement and Benchmarking Project

    ERIC Educational Resources Information Center

    Council of the Great City Schools, 2014

    2014-01-01

    In 2002 the "Council of the Great City Schools" and its members set out to develop performance measures that could be used to improve business operations in urban public school districts. The Council launched the "Performance Measurement and Benchmarking Project" to achieve these objectives. The purpose of the project was to:…

  12. Building America Residential System Research Results: Achieving 30% Whole House Energy Savings Level in the Hot-Dry and Mixed-Dry Climates

    SciTech Connect

    Building Industry Research Alliance; Building Science Consortium; Consortium for Advanced Residential Buildings; Davis Energy Group; Florida Solar Energy Center; IBACOS; National Association of Home Builders Research Center; National Renewable Energy Laboratory

    2006-01-01

    The Building America program conducts the system research required to reduce risks associated with the design and construction of homes that use an average of 30% to 90% less total energy for all residential energy uses than the Building America Research Benchmark, including research on homes that will use zero net energy on an annual basis. To measure the program's progress, annual research milestones have been established for five major climate regions in the United States. The system research activities required to reach each milestone take from 3 to 5 years to complete and include research in individual test houses, studies in pre-production prototypes, and research studies with lead builders that provide early examples that the specified energy savings level can be successfully achieved on a production basis. This report summarizes research results for the 30% energy savings level and demonstrates that lead builders can successfully provide 30% homes in the Hot-Dry/Mixed-Dry Climate Region on a cost-neutral basis.

  13. Summary of Prioritized Research Opportunities. Building America Planning Meeting, November 2-4, 2010

    SciTech Connect

    none,

    2011-02-01

    This report outlines the results of brainstorming sessions conducted at the Building America Fall 2010 planning meeting, in which research teams and national laboratories identified key research priorities to incorporate into multi-year planning, team research agendas, expert meetings, and technical standing committees.

  14. Limitations of Community College Benchmarking and Benchmarks

    ERIC Educational Resources Information Center

    Bers, Trudy H.

    2006-01-01

    This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.

  15. New global change research effort launched in Latin America

    NASA Astrophysics Data System (ADS)

    McClain, Michael E.; Galarraga, Remigio H.

    Latin America's mountains extend in a nearly unbroken chain from Mexico to Chile and reach elevations above 6000 m. The valleys of these mountains have seen the evolution of diverse and unique ecosystems, as well as the development of several of the region's most celebrated pre-Columbian civilizations. Today these same valleys contain some of Earth's most rapidly changing landscapes, as urban and agricultural frontiers expand and historical land use changes in response to new pressures from economic globalization. These valleys also face poorly understood threats linked to global climate change.

  16. Changing patterns of migration in Latin America: how can research develop intelligence for public health?

    PubMed

    Cabieses, Baltica; Tunstall, Helena; Pickett, Kate E; Gideon, Jasmine

    2013-07-01

    Migration patterns in Latin America have changed significantly in recent decades, particularly since the onset of global recession in 2007. These recent economic changes have highlighted and exacerbated the weakness of evidence from Latin America regarding migration-a crucial determinant of health. Migration patterns are constantly evolving in Latin America, but research on migration has not developed at the same speed. This article focuses on the need for better understanding of the living conditions and health of migrant populations in Latin America within the context of the recent global recession. The authors explain how new data on migrant well-being could be obtained through improved evidence from censuses and ongoing research surveys to 1) better inform policy-makers about the needs of migrant populations in Latin America and 2) help determine better ways of reaching undocumented immigrants. Longitudinal studies on immigrants in Latin America are essential for generating a better representation of migrant living conditions and health needs during the initial stages of immigration and over time. To help meet this need, the authors support the promotion of sustainable sources of data and evidence on the complex relationship between migration and health.

  17. Austin Community College Benchmarking Update.

    ERIC Educational Resources Information Center

    Austin Community Coll., TX. Office of Institutional Effectiveness.

    Austin Community College contracted with MGT of America, Inc. in spring 1999 to develop a peer and benchmark (best) practices analysis on key indicators. These indicators were updated in spring 2002 using data from eight Texas community colleges and four non-Texas institutions that represent large, comprehensive, urban community colleges, similar…

  18. Recent Trends in Soil Science and Agronomy Research in the Northern Great Plains of North America

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The book “Recent Trends in Soil Science and Agronomy Research in the Northern Great Plains of North America” summarizes published research in soil science and agronomy from various field experiments conducted in the soil-climatic/agro-ecological regions of the Northern Great Plains of North America....

  19. Cardiovascular Research Publications from Latin America between 1999 and 2008. A Bibliometric Study

    PubMed Central

    Colantonio, Lisandro D.; Baldridge, Abigail S.; Huffman, Mark D.; Bloomfield, Gerald S.; Prabhakaran, Dorairaj

    2015-01-01

    Background: Cardiovascular research publications seem to be increasing in Latin America overall. Objective: To analyze trends in cardiovascular publications and their citations from countries in Latin America between 1999 and 2008, and to compare them with those from the rest of the countries. Methods: We retrieved references of cardiovascular publications between 1999 and 2008 and their five-year post-publication citations from the Web of Knowledge database. For countries in Latin America, we calculated the total number of publications and their citation indices (total citations divided by number of publications) by year. We analyzed trends in publications and citation indices over time using Poisson regression models. The analysis was repeated for Latin America as a region, and compared with that for the rest of the countries grouped according to economic development. Results: Brazil (n = 6,132) had the highest number of publications in 1999-2008, followed by Argentina (n = 1,686), Mexico (n = 1,368) and Chile (n = 874). Most countries showed an increase in publications over time, led by Guatemala (36.5% annually [95%CI: 16.7%-59.7%]), Colombia (22.1% [16.3%-28.2%]), Costa Rica (18.1% [8.1%-28.9%]) and Brazil (17.9% [16.9%-19.1%]). However, trends in citation indices varied widely (from -33.8% to 28.4%). From 1999 to 2008, cardiovascular publications from Latin America increased by 12.9% (12.1%-13.5%) annually. However, the citation indices of Latin America increased 1.5% (1.3%-1.7%) annually, a lower increase than those of all other country groups analyzed. Conclusions: Although the number of cardiovascular publications from Latin America increased from 1999 to 2008, trends in citation indices suggest they may have had a relatively low impact on the research field, stressing the importance of considering quality and dissemination in local research policies. PMID:25714407
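
    The citation index used in the study above is a simple ratio of total citations to publication count. A minimal sketch of that arithmetic (the numbers below are hypothetical, not taken from the study):

    ```python
    def citation_index(total_citations, num_publications):
        """Citation index as defined in the study:
        total citations divided by number of publications."""
        return total_citations / num_publications

    # Hypothetical: 6,132 publications receiving 45,990 citations
    print(citation_index(45990, 6132))
    ```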

  20. Public Interest Energy Research (PIER) Program Development of a Computer-based Benchmarking and Analytical Tool. Benchmarking and Energy & Water Savings Tool in Dairy Plants (BEST-Dairy)

    SciTech Connect

    Xu, Tengfang; Flapper, Joris; Ke, Jing; Kramer, Klaas; Sathaye, Jayant

    2012-02-01

    The overall goal of the project is to develop a computer-based benchmarking and energy and water savings tool (BEST-Dairy) for use in the California dairy industry, covering four dairy processes: cheese, fluid milk, butter, and milk powder. The BEST-Dairy tool developed in this project provides three options for the user to benchmark each of the dairy products included in the tool, with the options differentiated by level of detail of the process or plant: 1) plant level; 2) process-group level; and 3) process-step level. For each detail level, the tool accounts for differences in production and other variables affecting energy use in dairy processes. The dairy products include cheese, fluid milk, butter, milk powder, etc. The BEST-Dairy tool can be applied to a wide range of dairy facilities to provide energy and water savings estimates, which are based upon comparisons with the best available reference cases established by reviewing information from international and national samples. We have performed and completed alpha- and beta-testing (field testing) of the BEST-Dairy tool, through which feedback from voluntary users in the U.S. dairy industry was gathered to validate and improve the tool's functionality. BEST-Dairy v1.2 was formally published in May 2011, and has been made available for free download from the internet (i.e., http://best-dairy.lbl.gov). A user's manual has been developed and published as the companion documentation for use with the BEST-Dairy tool. In addition, we also carried out technology transfer activities by engaging the dairy industry in the process of tool development and testing, including field testing, technical presentations, and technical assistance throughout the project. To date, users from more than ten countries in addition to those in the U.S. have downloaded BEST-Dairy from the LBNL website. It is expected that the use of the BEST-Dairy tool will advance understanding of energy and water
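
    BEST-Dairy's internal calculations are not described in this abstract. As a rough, assumed illustration of how a benchmarking tool of this kind can estimate savings, the sketch below compares a plant's energy intensity (energy per unit of product) against a best-practice reference intensity; all names and figures are hypothetical, not the tool's actual method:

    ```python
    def savings_estimate(actual_energy_kwh, production_tonnes, benchmark_kwh_per_tonne):
        """Estimate energy savings by comparing a plant's energy intensity
        (kWh per tonne of product) to a best-practice benchmark intensity."""
        actual_intensity = actual_energy_kwh / production_tonnes
        excess = max(0.0, actual_intensity - benchmark_kwh_per_tonne)
        return excess * production_tonnes

    # A plant using 1,200,000 kWh to make 10,000 t of cheese (120 kWh/t)
    # against a 90 kWh/t reference could save roughly 300,000 kWh.
    print(savings_estimate(1_200_000, 10_000, 90))
    ```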

  1. Research Universities and the Future of America: Ten Breakthrough Actions Vital to Our Nation's Prosperity and Security

    ERIC Educational Resources Information Center

    National Academies Press, 2012

    2012-01-01

    "Research Universities and the Future of America" presents critically important strategies for ensuring that our nation's research universities contribute strongly to America's prosperity, security, and national goals. Widely considered the best in the world, our nation's research universities today confront significant financial…

  2. Review on space weather in Latin America. 1. The beginning from space science research

    NASA Astrophysics Data System (ADS)

    Denardini, Clezio Marcos; Dasso, Sergio; Gonzalez-Esparza, J. Americo

    2016-11-01

    The present work is the first of a three-part review on space weather in Latin America. It comprises the evolution of several Latin American institutions investing in space science since the 1960s, focusing on the solar-terrestrial interactions that today are commonly called space weather. Despite recognizing advances in space research across all of Latin America, this review is restricted to the development observed in three countries in particular (Argentina, Brazil and Mexico), because these countries have recently developed operational centers for monitoring space weather. The review starts with a brief summary of the first groups to start working in space science in Latin America. This first part of the review closes with the current status and research interests of these groups, described in relation to their most significant works and the challenges of the next decade, in order to aid in solving open issues in space weather.

  3. Researchers Dispute Notion that America Lacks Scientists and Engineers

    ERIC Educational Resources Information Center

    Monastersky, Richard

    2007-01-01

    Researchers who track the American labor market told Congress last week that, contrary to conventional wisdom, the United States has more than enough scientists and engineers and that federal agencies and universities should reform the way they train young scientists to better match the supply of scientists with the demand for researchers. At a…

  4. REVIEW OF CONTEMPORARY RESEARCH ON LITERACY AND ADULT EDUCATION IN LATIN AMERICA.

    ERIC Educational Resources Information Center

    MARQUARDT, WILLIAM F.

    A review of research categorizes literacy and adult basic education in Latin America as follows: (1) general reports of the numbers and occupational types of illiterates in each country; (2) reports of the activities and accomplishments of public, private, and international organizations and groups in promoting literacy and adult basic…

  5. Biogeochemical research priorities for sustainable biofuel and bioenergy feedstock production in the Americas

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Rapid expansion in biomass production for biofuels and bioenergy in the Americas is increasing demands on the ecosystem resources required to sustain soil and site productivity. We review the current state of knowledge and highlight gaps in research on biogeochemical processes and ecosystem sustaina...

  6. Agenda 2020: A Technology Vision and Research Agenda for America's Forest, Wood and Paper Industry

    SciTech Connect

    none,

    1994-11-01

    In November 1994, the forest products industry published Agenda 2020: A Technology Vision and Research Agenda for America's Forest, Wood and Paper Industry, which articulated the industry's vision. This document set the foundation for collaborative efforts between the industry and the federal government.

  7. Urban Teaching in America: Theory, Research, and Practice in K-12 Classrooms

    ERIC Educational Resources Information Center

    Stairs, Andrea J.; Donnell, Kelly A.; Dunn, Alyssa Hadley

    2011-01-01

    "Urban Teaching in America: Theory, Research, and Practice in K-12 Classrooms" is a brief yet comprehensive overview of urban teaching. Undergraduate and graduate students who are new to the urban context will develop a deeper understanding of the urban teaching environment and the challenges and opportunities they can expect to face while…

  8. Using Participatory Action Research to Study the Implementation of Career Development Benchmarks at a New Zealand University

    ERIC Educational Resources Information Center

    Furbish, Dale S.; Bailey, Robyn; Trought, David

    2016-01-01

    Benchmarks for career development services at tertiary institutions have been developed by Careers New Zealand. The benchmarks are intended to provide standards derived from international best practices to guide career development services. A new career development service was initiated at a large New Zealand university just after the benchmarks…

  9. The law and politics of embryo research in America.

    PubMed

    Snead, O Carter

    2011-01-01

    The moral, legal, and public policy dispute over embryonic stem cell research (and related matters, such as human cloning) is the most prominent issue in American public bioethics of the past decade. The primary moral question raised by the practice of embryonic stem cell research is whether it is defensible to disaggregate (and thus destroy) living human embryos in order to derive pluripotent cells (stem cells) for purposes of basic research that may someday yield regenerative therapies. This essay will explain the legal and political dimensions of the embryonic stem cell debate as it has unfolded at the national level in the United States, contrasting the position and thinking of President Clinton's administration with that of George W. Bush. Building upon this, a set of brief reflections is offered on the form and substance of the American federal approach to this public matter and whether it is ultimately sustainable to join the issue in this particular way.

  10. Research and application of biochar in North America

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Biochar production and application in soil are proposed as a good strategy for carbon sequestration, providing simultaneous benefits for improving soil quality and increasing agronomic productivity. In this chapter, we summarize historical and current research on and application of biochar in North Am...

  11. "Salud America!" Developing a National Latino Childhood Obesity Research Agenda

    ERIC Educational Resources Information Center

    Ramirez, Amelie G.; Chalela, Patricia; Gallion, Kipling J.; Green, Lawrence W.; Ottoson, Judith

    2011-01-01

    U.S. childhood obesity has reached epidemic proportions, with one third of children overweight or obese. Latino children have some of the highest obesity rates, a concern because they are part of the youngest and fastest-growing U.S. minority group. Unfortunately, scarce research data on Latinos hinders the development and implementation of…

  12. The NAS kernel benchmark program

    NASA Technical Reports Server (NTRS)

    Bailey, D. H.; Barton, J. T.

    1985-01-01

    A collection of benchmark test kernels that measure supercomputer performance has been developed for the use of the NAS (Numerical Aerodynamic Simulation) program at the NASA Ames Research Center. This benchmark program is described in detail and the specific ground rules are given for running the program as a performance test.
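
    The NAS kernels themselves are Fortran routines not reproduced in this record. As a generic, assumed illustration of the kernel-timing approach such benchmark programs use, the sketch below times a placeholder floating-point kernel over several repeats and reports the best rate in MFLOP/s; the kernel and the "best of N" rule are illustrative, not the actual NAS ground rules:

    ```python
    import time

    def time_kernel(kernel, flop_count, repeats=5):
        """Run a kernel several times and report the best observed MFLOP/s,
        in the spirit of kernel-style supercomputer benchmarks."""
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            kernel()
            best = min(best, time.perf_counter() - start)
        return flop_count / best / 1e6

    # Placeholder kernel: a multiply-accumulate sweep (2 flops per element).
    n = 100_000
    a = [1.0] * n

    def mac_sweep():
        s = 0.0
        for x in a:
            s += 2.5 * x  # one multiply + one add
        return s

    print(f"{time_kernel(mac_sweep, 2 * n):.1f} MFLOP/s")
    ```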

  13. Self-benchmarking Guide for Data Centers: Metrics, Benchmarks, Actions

    SciTech Connect

    Mathew, Paul; Ganguly, Srirupa; Greenberg, Steve; Sartor, Dale

    2009-07-13

    This guide describes energy efficiency metrics and benchmarks that can be used to track the performance of and identify potential opportunities to reduce energy use in data centers. This guide is primarily intended for personnel who have responsibility for managing energy use in existing data centers - including facilities managers, energy managers, and their engineering consultants. Additionally, data center designers may also use the metrics and benchmarks described in this guide for goal-setting in new construction or major renovation. This guide provides the following information: (1) A step-by-step outline of the benchmarking process. (2) A set of performance metrics for the whole building as well as individual systems. For each metric, the guide provides a definition, performance benchmarks, and potential actions that can be inferred from evaluating this metric. (3) A list and descriptions of the data required for computing the metrics. This guide is complemented by spreadsheet templates for data collection and for computing the benchmarking metrics. This guide builds on prior data center benchmarking studies supported by the California Energy Commission. Much of the benchmarking data are drawn from the LBNL data center benchmarking database that was developed from these studies. Additional benchmark data were obtained from engineering experts including facility designers and energy managers. This guide also builds on recent research supported by the U.S. Department of Energy's Save Energy Now program.
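
    The guide's specific metrics are not enumerated in this summary. As an assumed illustration of the "compute a metric, compare to a benchmark, infer an action" step it describes, the sketch below uses Power Usage Effectiveness (PUE), a widely used whole-building data-center metric; the benchmark threshold is a placeholder, not a value from the guide:

    ```python
    def pue(total_facility_kwh, it_equipment_kwh):
        """Power Usage Effectiveness: total facility energy divided by
        IT equipment energy. Lower is better; 1.0 is the theoretical ideal."""
        return total_facility_kwh / it_equipment_kwh

    def assess(pue_value, benchmark=1.5):
        """Compare a measured PUE against an illustrative benchmark value."""
        return "meets benchmark" if pue_value <= benchmark else "above benchmark"

    site = pue(2_000_000, 1_250_000)
    print(site, assess(site))
    ```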

  14. Benchmarking, Research, Development, and Support for ORNL Automated Image and Signature Retrieval (AIR/ASR) Technologies

    SciTech Connect

    Tobin, K.W.

    2004-06-01

    This report describes the results of a Cooperative Research and Development Agreement (CRADA) with Applied Materials, Inc. (AMAT) of Santa Clara, California. This project encompassed the continued development and integration of the ORNL Automated Image Retrieval (AIR) technology, and an extension of the technology denoted Automated Signature Retrieval (ASR), and other related technologies with the Defect Source Identification (DSI) software system that was under development by AMAT at the time this work was performed. In the semiconductor manufacturing environment, defect imagery is used to diagnose problems in the manufacturing line, train yield management engineers, and examine historical data for trends. Image management in semiconductor data systems is a growing cause of concern in the industry as fabricators are now collecting up to 20,000 images each week. In response to this concern, researchers at the Oak Ridge National Laboratory (ORNL) developed a semiconductor-specific content-based image retrieval method and system, also known as AIR. The system uses an image-based query-by-example method to locate and retrieve similar imagery from a database of digital imagery using visual image characteristics. The query method is based on a unique architecture that takes advantage of the statistical, morphological, and structural characteristics of image data, generated by inspection equipment in industrial applications. The system improves the manufacturing process by allowing rapid access to historical records of similar events so that errant process equipment can be isolated and corrective actions can be quickly taken to improve yield. The combined ORNL and AMAT technology is referred to hereafter as DSI-AIR and DSI-ASR.
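
    The AIR system's actual architecture is not detailed in this record. The sketch below shows only the generic query-by-example pattern the abstract describes: each image is reduced to a feature vector, and the database is ranked by distance to the query's vector. The feature values and record names are illustrative assumptions:

    ```python
    import math

    def euclidean(a, b):
        """Distance between two feature vectors (e.g., statistical or
        morphological descriptors extracted from defect imagery)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def query_by_example(query_features, database):
        """Rank stored images by similarity to the query image's features."""
        return sorted(database, key=lambda rec: euclidean(query_features, rec[1]))

    # Illustrative database of (image id, feature vector) pairs.
    db = [
        ("scratch_01", [0.9, 0.1, 0.4]),
        ("particle_07", [0.2, 0.8, 0.6]),
        ("scratch_02", [0.85, 0.15, 0.35]),
    ]
    ranked = query_by_example([0.9, 0.1, 0.4], db)
    print([rec[0] for rec in ranked])  # most similar first
    ```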

  15. Review on space weather in Latin America. 2. The research networks ready for space weather

    NASA Astrophysics Data System (ADS)

    Denardini, Clezio Marcos; Dasso, Sergio; Gonzalez-Esparza, J. Americo

    2016-11-01

The present work is the second of a three-part review of space weather in Latin America, specifically observing its evolution in three countries (Argentina, Brazil and Mexico). This work comprises a summary of scientific challenges in space weather research that are considered to be open scientific questions and how they are being addressed in terms of instrumentation by the international community, including the Latin American groups. We also provide an inventory of the networks and collaborations being constructed in Latin America, including details on the data processing, capabilities and a basic description of the resulting variables. These instrumental networks, currently used for space science research, are gradually being incorporated into space weather monitoring data pipelines, as their data provide key variables for monitoring and forecasting, allowing these centers to issue watches, warnings and alerts.

  16. The Applied Meteorology Unit: Nineteen Years Successfully Transitioning Research Into Operations for America's Space Program

    NASA Technical Reports Server (NTRS)

    Madura, John T.; Bauman, William H., III; Merceret, Francis J.; Roeder, William P.; Brody, Frank C.; Hagemeyer, Bartlett C.

    2011-01-01

The Applied Meteorology Unit (AMU) provides technology development and transition services to improve operational weather support to America's space program. The AMU was founded in 1991 and operates under a tri-agency Memorandum of Understanding (MOU) between the National Aeronautics and Space Administration (NASA), the United States Air Force (USAF) and the National Weather Service (NWS) (Ernst and Merceret, 1995). It is co-located with the 45th Weather Squadron (45WS) at Cape Canaveral Air Force Station (CCAFS) and funded by the Space Shuttle Program. Its primary customers are the 45WS, the Spaceflight Meteorology Group (SMG) operated for NASA by the NWS at the Johnson Space Center (JSC) in Houston, TX, and the NWS forecast office in Melbourne, FL (MLB). The gap between research and operations is well known. All too frequently, the process of transitioning research to operations fails for various reasons. The mission of the AMU is, in essence, to bridge this gap for America's space program.

  17. Rodent middens, a new method for Quaternary research in arid zones of South America

    USGS Publications Warehouse

    Betancourt, J.L.; Saavedra, B.

    2002-01-01

In arid and semi-arid regions of South America, historical evidence for climate and vegetation change is scarce despite its importance for determining reference conditions and rates of natural variability in areas susceptible to modern desertification. Normal lines of evidence, such as pollen stratigraphies from lakes, are either rare or unobtainable in deserts; studies of late Quaternary vegetation history are few and generally inconclusive. This gap in knowledge may be corrected with the discovery and development of fossil rodent middens in rocky environments throughout arid South America. These middens, mostly the work of Lagidium, Phyllotis, Abrocoma and Octodontomys, are rich in readily identifiable plant macrofossils, cuticles and pollen, as well as vertebrate and insect remains. In the North American deserts, more than 2,500 woodrat (Neotoma) middens analyzed since 1960 have yielded a detailed history of environmental change during the past 40,000 years. Preliminary work in the pre-puna, Monte and Patagonian Deserts of western Argentina, the Atacama Desert of northern Chile/southern Peru, the Mediterranean matorral of central Chile, and the Puna of the Andean altiplano suggests a similar potential for rodent middens in South America. Here we borrow from the North American experience to synthesize methodologies and approaches, summarize preliminary work, and explore the potential of rodent midden research in South America.

  18. Critical uncertainties and research needs for the restoration and conservation of native lampreys in North America

    USGS Publications Warehouse

    Mesa, Matthew G.; Copeland, Elizabeth S.

    2009-01-01

We briefly reviewed the literature, queried selected researchers, and drew upon our own experience to describe some critical uncertainties and research needs for the conservation and restoration of native lampreys in North America. We parsed the uncertainties and research needs into five general categories: (1) population status; (2) systematics; (3) passage at dams, screens, and other structures; (4) species identification in the field; and (5) general biology and ecology. For each topic, we describe why the subject is important for lampreys, briefly summarize our current state of knowledge, and discuss the key data or information gaps.

  19. The establishment of an attachment research network in Latin America: goals, accomplishments, and challenges.

    PubMed

    Causadias, José M; Sroufe, L Alan; Herreros, Francisca

    2011-03-01

In the face of a pressing need for expanded attachment research programs and attachment-informed interventions in Latin America, a research network was established: Red Iberoamericana de Apego: RIA (Iberian-American Attachment Network). The purpose of RIA is to promote human development and well-being, informed by attachment theory, centering on research, and with implications for public policies, education, and intervention. We report the proceedings of the second meeting of RIA held in Panama City, Panama, in February 2010. As part of this meeting, RIA sponsored the first Latin-American attachment conference. Proceedings of the conference are described, as are future goals of this new organization.

  20. Universal Access to Health and Universal Health Coverage: identification of nursing research priorities in Latin America

    PubMed Central

    Cassiani, Silvia Helena De Bortoli; Bassalobre-Garcia, Alessandra; Reveiz, Ludovic

    2015-01-01

Objective: To establish a regional list of nursing research priorities in health systems and services in the Region of the Americas based on the concepts of Universal Access to Health and Universal Health Coverage. Method: five-stage consensus process: systematic review of literature; appraisal of resulting questions and topics; ranking of the items by graduate program coordinators; discussion and ranking amongst a forum of researchers and public health leaders; and consultation with the Ministries of Health of the Pan American Health Organization's member states. Results: the resulting list of nursing research priorities consists of 276 study questions/topics, which are sorted into 14 subcategories distributed into six major categories: 1. Policies and education of nursing human resources; 2. Structure, organization and dynamics of health systems and services; 3. Science, technology, innovation, and information systems in public health; 4. Financing of health systems and services; 5. Health policies, governance, and social control; and 6. Social studies in the health field. Conclusion: the list of nursing research priorities is expected to serve as guidance and support for nursing research on health systems and services across Latin America. Not only researchers, but also Ministries of Health, leaders in public health, and research funding agencies are encouraged to use the results of this list to help inform research-funding decisions. PMID:26487014

  1. TLE and HEET Research in South America in the Framework of the LEONA Collaborative Network

    NASA Astrophysics Data System (ADS)

    Sao Sabbas, F.

    2013-12-01

    South America is one of the most active thunderstorm regions of the world. About 20 years ago, it was discovered that thunderstorm electrical activity, in the form of lightning discharges, can excite Transient Luminous Events - TLEs in the upper atmosphere directly above it. More recently, measurements of High Energy Emissions from Thunderstorms - HEET from space revealed that they also produce high energy emissions. To date, six field campaigns between 2002 and 2012 have been successfully performed in Brazil to make TLE observations. More than 700 events, mainly sprites, have been recorded over thunderstorms in different places in South America during these campaigns. Given the high thunderstorm electrical activity in our region, an extremely high TLE occurrence rate is expected, as well as intense emission of TGFs, high-energy electron beams, neutron beams and X-rays, i.e., HEET in general. This paper will review the main results of the different TLE observations performed from Brazil to date and the research on TLEs and HEET performed by the Atmospheric and Space Electrodynamical Coupling - ACATMOS group at INPE. It will introduce the LEONA: Transient Luminous Event and Thunderstorm High Energy Emission Collaborative Network in Latin America. The team unites scientists of research institutions from several countries to investigate TLEs, HEET and related phenomena. It will present the collaborative network of cameras that have already been installed in Brazil, Peru and Argentina for continuous remote observation of TLEs, and the prospective installation of 21 optical observation sites in other locations in South America, as well as a neutron detector in southern Brazil. The LEONA project has been recently approved by the Brazilian research funding agency FAPESP. The Argentina sites will allow for combined collaborative studies of TLEs and HEET produced by thunderstorms in the Pampas region, where the most severe thunderstorms in South America occur.

  2. Performance Trajectories and Performance Gaps as Achievement Effect-Size Benchmarks for Educational Interventions. MDRC Working Papers on Research Methodology

    ERIC Educational Resources Information Center

    Bloom, Howard S.; Hill, Carolyn J.; Black, Alison Rebeck; Lipsey, Mark W.

    2008-01-01

    This paper explores two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions. The first approach characterizes the natural developmental progress in achievement by students from one year to the next as effect sizes. Data for seven nationally standardized achievement tests show large…
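The first approach described above treats year-to-year growth in achievement as a standardized mean difference. A minimal sketch of that computation follows, using a pooled-standard-deviation effect size (Cohen's d); the numbers are illustrative, not data from the paper.

```python
import math

def pooled_sd(sd_a, n_a, sd_b, n_b):
    """Pooled standard deviation of two groups."""
    return math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2)
                     / (n_a + n_b - 2))

def effect_size(mean_a, mean_b, sd_a, n_a, sd_b, n_b):
    """Standardized mean difference: (mean_a - mean_b) / pooled SD."""
    return (mean_a - mean_b) / pooled_sd(sd_a, n_a, sd_b, n_b)

# Hypothetical scale scores: grade-5 mean 220 vs. grade-4 mean 210,
# both with SD 20 and n = 100, gives a one-year "growth benchmark" of d = 0.5.
print(round(effect_size(220, 210, 20, 100, 20, 100), 2))  # -> 0.5
```

An intervention's effect size can then be read against such empirical growth (or gap) benchmarks rather than against generic "small/medium/large" labels.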

  3. Benchmarking in Student Affairs.

    ERIC Educational Resources Information Center

    Mosier, Robert E.; Schwarzmueller, Gary J.

    2002-01-01

    Discusses the use of benchmarking in student affairs, focusing on issues related to student housing. Provides examples of how benchmarking has influenced administrative practice at many institutions. (EV)

  4. Benchmark Airport Charges

    NASA Technical Reports Server (NTRS)

    de Wit, A.; Cohn, N.

    1999-01-01

The Netherlands Directorate General of Civil Aviation (DGCA) commissioned Hague Consulting Group (HCG) to complete a benchmark study of airport charges at twenty-eight airports in Europe and around the world, based on 1996 charges. This study followed previous DGCA research on the topic but included more airports in much more detail. The main purpose of this new benchmark study was to provide insight into the levels and types of airport charges worldwide and into recent changes in airport charge policy and structure. This paper describes the 1996 analysis. It is intended that this work be repeated every year in order to follow developing trends and provide the most up-to-date information possible.

  5. Benchmarking in academic pharmacy departments.

    PubMed

    Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann

    2010-10-11

This paper discusses benchmarking in academic pharmacy and recommendations for its potential uses in academic pharmacy departments. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.

  6. Building America

    SciTech Connect

    Brad Oberg

    2010-12-31

    IBACOS researched the constructability and viability issues of using high performance windows as one component of a larger approach to building houses that achieve the Building America 70% energy savings target.

  7. Working ethics: William Beaumont, Alexis St. Martin, and medical research in antebellum America.

    PubMed

    Green, Alexa

    2010-01-01

    Analyzing William Beaumont's relationship with his experimental subject, Alexis St. Martin, this article demonstrates how the "research ethics" of antebellum America were predicated on models of employment, servitude, and labor. The association between Beaumont and St. Martin drew from and was understood in terms of the ideas and practices of contract labor, informal domestic servitude, indentures, and military service. Beaumont and St. Martin lived through an important period of transition in which personal master-servant relations existed alongside the "free" contract labor of market capitalism. Their relationship reflected and helped constitute important developments in nineteenth-century American labor history.

  8. Summary of Prioritized Research Opportunities: Building America Program Planning Meeting, Washington, D.C., November 2-4, 2010

    SciTech Connect

    Not Available

    2011-02-01

    This report outlines the results of brainstorming sessions conducted at the Building America Fall 2010 planning meeting, in which research teams and national laboratories identified key research priorities to incorporate into multi-year planning, team research agendas, expert meetings, and technical standing committees.

  9. The current status of ethnobiological research in Latin America: gaps and perspectives

    PubMed Central

    2013-01-01

    Background Recent reviews have demonstrated an increase in the number of papers on ethnobiology in Latin America. Among factors that have influenced this increase are the biological and cultural diversity of these countries and the general scientific situation in some countries. This study aims to assess the panorama of ethnobiological research in Latin America by analyzing its evolution, trends, and future prospects. Methods To conduct this study, we searched for papers in the Scopus (http://www.scopus.com) and Web of Science (http://www.isiknowledge.com) databases. The search was performed using combinations of keywords and the name of each Latin American country. The following countries were included in this study: Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Cuba, Ecuador, Guatemala, Haiti, Honduras, Mexico, Panama, Paraguay, Peru, Venezuela, and Uruguay. Results and conclusions According to our inclusion criteria, 679 ethnobiological studies conducted in Latin America were found for the period between 1963 and 2012. Of these studies, 289 (41%) were conducted in Brazil, 153 in Mexico (22%), 61 in Peru (9%), 58 in Argentina (8%), 45 in Bolivia (6%), and 97 (14%) in other Latin American countries. The increased number of publications related to this area of knowledge in recent years demonstrates the remarkable growth of ethnobiology as a science. Ethnobiological research may be stimulated by an increase in the number of scientific events and journals for study dissemination and by the creation of undergraduate courses and graduate programs to train ethnoscientists who will produce high-quality studies, especially in certain countries. PMID:24131758

  10. La Investigacion Participativa en America Latina: Retablo de Papel, 10 (Participatory Research in Latin America: Series, 10).

    ERIC Educational Resources Information Center

    Vejarano, Gilberto M., Comp.

    The following papers (titles are translated into English) were presented at a conference on participatory research: "Participatory Research, Popular Knowledge, and Power"; "Participatory Research and Adult Literacy"; "Developments and Perspectives on Participatory Research"; "Popular Education and Participatory…

  11. Review of mammalogical research in the Guianas of northern South America.

    PubMed

    Lim, Burton K

    2016-03-01

    Research on mammals in the Guianas of northern South America has had a checkered history. In this review, I summarize the notable contributions to mammalogical study in Guyana, Suriname and French Guiana. These studies began in the mid-18th century with the binomial nomenclature system of scientific classification created by the Swedish naturalist Carl Linnaeus, who described 23 species new to science based on holotype specimens from the Guianas. Notwithstanding popular accounts by amateur naturalists visiting this region, over the next 7 decades there was only sporadic taxonomic work done on Guianan mammals primarily by researchers at European museums. The first comprehensive biological exploration took place in the 1840s during a geographic survey of the boundaries of British Guiana. However, it was not until almost half a century later that scientific publications began to regularly document the increasing species diversity in the region, including the prodigious work of Oldfield Thomas at the British Museum of Natural History in London. Another lull in the study of mammals occurred in the mid-1910s to the early 1960s after which foreign researchers began to rediscover the Guianas and their pristine habitats. This biological renaissance is still ongoing and I give a prospectus on the direction of future research in one of the last frontiers of tropical rainforest. An initiative that would be greatly beneficial is the establishment of a university network in the Guianas with graduate-based research to develop a cadre of professional experts on biodiversity and evolution as seen in other countries of South America.

  12. National research for health systems in Latin America and the Caribbean: moving towards the right direction?

    PubMed Central

    2014-01-01

    Background National Research for Health Systems (NRfHS) in Latin America and the Caribbean (LAC) have shown growth and consolidation in the last few years. A structured, organized system will facilitate the development and implementation of strategies for research for health to grow and contribute towards people’s health and equity. Methods We conducted a survey with the health managers from LAC countries that form part of the Ibero-American Ministerial Network for Health Education and Research. Results From 13 of 18 questionnaires delivered, we obtained information on the NRfHS governance and management structures, the legal and political framework, the research priorities, existing financing schemes, and the main institutional actors. Data on investment in science and technology, scientific production, and on the socio-economic reality of countries were obtained through desk review focused on regional/global data sources to increase comparability. Conclusions By comparing the data gathered with a review carried out in 2008, we were able to document the advances in research for health system development in the region, mostly in setting governance, coordination, policies, and regulations, key for better functionality of research for health systems. However, in spite of these advances, growth and consolidation of research for health systems in the region is still uneven. PMID:24602201

  13. Illicit drug use research in Latin America: epidemiology, service use, and HIV.

    PubMed

    Aguilar-Gaxiola, Sergio; Medina-Mora, María Elena; Magaña, Cristina G; Vega, William A; Alejo-Garcia, Christina; Quintanar, Tania Real; Vazquez, Lucía; Ballesteros, Patricia D; Ibarra, Juan; Rosales, Heidi

    2006-09-01

The purpose of this article is to review the research status of illicit drug use and its data sources in Latin America, with particular attention to the research that has been produced in the past 15 years in the epidemiology of illicit drug use, services utilization, and the relationship between HIV and drug use. This article complements the series of articles that are published in this same volume, which examine drug abuse research (epidemiology, prevention, and treatment) and HIV prevention in Latinos residing in the United States. This review resulted from extensive international and national searches using the following databases: Current Contents Connect, Social and Behavioral Sciences; EBSCO; EMBASE(R) Psychiatry; Evidence Based Medicine (through OVID); Medline, Neurosciences, PsychINFO, Pubmed, BIREME/PAHO/WHO--Virtual Health Library, and SciELO. Papers selected for further review included those published in Spanish, English, and Portuguese in peer-reviewed journals. From the evidence reviewed, it was found that the published research literature is heavily concentrated on descriptive epidemiologic surveys, providing primarily prevalence rates and general information on associated factors. Evidence on patterns of service delivery and HIV prevention and treatment is limited. The cumulative scope of this research clearly indicates variability in quantity and quality of research across Latin American nations and the need for greater uniformity in data collection elements, methodologies, and the creation of international collaborative research networks.

  14. [Research on violence against women in Latin America: from blind empiricism to theory without data].

    PubMed

    Castro, Roberto; Riquer, Florinda

    2003-01-01

    Research on violence against women in Latin America presents an interesting paradox: while the number of studies is quite small, there also appears to be a sense that research on this topic has been exhausted, despite the lack of any definitive responses to the nature and causes of the problem. This results from the boom in studies with a strong empirical focus, lacking any basis in more general sociological theory. On the other hand, research using social theory tends to ignore the existing mediations between structural arrangements and any individual specific behavior, as well as the interactive nature of domestic violence. Meanwhile, empirical research presents inconsistent results and tends to run into methodological problems such as operational confusion, contradictory findings, and results and recommendations that are too obvious. New research designs must be developed to enrich the field and which are solidly based on the body of conceptual knowledge in social sciences, abandoning designs without theory and those which are merely statistical. Only then will it be possible to imagine the new research questions that the problem of violence requires.

  15. [Intellectual development disorders in Latin America: a framework for setting policy priorities for research and care].

    PubMed

    Lazcano-Ponce, Eduardo; Katz, Gregorio; Allen-Leigh, Betania; Magaña Valladares, Laura; Rangel-Eudave, Guillermina; Minoletti, Alberto; Wahlberg, Ernesto; Vásquez, Armando; Salvador-Carulla, Luis

    2013-09-01

    Intellectual development disorders (IDDs) are a set of development disorders characterized by significantly limited cognitive functioning, learning disorders, and disorders related to adaptive skills and behavior. Previously grouped under the term "intellectual disability," this problem has not been widely studied or quantified in Latin America. Those affected are absent from public policy and do not benefit from government social development and poverty reduction strategies. This article offers a critical look at IDDs and describes a new taxonomy; it also proposes recognizing IDDs as a public health issue and promoting the professionalization of care, and suggests an agenda for research and regional action. In Latin America there is no consensus on the diagnostic criteria for IDDs. A small number of rehabilitation programs cover a significant proportion of the people who suffer from IDDs, evidence-based services are not offered, and health care guidelines have not been evaluated. Manuals on psychiatric diagnosis focus heavily on identifying serious IDDs and contribute to underreporting and erroneous classification. The study of these disorders has not been a legal, social science, or public health priority, resulting in a dearth of scientific evidence on them. Specific competencies and professionalization of care for these persons are needed, and interventions must be carried out with a view to prevention, rehabilitation, community integration, and inclusion in the work force.

  16. The NAS Parallel Benchmarks

    SciTech Connect

    Bailey, David H.

    2009-11-15

    The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aerodynamic Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage.
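As a flavor of the kind of kernel the NPB suite specifies "with pencil and paper," here is a simplified sketch loosely modeled on the EP (Embarrassingly Parallel) benchmark, which generates uniform random pairs and counts those accepted by the Marsaglia polar method. This is an illustrative reduction, not the official NPB specification (which fixes the random-number generator, problem sizes, and verification sums).

```python
import random
import time

def ep_kernel(n, seed=42):
    """Simplified EP-style kernel: count uniform random pairs in
    [-1, 1)^2 that fall inside the unit circle (the acceptance step
    of the Marsaglia polar method for generating Gaussian deviates)."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(n):
        x = 2.0 * rng.random() - 1.0
        y = 2.0 * rng.random() - 1.0
        if x * x + y * y <= 1.0:
            accepted += 1
    return accepted

# Time the kernel; the acceptance ratio should approximate pi/4 ~ 0.785.
start = time.perf_counter()
hits = ep_kernel(100_000)
elapsed = time.perf_counter() - start
print(f"accepted {hits} pairs in {elapsed:.3f}s, ratio {hits / 100_000:.3f}")
```

The real benchmark reports a rate (e.g., operations per second) for a fixed problem class, so different machines can be compared on identical, algorithmically specified work.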

  17. [Benchmarking in health care: conclusions and recommendations].

    PubMed

    Geraedts, Max; Selbmann, Hans-Konrad

    2011-01-01

    The German Health Ministry funded 10 demonstration projects and accompanying research of benchmarking in health care. The accompanying research work aimed to infer generalisable findings and recommendations. We performed a meta-evaluation of the demonstration projects and analysed national and international approaches to benchmarking in health care. It was found that the typical benchmarking sequence is hardly ever realised. Most projects lack a detailed analysis of structures and processes of the best performers as a starting point for the process of learning from and adopting best practice. To tap the full potential of benchmarking in health care, participation in voluntary benchmarking projects should be promoted that have been demonstrated to follow all the typical steps of a benchmarking process.

  18. The NAS parallel benchmarks

    NASA Technical Reports Server (NTRS)

    Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)

    1993-01-01

    A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.

  19. Improving mental and neurological health research in Latin America: a qualitative study

    PubMed Central

    Fiestas, Fabián; Gallo, Carla; Poletti, Giovanni; Bustamante, Inés; Alarcón, Renato D; Mari, Jair J; Razzouk, Denise; Olifson, Sylvie; Mazzotti, Guido

    2009-01-01

    Background Research evidence is essential to inform policies, interventions and programs, and yet research activities in mental and neurological (MN) health have been largely neglected, particularly in low- and middle-income countries. Many challenges have been identified in the production and utilization of research evidence in Latin American countries, and more work is needed to overcome this disadvantageous situation. This study aims to address the situation by identifying initiatives that could improve MN health research activities and implementation of their results in the Latin American region. Methods Thirty-four MN health actors from 13 Latin American countries were interviewed as part of an initiative by the Global Forum for Health Research and the World Health Organization to explore the status of MN health research in low- and middle-income countries in Africa, Asia and Latin-America. Results A variety of recommendations to increase MN health research activities and implementation of their results emerged in the interviews. These included increasing skilled human resources in MN health interventions and research, fostering greater participation of stakeholders in the generation of research topics and projects, and engendering the interest of national and international institutions in important MN health issues and research methodologies. In the view of most participants, government agencies should strive to have research results inform the decision-making process in which they are involved. Thus these agencies would play a key role in facilitating and funding research. Participants also pointed to the importance of academic recognition and financial rewards in attracting professionals to primary and translational research in MN health. In addition, they suggested that institutions should create intramural resources to provide researchers with technical support in designing, carrying out and disseminating research, including resources to improve

  20. Worker health and safety and climate change in the Americas: issues and research needs.

    PubMed

    Kiefer, Max; Rodríguez-Guzmán, Julietta; Watson, Joanna; van Wendel de Joode, Berna; Mergler, Donna; da Silva, Agnes Soares

    2016-09-01

This report summarizes and discusses current knowledge on the impact that climate change can have on occupational safety and health (OSH), with a particular focus on the Americas. Worker safety and health issues are presented on topics related to specific stressors (e.g., temperature extremes), climate associated impacts (e.g., ice melt in the Arctic), and a health condition associated with climate change (chronic kidney disease of non-traditional etiology). The article discusses research needs, including hazards, surveillance, and risk assessment activities to better characterize and understand how OSH may be associated with climate change events. Also discussed are the actions that OSH professionals can take to ensure worker health and safety in the face of climate change.

  1. Asian Indian Culture in America: A Bibliography of Research Documents. A Research Report.

    ERIC Educational Resources Information Center

    Mohapatra, Urmila

    This bibliography has been prepared as a research tool for scholars who want to conduct studies about Asian Indian Americans. Only a few published works on Asian Indian Americans are available in book length; most are journal articles, monographs, research reports, dissertations and theses, newspaper articles, and unpublished manuscripts. Works…

  2. The IAEA Coordinated Research Program on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis: Description of the Benchmark Test Cases and Phases

    SciTech Connect

    Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov

    2012-10-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The uncertainties in HTR analysis tools are today typically assessed with sensitivity analysis, and a few important input uncertainties (typically identified through a PIRT process) are then varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracy of the coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. To address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project builds on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined: the prismatic type is represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. In the paper more detail on the benchmark cases, the different specific phases and tasks and the latest

  3. Closed-Loop Neuromorphic Benchmarks

    PubMed Central

    Stewart, Terrence C.; DeWolf, Travis; Kleinhans, Ashley; Eliasmith, Chris

    2015-01-01

    Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled. PMID:26696820

  4. Latin America.

    ERIC Educational Resources Information Center

    Soni, P. Sarita, Ed.

    1993-01-01

    This serial issue features 6 members of the Indiana University System faculty who have focused their research on Latin America, past and present. The first article, "A Literature of Their Own," highlights Darlene Sadlier's research on Brazilian women's fiction and poetry that has led to an interest in the interplay of Brazilian and…

  5. Benchmarking: The New Tool.

    ERIC Educational Resources Information Center

    Stralser, Steven

    1995-01-01

    This article suggests that benchmarking, the process of comparing one's own operation with the very best, can be used to make improvements in colleges and universities. Six steps are outlined: determining what to benchmark, forming a team, discovering who to benchmark, collecting and analyzing data, using the data to redesign one's own operation,…

  6. Benchmarking for Higher Education.

    ERIC Educational Resources Information Center

    Jackson, Norman, Ed.; Lund, Helen, Ed.

    The chapters in this collection explore the concept of benchmarking as it is being used and developed in higher education (HE). Case studies and reviews show how universities in the United Kingdom are using benchmarking to aid in self-regulation and self-improvement. The chapters are: (1) "Introduction to Benchmarking" (Norman Jackson…

  7. The Intra-Americas Sea low-level jet: overview and future research.

    PubMed

    Amador, Jorge A

    2008-12-01

    A relevant climate feature of the Intra-Americas Sea (IAS) is the low-level jet (IALLJ), which dominates the IAS circulation in both summer and winter; yet its nature, structure, interactions with mid-latitude and tropical phenomena, and role in regional weather and climate remain practically unknown. This paper updates current knowledge of the IALLJ and its contribution to IAS circulation-precipitation patterns, and presents recent findings based on the first in situ observations of the IALLJ, made during Phase 3 of the Experimento Climático en las Albercas de Agua Cálida (ECAC), an international field campaign to study IALLJ dynamics during July 2001. Nonhydrostatic fifth-generation Pennsylvania State University/National Center for Atmospheric Research Mesoscale Model (MM5) simulations were compared with observations and reanalysis. Large-scale circulation patterns of the IALLJ's northern hemisphere summer and winter components suggest that the trade winds, and thus the IALLJ, respond to land-ocean thermal contrasts during each continent's summer season. The IALLJ is a natural component of the American monsoons, a result of the continent's approximately north-south land distribution. During warm (cold) El Niño-Southern Oscillation phases, winds associated with the IALLJ core (IALLJC) are stronger (weaker) than normal, so precipitation anomalies are positive (negative) in the western Caribbean near Central America and negative (positive) in the central IAS. During ECAC Phase 3, strong surface winds associated with the IALLJ induced upwelling, cooling the sea surface temperature by 1-2 degrees C. The atmospheric mixed layer height reached 1 km near the surface wind maximum below the IALLJC. Observations indicate that primary water vapor advection takes place in a shallow layer between the IALLJC and the ocean surface. Latent heat flux peaked below the IALLJC. Neither the reanalysis nor MM5 captured the observed thermodynamic and kinematic IALLJ

  8. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-12-08

    Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation...security control over areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  9. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-01-15

    referendum on whether...Kirkuk (Tamim province) would join the Kurdish...areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These disputes were aggravated

  10. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2010-04-28

    Kirkuk (Tamim province) would join the Kurdish region (Article 140)...18: Maliki 8; INA 9; Iraqiyya 1 | Sulaymaniyah 17: Kurdistan Alliance 8; other Kurds 9 | Kirkuk (Tamim) 12: Iraqiyya 6; Kurdistan Alliance 6

  11. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-10-21

    Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation of Islam as “a main source” of...security control over areas inhabited by Kurds, and the Kurds’ claim that the province of Tamim (Kirkuk) be formally integrated into the KRG. These

  12. Biogeochemical Research Priorities for Sustainable Biofuel and Bioenergy Feedstock Production in the Americas

    NASA Astrophysics Data System (ADS)

    Gollany, Hero T.; Titus, Brian D.; Scott, D. Andrew; Asbjornsen, Heidi; Resh, Sigrid C.; Chimner, Rodney A.; Kaczmarek, Donald J.; Leite, Luiz F. C.; Ferreira, Ana C. C.; Rod, Kenton A.; Hilbert, Jorge; Galdos, Marcelo V.; Cisz, Michelle E.

    2015-12-01

    Rapid expansion in biomass production for biofuels and bioenergy in the Americas is increasing demand on the ecosystem resources required to sustain soil and site productivity. We review the current state of knowledge and highlight gaps in research on biogeochemical processes and ecosystem sustainability related to biomass production. Biomass production systems incrementally remove greater quantities of organic matter, which in turn affects soil organic matter and associated carbon and nutrient storage (and hence long-term soil productivity) and off-site impacts. While these consequences have been extensively studied for some crops and sites, the ongoing and impending impacts of biomass removal require management strategies for ensuring that soil properties and functions are sustained for all combinations of crops, soils, sites, climates, and management systems, and that impacts of biomass management (including off-site impacts) are environmentally acceptable. In a changing global environment, knowledge of cumulative impacts will also become increasingly important. Long-term experiments are essential for key crops, soils, and management systems because short-term results do not necessarily reflect long-term impacts, although improved modeling capability may help to predict these impacts. Identification and validation of soil sustainability indicators for both site prescriptions and spatial applications would better inform commercial and policy decisions. In an increasingly inter-related but constrained global context, researchers should engage across inter-disciplinary, inter-agency, and international lines to better ensure the long-term soil productivity across a range of scales, from site to landscape.

  13. Biogeochemical Research Priorities for Sustainable Biofuel and Bioenergy Feedstock Production in the Americas.

    PubMed

    Gollany, Hero T; Titus, Brian D; Scott, D Andrew; Asbjornsen, Heidi; Resh, Sigrid C; Chimner, Rodney A; Kaczmarek, Donald J; Leite, Luiz F C; Ferreira, Ana C C; Rod, Kenton A; Hilbert, Jorge; Galdos, Marcelo V; Cisz, Michelle E

    2015-12-01

    Rapid expansion in biomass production for biofuels and bioenergy in the Americas is increasing demand on the ecosystem resources required to sustain soil and site productivity. We review the current state of knowledge and highlight gaps in research on biogeochemical processes and ecosystem sustainability related to biomass production. Biomass production systems incrementally remove greater quantities of organic matter, which in turn affects soil organic matter and associated carbon and nutrient storage (and hence long-term soil productivity) and off-site impacts. While these consequences have been extensively studied for some crops and sites, the ongoing and impending impacts of biomass removal require management strategies for ensuring that soil properties and functions are sustained for all combinations of crops, soils, sites, climates, and management systems, and that impacts of biomass management (including off-site impacts) are environmentally acceptable. In a changing global environment, knowledge of cumulative impacts will also become increasingly important. Long-term experiments are essential for key crops, soils, and management systems because short-term results do not necessarily reflect long-term impacts, although improved modeling capability may help to predict these impacts. Identification and validation of soil sustainability indicators for both site prescriptions and spatial applications would better inform commercial and policy decisions. In an increasingly inter-related but constrained global context, researchers should engage across inter-disciplinary, inter-agency, and international lines to better ensure the long-term soil productivity across a range of scales, from site to landscape.

  14. Identifying Barriers and Practical Solutions to Conducting Site-Based Research in North America: Exploring Acute Heart Failure Trials As a Case Study.

    PubMed

    Ambrosy, Andrew P; Mentz, Robert J; Krishnamoorthy, Arun; Greene, Stephen J; Severance, Harry W

    2015-10-01

    Although the prognosis of ambulatory heart failure (HF) has improved dramatically, there have been few advances in the management of acute HF (AHF). Despite regional differences in patient characteristics, background therapy, and event rates, AHF clinical trial enrollment has transitioned from North America and Western Europe to Eastern Europe, South America, and Asia-Pacific, where regulatory burden and the cost of conducting research may be less prohibitive. It is unclear whether the results of clinical trials conducted outside of North America are generalizable to US patient populations. This article uses AHF as a paradigm and identifies barriers and practical solutions to successfully conducting site-based research in North America.

  15. A review of bioinformatics training applied to research in molecular medicine, agriculture and biodiversity in Costa Rica and Central America.

    PubMed

    Orozco, Allan; Morera, Jessica; Jiménez, Sergio; Boza, Ricardo

    2013-09-01

    Today, Bioinformatics has become a scientific discipline of great relevance for the Molecular Biosciences and for the Omics sciences in general. Although developed countries have made large strides in Bioinformatics education and research, in other regions, such as Central America, advances have occurred gradually and with little support from the Academia, at either the undergraduate or graduate level. To address this problem, the University of Costa Rica's Medical School, a regional leader in Bioinformatics in Central America, has been conducting a series of Bioinformatics workshops, seminars and courses, leading to the creation of the region's first Bioinformatics Master's Degree. The recent creation of the Central American Bioinformatics Network (BioCANET), together with the deployment of a supporting computational infrastructure (an HPC cluster) devoted to computing support for Molecular Biology in the region, provides a foundation stone for the development of Bioinformatics in the area. Central American bioinformaticians participated in the creation of, and co-founded, the Iberoamerican Bioinformatics Society (SOIBIO). In this article, we review the most recent education and research activities in Bioinformatics at several regional institutions. These activities have resulted in further advances for Molecular Medicine, Agriculture and Biodiversity research in Costa Rica and the rest of the Central American countries. Finally, we provide summary information on the first Central America Bioinformatics International Congress, as well as the creation of the first Bioinformatics company (Indromics Bioinformatics), a spin-off from the Academy in Central America and the Caribbean.

  16. An overview on the Space Weather in Latin America: from Space Research to Space Weather and its Forecast

    NASA Astrophysics Data System (ADS)

    De Nardin, C. M.; Gonzalez-Esparza, A.; Dasso, S.

    2015-12-01

    We present an overview of Space Weather in Latin America, highlighting the main findings from our review of recent advances in space science investigations in Latin America, focusing on solar-terrestrial interactions, modernly named space weather, which led to the creation of forecast centers. Despite recognizing advances in space research across Latin America, this review is restricted to the evolution observed in three countries (Argentina, Brazil and Mexico), because these countries have recently developed operational centers for monitoring space weather. The work starts by briefly mentioning the first groups that started space science in Latin America. The current status and research interests of these groups are then described, together with the most referenced works and the challenges for the next decade in solving space weather puzzles. A small inventory of the networks and collaborations being built is also described. The decision process for spinning off the space weather prediction centers from the space science groups is then reported, with an interpretation of the reasons and opportunities that led to it. Lastly, the constraints on progress in space weather monitoring, research, and forecasting are listed, with recommendations to overcome them.

  17. Assessment of a Merged Research and Education Program in Pacific Latin America

    NASA Astrophysics Data System (ADS)

    Bluth, G. J.; Gierke, J. S.; Gross, E. L.; Kieckhafer, P. B.; Rose, W. I.

    2006-12-01

    The ultimate goal of integrating research with education is to encourage cross-disciplinary, creative, and critical thinking in problem solving and to foster the ability to deal with uncertainty in analyzing problems and designing appropriate solutions. The National Science Foundation (NSF) is actively promoting these kinds of programs, particularly in conjunction with international collaboration. With NSF support, we are building a new educational system of applied research and engineering, using two existing programs at Michigan Tech: a Peace Corps/Master's International (PC/MI) program in Natural Hazards, which features a 2-year field assignment, and an "Enterprise" program for undergraduates, which gives teams of geoengineering students the opportunity to work for three years in a business-like setting to solve real-world problems. This project involves 2 post-doctoral researchers, 3-5 Ph.D. and Master's students, 5-10 PC/MI graduate students, and roughly 20 undergraduate students each year. The assessment of this project involves measuring participant perceptions of and motivations toward working in Pacific Latin America (Ecuador, El Salvador, Guatemala and Nicaragua), and tracking changes as the participants complete the academic and field aspects of this program. As the participants progress through their projects and Peace Corps assignments, we also gain insights into the type of academic preparation best suited for international geoscience collaboration, and it is not always a matter of technical knowledge. As a result, we are modifying existing courses in hazard communication, as well as developing a new course focusing on the geology of these regions, taught through weekly contributions by an international team of researchers. Other efforts involve multi-university, web-based courses in critical technical topics such as volcano seismology, which, because of their complex, cross-disciplinary nature, are difficult to sustain from a single institution.

  18. Benchmarking foreign electronics technologies

    SciTech Connect

    Bostian, C.W.; Hodges, D.A.; Leachman, R.C.; Sheridan, T.B.; Tsang, W.T.; White, R.M.

    1994-12-01

    This report has been drafted in response to a request from the Japanese Technology Evaluation Center's (JTEC) Panel on Benchmarking Select Technologies. Since April 1991, the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley has been engaged in a detailed study of quality, productivity, and competitiveness in semiconductor manufacturing worldwide. The program is a joint activity of the College of Engineering, the Haas School of Business, and the Berkeley Roundtable on the International Economy, under sponsorship of the Alfred P. Sloan Foundation, and with the cooperation of semiconductor producers from Asia, Europe and the United States. Professors David A. Hodges and Robert C. Leachman are the project's Co-Directors. The present report for JTEC is primarily based on data and analysis drawn from that continuing program. The CSM program is being conducted by faculty, graduate students and research staff from UC Berkeley's Schools of Engineering and Business, and Department of Economics. Many of the participating firms are represented on the program's Industry Advisory Board. The Board played an important role in defining the research agenda. A pilot study was conducted in 1991 with the cooperation of three semiconductor plants. The research plan and survey documents were thereby refined. The main phase of the CSM benchmarking study began in mid-1992 and will continue at least through 1997. Reports are presented on the manufacture of integrated circuits; data storage; wireless technology; human-machine interfaces; and optoelectronics. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  19. Responding to Agenda 2020: A technology vision and research agenda for America's forest, wood and paper industry

    SciTech Connect

    Lang, K.S.

    1995-03-01

    This document presents project summaries that demonstrate specific capabilities of interest to the forest, wood and paper industry in areas where PNL offers significant depth of experience or unique expertise. Though PNL possesses a wide range of capabilities across many of the technology-related issues identified by the industry, this document focuses on capabilities that meet the specific forest, wood and paper industry needs of the following research areas: forest inventory; human and environmental effects; energy and environmental tradeoffs; reduction of impacts of liquid effluent; solid wastes; removal of non-process elements in pulp and paper operations; life cycle assessment; and process measurement and controls. In addition, PNL can provide the forest, wood and paper industry with support in areas such as strategic and program planning, stakeholder communications and outreach, budget defense and quality metrics. These are services PNL provides directly to several programs within DOE.

  20. Public Relations in Accounting: A Benchmark Study.

    ERIC Educational Resources Information Center

    Pincus, J. David; Pincus, Karen V.

    1987-01-01

    Reports on a national study of one segment of the professional services market: the accounting profession. Benchmark data on CPA firms' attitudes toward and uses of public relations are presented and practical and theoretical/research issues are discussed. (JC)

  1. A Standard-Setting Study to Establish College Success Criteria to Inform the SAT® College and Career Readiness Benchmark. Research Report 2012-3

    ERIC Educational Resources Information Center

    Kobrin, Jennifer L.; Patterson, Brian F.; Wiley, Andrew; Mattern, Krista D.

    2012-01-01

    In 2011, the College Board released its SAT college and career readiness benchmark, which represents the level of academic preparedness associated with a high likelihood of college success and completion. The goal of this study, which was conducted in 2008, was to establish college success criteria to inform the development of the benchmark. The…

  2. Supporting the use of research evidence in the Americas through an online "one-stop shop": the EVIPNet VHL.

    PubMed

    Moat, K A; Lavis, J N

    2014-12-01

    Since the release of the 'World Report on Knowledge for Better Health' in 2004, a transformation has occurred in the field of health policy and systems research that has brought with it an increased emphasis on supporting the use of research evidence in the policy process. There has been an identified need for comprehensive online "one-stop shops" that facilitate the timely retrieval of research evidence in the policy process. This report highlights the EVIPNet VHL, a recently established project that was developed to meet the need for online repositories of relevant evidence to support knowledge translation efforts in the Americas, which can help contribute to strengthening health systems in the region.

  3. Thermal Performance Benchmarking (Presentation)

    SciTech Connect

    Moreno, G.

    2014-11-01

    This project will benchmark the thermal characteristics of automotive power electronics and electric motor thermal management systems. Recent vehicle systems will be benchmarked to establish baseline metrics, evaluate advantages and disadvantages of different thermal management systems, and identify areas of improvement to advance the state-of-the-art.

  4. Benchmarks in Management Training.

    ERIC Educational Resources Information Center

    Paddock, Susan C.

    1997-01-01

    Data were collected from 12 states with Certified Public Manager training programs to establish benchmarks. The 38 benchmarks were in the following areas: program leadership, stability of administrative/financial support, consistent management philosophy, administrative control, participant selection/support, accessibility, application of…

  5. Verification and validation benchmarks.

    SciTech Connect

    Oberkampf, William Louis; Trucano, Timothy Guy

    2007-02-01

    Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence fields, such as nuclear reactor safety, underground nuclear waste storage, and nuclear weapon safety. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with assessing the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing the computational results with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. For example, the field of nuclear reactor safety has not focused on code verification benchmarks, but it has placed great emphasis on developing validation benchmarks. Many of these validation benchmarks are closely related to the operations of actual reactors at near-safety-critical conditions, as opposed to being more fundamental-physics benchmarks. This paper presents recommendations for the effective design and use of code verification benchmarks based on manufactured solutions, classical analytical solutions, and highly accurate numerical solutions. In addition, this paper presents recommendations for the design and use of validation benchmarks, highlighting the careful design of building-block experiments, the estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation. It is argued that the understanding of predictive capability of a computational model is built on the level of

  6. Results Oriented Benchmarking: The Evolution of Benchmarking at NASA from Competitive Comparisons to World Class Space Partnerships

    NASA Technical Reports Server (NTRS)

    Bell, Michael A.

    1999-01-01

    Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized, early on, the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and to leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach to benchmarking integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results gleaned from world-class partners that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.

  7. Characteristics of Adults in the Hepatitis B Research Network in North America Reflect Their Country of Origin and HBV Genotype

    PubMed Central

    Ghany, Marc; Perrillo, Robert; Li, Ruosha; Belle, Steven H.; Janssen, Harry L.A.; Terrault, Norah A.; Shuhart, Margaret C.; Lau, Daryl T-Y; Kim, W. Ray; Fried, Michael W.; Sterling, Richard K.; Di Bisceglie, Adrian M.; Han, Steven-Huy B.; Ganova-Raeva, Lilia Milkova; Chang, Kyong-Mi; Suk-Fong Lok, Anna

    2014-01-01

    Background & Aims: Chronic hepatitis B virus (HBV) infection is an important cause of cirrhosis and hepatocellular carcinoma worldwide; populations that migrate to the US and Canada might be disproportionately affected. The Hepatitis B Research Network (HBRN) is a cooperative network of investigators from the United States and Canada, created to facilitate clinical, therapeutic, and translational research in adults and children with hepatitis B. We describe the structure of the network and baseline characteristics of adults with hepatitis B enrolled in the network. Methods: The HBRN collected data on clinical characteristics of 1625 adults with chronic HBV infection who are not receiving antiviral therapy from 21 clinical centers in North America. Results: Half of the subjects in the HBRN are male, and the mean age is 42 years; 72% are Asian, 15% are Black, and 11% are White, with 82% born outside of North America. The most common HBV genotype was B (39%); 74% of subjects were negative for the hepatitis B e antigen. The median serum level of HBV DNA when the study began was 3.6 log10 IU/mL; 68% of male subjects and 67% of female subjects had levels of alanine aminotransferase above the normal range. Conclusions: The HBRN cohort will be used to address important clinical and therapeutic questions for North Americans infected with chronic HBV and to guide health policies on HBV prevention and management in North America. PMID:25010003

  8. Building America Residential System Research Results. Achieving 30% Whole House Energy Savings Level in Hot-Dry and Mixed-Dry Climates

    SciTech Connect

    Anderson, R.; Hendron, R.; Eastment, M.; Jalalzadeh-Azar, A.

    2006-01-01

    This report summarizes Building America research results for the 30% energy savings level and demonstrates that lead builders can successfully provide 30% homes in the Hot-Dry/Mixed-Dry Climate Region on a cost-neutral basis.

  9. The State of Federal Research Funding in Genetics as Reflected by Members of the Genetics Society of America

    PubMed Central

    Rine, Jasper; Fagen, Adam P.

    2015-01-01

    Scientific progress runs on the intellect, curiosity, and passion of its practitioners fueled by the research dollars of its sponsors. The concern over research funding in biology in general and genetics in particular led us to survey the membership of the Genetics Society of America for information about the federal support of genetics at the level of individual principal investigators. The results paint a mosaic of circumstances—some good, others not so good—that describes some of our present challenges with sufficient detail to suggest useful steps that could address the challenges. PMID:26178966

  10. The State of Federal Research Funding in Genetics as Reflected by Members of the Genetics Society of America.

    PubMed

    Rine, Jasper; Fagen, Adam P

    2015-08-01

    Scientific progress runs on the intellect, curiosity, and passion of its practitioners fueled by the research dollars of its sponsors. The concern over research funding in biology in general and genetics in particular led us to survey the membership of the Genetics Society of America for information about the federal support of genetics at the level of individual principal investigators. The results paint a mosaic of circumstances-some good, others not so good-that describes some of our present challenges with sufficient detail to suggest useful steps that could address the challenges.

  11. Neutron Depth Profiling benchmarking and analysis of applications to lithium ion cell electrode and interfacial studies research

    NASA Astrophysics Data System (ADS)

    Whitney, Scott M.

    The role of the lithium ion cell is increasing with great intensity due to global concerns over fossil fuel use as well as the growing popularity of portable electronics. The dramatic increase in demand for these cells has been followed by a surge of research to optimize lithium ion cells in terms of safety, cost, and performance. The work shown in this dissertation sets out to distinguish the role of Neutron Depth Profiling (NDP) in the expanding research on lithium ion cells. Lithium ions play the primary role in the performance of lithium ion batteries. Moving from anode to cathode, and cathode to anode, the lithium ions are constantly being disturbed during the cell's operation. The ability to accurately determine the lithium's behavior within the electrodes of the cell after different operating conditions is a powerful tool to better understand the faults and advantages of particular electrode compositions and cell designs. NDP has this ability through the profiling of 6Li. This research first validates the ability of The University of Texas NDP (UT-NDP) facility to accurately profile operated lithium ion cell electrodes to a precision within 2% over 10 μm for concentration values, and with a precision for depth measurements within 77 nm. The validation of the UT-NDP system is performed by comparing UT-NDP profiles to those from the NIST-NDP system, from the Secondary Ion Mass Spectrometry (SIMS) technique, and from Monte Carlo N-Particle (MCNPX) code simulations. All of the comparisons confirmed that the UT-NDP facility is fully capable of providing accurate depth profiles of lithium ion cell electrodes in terms of depth, shape of distribution, and concentration. Following the validation studies, this research investigates three different areas of lithium ion cell research and provides analysis based on NDP results. The three areas of investigation include storage of cells at temperature, cycling of cells, and the charging of cells.

  12. Building America Industrialized Housing Partnership (BAIHP)

    SciTech Connect

    McIlvaine, Janet; Chandra, Subrato; Barkaszi, Stephen; Beal, David; Chasar, David; Colon, Carlos; Fonorow, Ken; Gordon, Andrew; Hoak, David; Hutchinson, Stephanie; Lubliner, Mike; Martin, Eric; McCluney, Ross; McGinley, Mark; McSorley, Mike; Moyer, Neil; Mullens, Mike; Parker, Danny; Sherwin, John; Vieira, Rob; Wichers, Susan

    2006-06-30

    This final report summarizes the work conducted by the Building America Industrialized Housing Partnership (www.baihp.org) for the period 9/1/99-6/30/06. BAIHP is led by the Florida Solar Energy Center of the University of Central Florida and focuses on factory built housing. In partnership with over 50 factory and site builders, work was performed in two main areas--research and technical assistance. In the research area--through site visits in over 75 problem homes, we discovered the prime causes of moisture problems in some manufactured homes and our industry partners adopted our solutions to nearly eliminate this vexing problem. Through testing conducted in over two dozen housing factories of six factory builders we documented the value of leak free duct design and construction which was embraced by our industry partners and implemented in all the thousands of homes they built. Through laboratory test facilities and measurements in real homes we documented the merits of 'cool roof' technologies and developed an innovative night sky radiative cooling concept currently being tested. We patented an energy efficient condenser fan design, documented energy efficient home retrofit strategies after hurricane damage, developed improved specifications for federal procurement for future temporary housing, compared the Building America benchmark to HERS Index and IECC 2006, developed a toolkit for improving the accuracy and speed of benchmark calculations, monitored the field performance of over a dozen prototype homes and initiated research on the effectiveness of occupancy feedback in reducing household energy use. In the technical assistance area we provided systems engineering analysis, conducted training, testing and commissioning that have resulted in over 128,000 factory built and over 5,000 site built homes which are saving their owners over $17,000,000 annually in energy bills. These include homes built by Palm Harbor Homes, Fleetwood, Southern Energy Homes

  13. Benchmarking expert system tools

    NASA Technical Reports Server (NTRS)

    Riley, Gary

    1988-01-01

    As part of its evaluation of new technologies, the Artificial Intelligence Section of the Mission Planning and Analysis Div. at NASA-Johnson has made timing tests of several expert system building tools. Among the production systems tested were Automated Reasoning Tool, several versions of OPS5, and CLIPS (C Language Integrated Production System), an expert system builder developed by the AI section. Also included in the test were a Zetalisp version of the benchmark along with four versions of the benchmark written in Knowledge Engineering Environment, an object oriented, frame based expert system tool. The benchmarks used for testing are studied.

  14. Toxicological Benchmarks for Wildlife

    SciTech Connect

    Sample, B.E. Opresko, D.M. Suter, G.W.

    1993-01-01

    Ecological risks of environmental contaminants are evaluated by using a two-tiered process. In the first tier, a screening assessment is performed where concentrations of contaminants in the environment are compared to no observed adverse effects level (NOAEL)-based toxicological benchmarks. These benchmarks represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) presumed to be nonhazardous to the biota. While exceedance of these benchmarks does not indicate any particular level or type of risk, concentrations below the benchmarks should not result in significant effects. In practice, when contaminant concentrations in food or water resources are less than these toxicological benchmarks, the contaminants may be excluded from further consideration. However, if the concentration of a contaminant exceeds a benchmark, that contaminant should be retained as a contaminant of potential concern (COPC) and investigated further. The second tier in ecological risk assessment, the baseline ecological risk assessment, may use toxicological benchmarks as part of a weight-of-evidence approach (Suter 1993). Under this approach, toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. Other sources of evidence include media toxicity tests, surveys of biota (abundance and diversity), measures of contaminant body burdens, and biomarkers. This report presents NOAEL- and lowest observed adverse effects level (LOAEL)-based toxicological benchmarks for assessment of effects of 85 chemicals on 9 representative mammalian wildlife species (short-tailed shrew, little brown bat, meadow vole, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) or 11 avian wildlife species (American robin, rough-winged swallow, American woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, barn owl, Cooper's hawk, and red-tailed hawk

  15. The Flora North America Generalized System for Describing the Morphology of Organisms. Research Report No. 4.

    ERIC Educational Resources Information Center

    Shetler, Stanwyn G.

    The file organization for the computerized Flora North America (FNA) data bank is described. A four level character hierarchy allows subdivision of any flora description into as many as four levels in order to specify plant character precisely. Terms at any one level will not necessarily be parallel in status. Both PLANTS and LEAVES serve as…

  16. The Diverse Social and Economic Structure of Nonmetropolitan America. Rural Development Research Report No. 49.

    ERIC Educational Resources Information Center

    Bender, Lloyd D.; And Others

    Effective rural development planning depends on facts and analysis based, not on rural averages, but on the diverse social and economic structure of rural America. Programs tailored to particular types of rural economies may be more effective than generalized programs. Because of their unique characteristics, government policies and economic…

  17. DOE Commercial Building Benchmark Models: Preprint

    SciTech Connect

    Torcelini, P.; Deru, M.; Griffith, B.; Benne, K.; Halverson, M.; Winiarski, D.; Crawley, D. B.

    2008-07-01

    To provide a consistent baseline of comparison and save time conducting such simulations, the U.S. Department of Energy (DOE) has developed a set of standard benchmark building models. This paper will provide an executive summary overview of these benchmark buildings, and how they can save building analysts valuable time. Fully documented and implemented to use with the EnergyPlus energy simulation program, the benchmark models are publicly available and new versions will be created to maintain compatibility with new releases of EnergyPlus. The benchmark buildings will form the basis for research on specific building technologies, energy code development, appliance standards, and measurement of progress toward DOE energy goals. Having a common starting point allows us to better share and compare research results and move forward to make more energy efficient buildings.

  18. Gnss Geodetic Monitoring as Support of Geodynamics Research in Colombia, South America

    NASA Astrophysics Data System (ADS)

    Mora-Paez, H.; Acero-Patino, N.; Rodriguez-Zuluaga, J. S.; Diederix, H.; Bohorquez-Orozco, O. P.; Martinez-Diaz, G. P.; Diaz-Mila, F.; Giraldo-Londono, L. S.; Cardozo-Giraldo, S.; Vasquez-Ospina, A. F.; Lizarazo, S. C.

    2013-05-01

    To support the geodynamics research at the northwestern corner of South America, GEORED, the acronym for "Geodesia: Red de Estudios de Deformación", has been adopted for the Project "Implementation of the National GNSS Network for Geodynamics" carried out by the Colombian Geological Survey (SGC), formerly INGEOMINAS. Beginning in 2007, discussions within the GEORED group led to a master plan for the distribution of the base permanent GPS/GNSS station array and specific areas of interest for campaign site construction. The use of previously identified active faults as preferred structures along which stresses are transferred through the deformational area led to the idea of segmentation of the North Andes within Colombia into 20 tectonic sub-blocks. Each of the 20 sub-blocks is expected to have at least three to four permanent GPS/GNSS stations within the block, along with construction of campaign sites along the boundaries. Currently, the GEORED Network is managing 46 continuously operating stations, including: 40 GEORED GPS/GNSS continuously operating stations; 4 GNSS continuously operating stations provided by the COCONet (Continuously Operating Caribbean GPS Observational Network) Project; the Bogotá IGS GPS station (BOGT), installed in 1994 under the agreement between JPL-NASA and the SGC; and the San Andres Island station, installed in 2007 under the MOU between UCAR and the SGC. In addition to the permanent installations, more than 230 GPS campaign sites have been constructed and are being occupied once per year. The Authority of the Panama Canal and the Escuela Politecnica de Quito have also provided data from 4 and 5 GPS/GNSS stations, respectively. The GPS data are processed using the GIPSY-OASIS II software, and the GPS time series of daily station positions give fundamental information for both regional and local geodynamics studies. Until now, we have obtained 100 quality vector velocities for Colombia, 23 of them as part of the permanent network. The GPS/GNSS stations

  19. Benchmarking Attosecond Physics with Atomic Hydrogen

    DTIC Science & Technology

    2015-05-25

    Final report, covering 12 Mar 12 – 11 Mar 15, under contract FA2386-12-1-4025. The research team obtained uniquely reliable reference data on atomic interactions with intense few-cycle laser pulses.

  20. Diagnostic Algorithm Benchmarking

    NASA Technical Reports Server (NTRS)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  1. TWODANT benchmark. Progress report

    SciTech Connect

    Lee, Sung

    1994-01-11

    TWODANT (Two-Dimensional, Diffusion-Accelerated, Neutral-Particle Transport) code has been benchmarked against 6 critical experiments (Jezebel plutonium critical assembly) and their k-effective values compared with those of the KENO and MCNP codes.

  2. Benchmarking TENDL-2012

    NASA Astrophysics Data System (ADS)

    van der Marck, S. C.; Koning, A. J.; Rochman, D. A.

    2014-04-01

    The new release of the TENDL nuclear data library, TENDL-2012, was tested by performing many benchmark calculations. Close to 2000 criticality safety benchmark cases were used, as well as many shielding benchmark cases. All the runs could be compared with similar runs based on the nuclear data libraries ENDF/B-VII.1, JENDL-4.0, and JEFF-3.1.1, respectively. Many of the criticality safety results obtained with TENDL-2012 are close to those for the other libraries. In particular, the results for the thermal spectrum cases with LEU fuel are good. Nevertheless, there is a fair number of cases for which the TENDL-2012 results are not as good as those of the other libraries. Notably, a number of fast spectrum cases with reflectors are not well described. The results for the shielding benchmarks are mostly similar to those for the other libraries. Some isolated cases with differences are identified.
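    The library-to-library comparisons described above come down to calculated-over-experimental (C/E) ratios of k-effective per benchmark case. A minimal sketch of that bookkeeping, with invented case names and k-effective values (not actual TENDL results):

```python
# Sketch of a criticality-benchmark comparison: for each benchmark case,
# divide the calculated k-effective by the experimental value (C/E) and
# flag cases outside a tolerance. All values below are illustrative.

cases = {
    # case name: (experimental k-eff, calculated k-eff) -- hypothetical
    "leu-thermal-001": (1.0000, 0.9992),
    "heu-fast-012": (1.0000, 1.0063),
    "pu-fast-003": (1.0000, 0.9937),
}

tolerance_pcm = 500  # 1 pcm = 1e-5 in k-eff

flagged = []
for name, (expected_k, calculated_k) in cases.items():
    ce = calculated_k / expected_k
    deviation_pcm = (ce - 1.0) * 1e5
    if abs(deviation_pcm) > tolerance_pcm:
        flagged.append(name)

print(flagged)  # cases deviating by more than the tolerance
```

In practice such a comparison is repeated per library, and the flagged cases (here, the fast-spectrum ones) are the candidates for closer inspection.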

  3. Benchmarking in Foodservice Operations.

    DTIC Science & Technology

    2007-11-02

    Benchmarking studies lasted from nine to twelve months, and could extend beyond that time for numerous reasons. Benchmarking was not simply data comparison, a fad, a means for reducing resources, a quick-fix program, or industrial tourism; it was a complete process.

  4. Synopsis of the Review on Space Weather in Latin America: Space Science, Research Networks and Space Weather Center

    NASA Astrophysics Data System (ADS)

    Denardini, Clezio Marcos; Dasso, Sergio; Gonzalez-Esparza, Americo

    2016-07-01

    The present work is a synopsis of a three-part review on space weather in Latin America. The first paper (part 1) covers the evolution of several Latin American institutions investing in space science since the 1960s, focusing on the solar-terrestrial interactions, which today are commonly called space weather. Despite recognizing advances in space research in all of Latin America, part 1 is restricted to the development observed in three countries in particular (Argentina, Brazil and Mexico), because these countries have recently developed operational centers for monitoring space weather. The review starts with a brief summary of the first groups to start working with space science in Latin America. This first part of the review closes with the current status and the research interests of these groups, which are described in relation to the most significant works and challenges of the next decade, to aid in solving open issues in space weather. The second paper (part 2) summarizes scientific challenges in space weather research that are considered to be open scientific questions and how they are being addressed in terms of instrumentation by the international community, including the Latin American groups. We also provide an inventory of the networks and collaborations being constructed in Latin America, including details on the data processing, capabilities and a basic description of the resulting variables. These instrumental networks currently used for space science research are gradually being incorporated into the space weather monitoring data pipelines, as their data provide key variables for monitoring and forecasting space weather, which allow these centers to monitor space weather and issue warnings and alerts. 
The third paper (part 3) presents the decision process for the spinning off of space weather prediction centers from space science groups with our interpretation of the reason/opportunities that leads to

  5. Universal coverage with rising healthcare costs; health outcomes research value in decision-making in Latin America.

    PubMed

    Augustovski, Federico; García Martí, Sebastián; Pichon Riviere, Andrés; Rubinstein, Adolfo

    2011-12-01

    This is a short summary of the two plenary sessions held at the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Latin American Conference in Mexico City (Mexico) in September 2011, with 477 registrants and 235 accepted abstract submissions. The first asked how attainable universal coverage is in the face of rising costs of health technologies; and the second considered the value of health outcomes research to decision-makers. This conference provided a scientific forum where researchers, health technology producers and public and private decision-makers shared their experiences and research in the field of health economic evaluations, health technology assessment and patient-reported outcomes/health-related quality of life studies. It was the third biennial regional meeting in Latin America, the next one being in Buenos Aires (Argentina) in 2013.

  6. Thermal Performance Benchmarking: Annual Report

    SciTech Connect

    Moreno, Gilbert

    2016-04-08

    The goal for this project is to thoroughly characterize the performance of state-of-the-art (SOA) automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: Evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY15, the 2012 Nissan LEAF power electronics and electric motor thermal management systems were benchmarked. Testing of the 2014 Honda Accord Hybrid power electronics thermal management system started in FY15; however, due to time constraints it was not possible to include results for this system in this report. The focus of this project is to benchmark the thermal aspects of the systems. ORNL's benchmarking of electric and hybrid electric vehicle technology reports provide detailed descriptions of the electrical and packaging aspects of these automotive systems.

  7. Benchmarking local healthcare-associated infections: available benchmarks and interpretation challenges.

    PubMed

    El-Saed, Aiman; Balkhy, Hanan H; Weber, David J

    2013-10-01

    Growing numbers of healthcare facilities are routinely collecting standardized data on healthcare-associated infection (HAI), which can be used not only to track internal performance but also to compare local data to national and international benchmarks. Benchmarking overall (crude) HAI surveillance metrics without accounting or adjusting for potential confounders can result in misleading conclusions. Methods commonly used to provide risk-adjusted metrics include multivariate logistic regression analysis, stratification, indirect standardization, and restrictions. The characteristics of recognized benchmarks worldwide, including their advantages and limitations, are described. The choice of the right benchmark for the data from the Gulf Cooperation Council (GCC) states is challenging. The chosen benchmark should have similar data collection and presentation methods. Additionally, differences in surveillance environments, including regulations, should be taken into consideration when considering such a benchmark. The GCC center for infection control took some steps to unify HAI surveillance systems in the region. GCC hospitals still need to overcome legislative and logistic difficulties in sharing data to create their own benchmark. The availability of a regional GCC benchmark may better enable health care workers and researchers to obtain more accurate and realistic comparisons.
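    The indirect standardization mentioned above can be sketched as a standardized infection ratio (SIR): observed infections divided by the number expected under a benchmark's stratum-specific rates. The strata, rates, and counts below are hypothetical, not real surveillance data:

```python
# Indirect standardization sketch: compare a local unit's HAI count to
# the count expected under a benchmark's stratum-specific rates.
# All numbers are illustrative, not real surveillance data.

# Benchmark rates: infections per 1000 device-days, by ICU type (stratum)
benchmark_rates = {"medical": 2.0, "surgical": 3.0, "pediatric": 1.5}

# Local exposure: device-days observed in each stratum
local_device_days = {"medical": 4000, "surgical": 2000, "pediatric": 1000}

observed_infections = 12

# Expected count = sum over strata of (benchmark rate x local exposure)
expected = sum(
    benchmark_rates[s] * local_device_days[s] / 1000.0
    for s in local_device_days
)

sir = observed_infections / expected
print(f"expected={expected:.1f}, SIR={sir:.2f}")
```

An SIR below 1 suggests fewer infections than the benchmark predicts for this case mix; crude (unstratified) rate comparisons can reach the opposite conclusion when the stratum mix differs.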

  8. PNNL Information Technology Benchmarking

    SciTech Connect

    DD Hostetler

    1999-09-08

    Benchmarking is a methodology for searching out industry best practices that lead to superior performance. It involves exchanging information, not just with any organization, but with organizations known to be the best within PNNL, in industry, or in dissimilar industries with equivalent functions. It is used as a continuous improvement tool for business and technical processes, products, and services. Information technology--comprising all computer and electronic communication products and services--underpins the development and/or delivery of many PNNL products and services. This document describes the Pacific Northwest National Laboratory's (PNNL's) approach to information technology (IT) benchmarking. The purpose is to engage other organizations in the collaborative process of benchmarking in order to improve the value of IT services provided to customers. The document's intended audience consists of other US Department of Energy (DOE) national laboratories and their IT staff. Although the individual participants must define the scope of collaborative benchmarking, an outline of IT service areas for possible benchmarking is described.

  9. Translational benchmark risk analysis

    PubMed Central

    Piegorsch, Walter W.

    2010-01-01

    Translational development – in the sense of translating a mature methodology from one area of application to another, evolving area – is discussed for the use of benchmark doses in quantitative risk assessment. Illustrations are presented with traditional applications of the benchmark paradigm in biology and toxicology, and also with risk endpoints that differ from traditional toxicological archetypes. It is seen that the benchmark approach can apply to a diverse spectrum of risk management settings. This suggests a promising future for this important risk-analytic tool. Extensions of the method to a wider variety of applications represent a significant opportunity for enhancing environmental, biomedical, industrial, and socio-economic risk assessments. PMID:20953283
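    The benchmark-dose paradigm discussed above fixes a benchmark response (BMR, commonly 10% extra risk over background) and solves the fitted dose-response model for the corresponding dose. For the simple one-hit model this has a closed form; the slope value below is hypothetical:

```python
# Benchmark-dose sketch for the one-hit model
#   P(d) = p0 + (1 - p0) * (1 - exp(-b * d)),
# where extra risk over background is 1 - exp(-b * d). The benchmark
# dose (BMD) at a benchmark response (BMR) therefore solves
#   exp(-b * BMD) = 1 - BMR.
# The fitted slope below is invented for illustration.
import math

def bmd_one_hit(b, bmr=0.10):
    """Dose at which extra risk over background equals `bmr`."""
    return -math.log(1.0 - bmr) / b

b = 0.02  # hypothetical fitted slope, per mg/kg-day
print(f"BMD10 = {bmd_one_hit(b):.2f} mg/kg-day")
```

Richer models (multistage, log-logistic) require numerical root-finding instead of a closed form, but the definition of the BMD is the same.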

  10. Harnessing person-generated health data to accelerate patient-centered outcomes research: the Crohn's and Colitis Foundation of America PCORnet Patient Powered Research Network (CCFA Partners).

    PubMed

    Chung, Arlene E; Sandler, Robert S; Long, Millie D; Ahrens, Sean; Burris, Jessica L; Martin, Christopher F; Anton, Kristen; Robb, Amber; Caruso, Thomas P; Jaeger, Elizabeth L; Chen, Wenli; Clark, Marshall; Myers, Kelly; Dobes, Angela; Kappelman, Michael D

    2016-05-01

    The Crohn's and Colitis Foundation of America Partners Patient-Powered Research Network (PPRN) seeks to advance and accelerate comparative effectiveness and translational research in inflammatory bowel diseases (IBDs). Our IBD-focused PCORnet PPRN has been designed to overcome the major obstacles that have limited patient-centered outcomes research in IBD by providing the technical infrastructure, patient governance, and patient-driven functionality needed to: 1) identify, prioritize, and undertake a patient-centered research agenda through sharing person-generated health data; 2) develop and test patient and provider-focused tools that utilize individual patient data to improve health behaviors and inform health care decisions and, ultimately, outcomes; and 3) rapidly disseminate new knowledge to patients, enabling them to improve their health. The Crohn's and Colitis Foundation of America Partners PPRN has fostered the development of a community of citizen scientists in IBD; created a portal that will recruit, retain, and engage members and encourage partnerships with external scientists; and produced an efficient infrastructure for identifying, screening, and contacting network members for participation in research.

  11. Benchmarking infrastructure for mutation text mining

    PubMed Central

    2014-01-01

    Background: Experimental research on the automatic extraction of information about mutations from texts is greatly hindered by the lack of consensus evaluation infrastructure for the testing and benchmarking of mutation text mining systems. Results: We propose a community-oriented annotation and benchmarking infrastructure to support development, testing, benchmarking, and comparison of mutation text mining systems. The design is based on semantic standards, where RDF is used to represent annotations, an OWL ontology provides an extensible schema for the data, and SPARQL is used to compute various performance metrics, so that in many cases no programming is needed to analyze results from a text mining system. While large benchmark corpora for biological entity and relation extraction are focused mostly on genes, proteins, diseases, and species, our benchmarking infrastructure fills the gap for mutation information. The core infrastructure comprises (1) an ontology for modelling annotations, (2) SPARQL queries for computing performance metrics, and (3) a sizeable collection of manually curated documents that can support mutation grounding and mutation impact extraction experiments. Conclusion: We have developed the principal infrastructure for the benchmarking of mutation text mining tasks. The use of RDF and OWL as the representation for corpora ensures extensibility. The infrastructure is suitable for out-of-the-box use in several important scenarios and is ready, in its current state, for initial community adoption. PMID:24568600
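    The performance metrics that the infrastructure computes via SPARQL reduce to precision, recall, and F1 over gold-standard versus system-extracted mutation mentions. A minimal Python analogue, with invented annotations:

```python
# Sketch of the benchmark metrics: compare system-extracted mutation
# mentions against a gold-standard set. Mentions are (doc_id, mutation)
# pairs; the examples below are invented.

gold = {("doc1", "p.V600E"), ("doc1", "c.35G>A"), ("doc2", "p.R175H")}
system = {("doc1", "p.V600E"), ("doc2", "p.R175H"), ("doc2", "p.G12D")}

tp = len(gold & system)          # true positives: correct extractions
precision = tp / len(system)     # fraction of system output that is right
recall = tp / len(gold)          # fraction of gold mentions recovered
f1 = 2 * precision * recall / (precision + recall)

print(f"P={precision:.2f} R={recall:.2f} F1={f1:.2f}")
```

The RDF/SPARQL design in the paper expresses the same set intersection as a query over annotation graphs, which is what lets users avoid writing code like this at all.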

  12. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, J.; Abramowitz, G.; Bacour, C.; Blyth, E.; Carvalhais, N.; Ciais, Philippe; Dalmonech, D.; Fisher, J.B.; Fisher, R.; Friedlingstein, P.; Hibbard, Kathleen A.; Hoffman, F. M.; Huntzinger, Deborah; Jones, C.; Koven, C.; Lawrence, David M.; Li, D.J.; Mahecha, M.; Niu, S.L.; Norby, Richard J.; Piao, S.L.; Qi, X.; Peylin, P.; Prentice, I.C.; Riley, William; Reichstein, M.; Schwalm, C.; Wang, Y.; Xia, J. Y.; Zaehle, S.; Zhou, X. H.

    2012-10-09

    Land models, which have been developed by the modeling community in the past few decades to predict future states of ecosystems and climate, have to be critically evaluated for their performance skills of simulating ecosystem responses and feedback to climate change. Benchmarking is an emerging procedure to measure performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluation of land model performances and, meanwhile, highlights major challenges at this infant stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references to test model performance, (3) metrics to measure and compare performance skills among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate exchange of water, energy, carbon and sometimes other trace gases between the atmosphere and land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks to effectively evaluate land model performance. The second challenge is to develop metrics of measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data–model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify clues of weak model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. 
The near-future research effort should be on development of a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.
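    A scoring system of the kind proposed, combining normalized data-model mismatches across processes, might be sketched as follows (the variables, weights, and values are hypothetical, not from the paper):

```python
# Sketch of a benchmark scoring system: normalize each variable's
# model-observation RMSE by the observed variability, map it to a 0-1
# skill score, and combine scores with weights. Numbers are illustrative.
import math

def rmse(model, obs):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs))

def skill(model, obs):
    # 1 = perfect match; 0 = error at least as large as observed std dev
    mean_obs = sum(obs) / len(obs)
    std_obs = math.sqrt(sum((o - mean_obs) ** 2 for o in obs) / len(obs))
    return max(0.0, 1.0 - rmse(model, obs) / std_obs)

# Hypothetical annual-mean series per variable: (model, observation)
data = {
    "gpp": ([110, 120, 130], [100, 125, 135]),          # carbon flux
    "latent_heat": ([60, 80, 70], [65, 78, 72]),        # energy flux
}
weights = {"gpp": 0.6, "latent_heat": 0.4}

score = sum(weights[v] * skill(*data[v]) for v in data)
print(f"overall score: {score:.2f}")
```

The framework's "a priori thresholds" would then be cutoffs on such per-variable skills, while the weighted sum corresponds to its combined scoring across processes and scales.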

  13. A framework for benchmarking land models

    SciTech Connect

    Luo, Yiqi; Randerson, James T.; Hoffman, Forrest; Norby, Richard J

    2012-01-01

Land models, developed by the modeling community over the past few decades to predict future states of ecosystems and climate, must be critically evaluated for their skill in simulating ecosystem responses and feedbacks to climate change. Benchmarking is an emerging procedure for measuring the performance of models against a set of defined standards. This paper proposes a benchmarking framework for evaluating land model performance and highlights major challenges at this early stage of benchmark analysis. The framework includes (1) targeted aspects of model performance to be evaluated, (2) a set of benchmarks as defined references against which to test model performance, (3) metrics to measure and compare performance among models so as to identify model strengths and deficiencies, and (4) model improvement. Land models are required to simulate the exchange of water, energy, carbon, and sometimes other trace gases between the atmosphere and the land surface, and should be evaluated for their simulations of biophysical processes, biogeochemical cycles, and vegetation dynamics in response to climate change across broad temporal and spatial scales. Thus, one major challenge is to select and define a limited number of benchmarks that effectively evaluate land model performance. A second challenge is to develop metrics for measuring mismatches between models and benchmarks. The metrics may include (1) a priori thresholds of acceptable model performance and (2) a scoring system to combine data-model mismatches for various processes at different temporal and spatial scales. The benchmark analyses should identify weaknesses in model performance to guide future development, thus enabling improved predictions of future states of ecosystems and climate. The near-future research effort should focus on developing a set of widely acceptable benchmarks that can be used to objectively, effectively, and reliably evaluate fundamental properties of land models.
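The scoring system the abstract mentions, combining data-model mismatches across processes into a single measure, can be sketched as a weighted aggregate of normalized errors. The variable names, weights, and example numbers below are illustrative assumptions, not part of the proposed framework.

```python
import math

def nrmse(model, obs):
    """Normalized root-mean-square error between model output and observations."""
    n = len(obs)
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    return rmse / abs(sum(obs) / n)

def benchmark_score(mismatches, weights):
    """Combine per-process mismatch scores (0 = perfect agreement) into one
    weighted overall score, as in a simple benchmark scoring system."""
    total_w = sum(weights[k] for k in mismatches)
    return sum(weights[k] * mismatches[k] for k in mismatches) / total_w

# Hypothetical example: two fluxes simulated by a land model vs. observations.
obs_gpp, mod_gpp = [2.0, 5.0, 8.0, 6.0], [2.5, 4.5, 7.0, 6.5]
obs_le, mod_le = [40.0, 80.0, 120.0, 90.0], [50.0, 70.0, 110.0, 100.0]

mismatches = {"gpp": nrmse(mod_gpp, obs_gpp), "le": nrmse(mod_le, obs_le)}
score = benchmark_score(mismatches, {"gpp": 0.6, "le": 0.4})
print(round(score, 3))
```

A real framework would also apply the a priori thresholds the paper describes, flagging any process whose mismatch exceeds its acceptance level before scores are combined.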

  14. Mid-Year Retention Indicators Report for Two-Year and Four-Year, Public and Private Institutions. Benchmark Research Study Conducted Fall 2010

    ERIC Educational Resources Information Center

    Noel-Levitz, Inc, 2011

    2011-01-01

    To assist campuses with accurately forecasting student retention, and to help with increasing it, this report identifies early indicators of students' progress toward completing a degree and establishes benchmarks that campuses can use to evaluate their performance. The report is based on a Web-based survey of college and university officials in…

  15. Building America Case Study: Ground Source Heat Pump Research, TaC Studios Residence, Atlanta, Georgia (Fact Sheet)

    SciTech Connect

    Not Available

    2014-09-01

As part of the NAHB Research Center Industry Partnership, Southface partnered with TaC Studios, an Atlanta-based architecture firm specializing in residential and light commercial design, on the construction of a new test home in Atlanta, GA, in the mixed-humid climate. This home serves as a residence and home office for the firm's owners, as well as a demonstration of their design approach to potential and current clients. Southface believes the home demonstrates current best practices for the mixed-humid climate, including a building envelope featuring advanced air sealing details and low-density spray foam insulation, glazing that exceeds ENERGY STAR requirements, and a high-performance heating and cooling system. Construction quality and execution were a high priority for TaC Studios and were ensured by a third-party review process. Post-construction testing showed that the project met its stated goals for envelope performance, with an air infiltration rate of 2.15 ACH50. The homeowners wished to further validate whole-house energy savings through the project's involvement with Building America and this long-term monitoring effort. As a Building America test home, this home was evaluated to detail whole-house energy use, end-use loads, and the efficiency and operation of the ground source heat pump and associated systems. Because the home includes many non-typical end-use loads, including a home office, pool, landscape water feature, and other luxury features not accounted for in Building America modeling tools, these end uses were separately monitored to determine their impact on overall energy consumption.

  16. Mask Waves Benchmark

    DTIC Science & Technology

    2007-10-01

[No abstract available; the record text consists of list-of-figures fragments. Recoverable figure titles include: measured frequency vs. set frequency for all data; Benchmark Probe #1 wave amplitude variation; wave amplitude by probe, blower speed, and lip setting for 0.768 Hz on the short bank; and coefficient of variation as a percentage for all conditions for the long bank and bridge.]

  17. Benchmarks: WICHE Region 2012

    ERIC Educational Resources Information Center

    Western Interstate Commission for Higher Education, 2013

    2013-01-01

    Benchmarks: WICHE Region 2012 presents information on the West's progress in improving access to, success in, and financing of higher education. The information is updated annually to monitor change over time and encourage its use as a tool for informed discussion in policy and education communities. To establish a general context for the…

  18. Python/Lua Benchmarks

    SciTech Connect

    Busby, L.

    2014-08-01

    This is an adaptation of the pre-existing Scimark benchmark code to a variety of Python and Lua implementations. It also measures performance of the Fparser expression parser and C and C++ code on a variety of simple scientific expressions.
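In the same spirit as the Scimark/Fparser comparison, a simple scientific expression can be micro-benchmarked in plain Python. The expression, repeat count, and the string-versus-precompiled contrast below are illustrative choices, not taken from the benchmark itself.

```python
import timeit

# One simple scientific expression, evaluated two ways: re-parsed from a
# string on every call, versus parsed once with compile() and reused.
expr = "2.0 * x ** 2 + 3.0 * x + 1.0"
compiled = compile(expr, "<expr>", "eval")

def eval_string(x):
    return eval(expr)        # re-parses the expression on every call

def eval_compiled(x):
    return eval(compiled)    # evaluates the pre-parsed code object

n = 10_000
t_string = timeit.timeit(lambda: eval_string(1.5), number=n)
t_compiled = timeit.timeit(lambda: eval_compiled(1.5), number=n)
print(f"string eval: {t_string:.4f}s, precompiled: {t_compiled:.4f}s")
```

Expression-parser benchmarks like Fparser typically isolate exactly this parse-once/evaluate-many pattern, so per-evaluation cost dominates the measurement.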

  19. Monte Carlo Benchmark

    SciTech Connect

    2010-10-20

The "Monte Carlo Benchmark" (MCB) is intended to model the computational performance of Monte Carlo algorithms on parallel architectures. It models the solution of a simple heuristic transport equation using a Monte Carlo technique. The MCB employs typical features of Monte Carlo algorithms such as particle creation, particle tracking, tallying particle information, and particle destruction. Particles are also traded among processors using MPI calls.
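The Monte Carlo pattern the record lists (particle creation, tracking, tallying, destruction) can be illustrated with a minimal single-process sketch. The 1-D absorbing-slab geometry and cross-section here are assumptions chosen for illustration, and the MPI particle trading is omitted.

```python
import math
import random

def mc_transmission(n_particles, sigma_t, thickness, seed=1):
    """Estimate the fraction of particles crossing a purely absorbing 1-D slab.

    Each history follows the pattern named in the MCB description:
    creation, tracking (sample a free-flight distance), tallying, destruction.
    """
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_particles):                         # particle creation
        distance = -math.log(rng.random()) / sigma_t     # tracking: sample path
        if distance > thickness:                         # tallying
            transmitted += 1
        # particle destruction: the history ends either way
    return transmitted / n_particles

# Analytic answer for comparison: exp(-sigma_t * thickness)
est = mc_transmission(100_000, sigma_t=1.0, thickness=2.0)
print(est, math.exp(-2.0))
```

A parallel version would distribute histories across ranks and reduce the tallies, which is the communication behavior the MCB exercises with MPI.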

  20. Benchmarking the World's Best

    ERIC Educational Resources Information Center

    Tucker, Marc S.

    2012-01-01

    A century ago, the United States was a world leader in industrial benchmarking. However, after World War II, once no one could compete with the U.S., it became complacent. Many industrialized countries now have higher student achievement and more equitable and efficient education systems. A higher proportion of young people in their workforces…

  1. HPCS HPCchallenge Benchmark Suite

    DTIC Science & Technology

    2007-11-02

Measured HPCchallenge Benchmark performance on various HPC architectures, from Cray X1s to Beowulf clusters, is reported in the presentation and paper, with updated results available at http://icl.cs.utk.edu/hpcc/hpcc_results.cgi. [Remainder of record text is fragmentary.]

  2. Striving for Excellence. The International Conference of the Learning Disabilities Association of America (Atlanta, Georgia, March 4-7, 1992). Research Poster Session Abstract. Volume 1.

    ERIC Educational Resources Information Center

    Russell, Steven C., Comp.

    Eleven abstracts of research projects related to individuals with learning disabilities are compiled in this booklet. The research projects were presented in poster sessions at the March 1992 International Conference of the Learning Disabilities Association of America. Titles and authors of poster sessions include: "Perceptual and Verbal Skills of…

  3. Genomic research, publics and experts in Latin America: Nation, race and body

    PubMed Central

    Wade, Peter; López-Beltrán, Carlos; Restrepo, Eduardo; Santos, Ricardo Ventura

    2015-01-01

    The articles in this issue highlight contributions that studies of Latin America can make to wider debates about the effects of genomic science on public ideas about race and nation. We argue that current ideas about the power of genomics to transfigure and transform existing ways of thinking about human diversity are often overstated. If a range of social contexts are examined, the effects are uneven. Our data show that genomic knowledge can unsettle and reinforce ideas of nation and race; it can be both banal and highly politicized. In this introduction, we outline concepts of genetic knowledge in society; theories of genetics, nation and race; approaches to public understandings of science; and the Latin American contexts of transnational ideas of nation and race. PMID:27479996

  4. Genomic research, publics and experts in Latin America: Nation, race and body.

    PubMed

    Wade, Peter; López-Beltrán, Carlos; Restrepo, Eduardo; Santos, Ricardo Ventura

    2015-12-01

    The articles in this issue highlight contributions that studies of Latin America can make to wider debates about the effects of genomic science on public ideas about race and nation. We argue that current ideas about the power of genomics to transfigure and transform existing ways of thinking about human diversity are often overstated. If a range of social contexts are examined, the effects are uneven. Our data show that genomic knowledge can unsettle and reinforce ideas of nation and race; it can be both banal and highly politicized. In this introduction, we outline concepts of genetic knowledge in society; theories of genetics, nation and race; approaches to public understandings of science; and the Latin American contexts of transnational ideas of nation and race.

  5. Los Alamos National Laboratory computer benchmarking 1982

    SciTech Connect

    Martin, J.L.

    1983-06-01

    Evaluating the performance of computing machinery is a continual effort of the Computer Research and Applications Group of the Los Alamos National Laboratory. This report summarizes the results of the group's benchmarking activities performed between October 1981 and September 1982, presenting compilation and execution times as well as megaflop rates for a set of benchmark codes. Tests were performed on the following computers: Cray Research, Inc. (CRI) Cray-1S; Control Data Corporation (CDC) 7600, 6600, Cyber 73, Cyber 825, Cyber 835, Cyber 855, and Cyber 205; Digital Equipment Corporation (DEC) VAX 11/780 and VAX 11/782; and Apollo Computer, Inc., Apollo.

  6. The NAS Parallel Benchmarks 2.1 Results

    NASA Technical Reports Server (NTRS)

    Saphir, William; Woo, Alex; Yarrow, Maurice

    1996-01-01

    We present performance results for version 2.1 of the NAS Parallel Benchmarks (NPB) on the following architectures: IBM SP2/66 MHz; SGI Power Challenge Array/90 MHz; Cray Research T3D; and Intel Paragon. The NAS Parallel Benchmarks are a widely-recognized suite of benchmarks originally designed to compare the performance of highly parallel computers with that of traditional supercomputers.

  7. Frito-Lay North America/NREL CRADA: Cooperative Research and Development Final Report, CRADA Number CRD-06-176

    SciTech Connect

    Walker, A.

    2013-06-01

Frito Lay North America (FLNA) requires technical assistance for the evaluation and implementation of renewable energy and energy efficiency projects in production facilities and distribution centers across North America. Services provided by NREL do not compete with those available in the private sector, but rather provide FLNA with expertise to create opportunities for the private sector renewable/efficiency industries and to inform FLNA decision making regarding cost-effective projects. Services include: identifying the most cost-effective project locations based on renewable energy resource data, utility data, incentives and other parameters affecting projects; assistance with feasibility studies; procurement specifications; design reviews; and other services to support FLNA in improving resource efficiency at facilities. This Cooperative Research and Development Agreement (CRADA) establishes the terms and conditions under which FLNA may access capabilities unique to the laboratory and required by FLNA. Each subsequent task issued under this umbrella agreement would include a scope-of-work, budget, schedule, and provisions for intellectual property specific to that task.

  8. Principles for an ETL Benchmark

    NASA Astrophysics Data System (ADS)

    Wyatt, Len; Caufield, Brian; Pol, Daniel

    Conditions in the marketplace for ETL tools suggest that an industry standard benchmark is needed. The benchmark should provide useful data for comparing the performance of ETL systems, be based on a meaningful scenario, and be scalable over a wide range of data set sizes. This paper gives a general scoping of the proposed benchmark and outlines some key decision points. The Transaction Processing Performance Council (TPC) has formed a development subcommittee to define and produce such a benchmark.

  9. Curing America's Quick-Fix Mentality: A Role for Federally Supported Educational Research.

    ERIC Educational Resources Information Center

    Florio, David H.

    1983-01-01

    Demands for immediate answers to education's problems affect federal support for needed long-term educational research programs. The National Institute of Education has made significant progress toward establishing valid research methods and a solid foundation for future research efforts, though restricted in its activities and denied due credit.…

  10. Hospital Energy Benchmarking Guidance - Version 1.0

    SciTech Connect

    Singer, Brett C.

    2009-09-08

    This document describes an energy benchmarking framework for hospitals. The document is organized as follows. The introduction provides a brief primer on benchmarking and its application to hospitals. The next two sections discuss special considerations including the identification of normalizing factors. The presentation of metrics is preceded by a description of the overall framework and the rationale for the grouping of metrics. Following the presentation of metrics, a high-level protocol is provided. The next section presents draft benchmarks for some metrics; benchmarks are not available for many metrics owing to a lack of data. This document ends with a list of research needs for further development.

  11. Using benchmarks for radiation testing of microprocessors and FPGAs

    SciTech Connect

    Quinn, Heather; Robinson, William H.; Rech, Paolo; Aguirre, Miguel; Barnard, Arno; Desogus, Marco; Entrena, Luis; Garcia-Valderas, Mario; Guertin, Steven M.; Kaeli, David; Kastensmidt, Fernanda Lima; Kiddie, Bradley T.; Sanchez-Clemente, Antonio; Reorda, Matteo Sonza; Sterpone, Luca; Wirthlin, Michael

    2015-12-01

Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. We describe the development process and report neutron test data for the hardware and software benchmarks.

  12. Sequoia Messaging Rate Benchmark

    SciTech Connect

    Friedley, Andrew

    2008-01-22

The purpose of this benchmark is to measure the maximal message rate of a single compute node. The first num_cores ranks are expected to reside on the 'core' compute node for which message rate is being tested. After that, the next num_nbors ranks are neighbors for the first core rank, the next set of num_nbors ranks are neighbors for the second core rank, and so on. For example, testing an 8-core node (num_cores = 8) with 4 neighbors (num_nbors = 4) requires 8 + 8 * 4 = 40 ranks. The first 8 of those 40 ranks are expected to be on the 'core' node being benchmarked, while the rest of the ranks are on separate nodes.
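The rank layout described above (for example, 8 + 8 * 4 = 40 ranks) can be computed with a small helper. The function name and return shape are illustrative, not part of the benchmark code.

```python
def rank_roles(num_cores, num_nbors):
    """Map MPI ranks to roles per the layout described above: the first
    num_cores ranks are on the core node, then each core rank i gets
    num_nbors neighbor ranks laid out consecutively."""
    total = num_cores + num_cores * num_nbors
    roles = {r: ("core", r) for r in range(num_cores)}
    for i in range(num_cores):
        for j in range(num_nbors):
            roles[num_cores + i * num_nbors + j] = ("neighbor-of", i)
    return total, roles

total, roles = rank_roles(num_cores=8, num_nbors=4)
print(total)  # 8 + 8 * 4 = 40 ranks
```

With this layout, rank 8 is the first neighbor of core rank 0 and rank 39 is the last neighbor of core rank 7, matching the description in the record.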

  13. Algebraic Multigrid Benchmark

    SciTech Connect

    2013-05-06

AMG2013 is a parallel algebraic multigrid solver for linear systems arising from problems on unstructured grids. It has been derived directly from the BoomerAMG solver in the hypre library, a large linear-solver library developed at the Center for Applied Scientific Computing (CASC) at LLNL. The driver provided with the benchmark can build various test problems. The default problem is a Laplace-type problem on an unstructured domain with various jumps and an anisotropy in one part.

  14. MPI Multicore Linktest Benchmark

    SciTech Connect

    Schulz, Martin

    2008-01-25

    The MPI Multicore Linktest (LinkTest) measures the aggregate bandwidth from/to a multicore node in a parallel system. It allows the user to specify a variety of different node layout and communication routine variations and reports the maximal observed bandwidth across all specified options. In particular, this benchmark is able to vary the number of tasks on the root node and thereby allows users to study the impact of multicore architectures on MPI communication performance.

  15. Practical Considerations when Using Benchmarking for Accountability in Higher Education

    ERIC Educational Resources Information Center

    Achtemeier, Sue D.; Simpson, Ronald D.

    2005-01-01

    The qualitative study on which this article is based examined key individuals' perceptions, both within a research university community and beyond in its external governing board, of how to improve benchmarking as an accountability method in higher education. Differing understanding of benchmarking revealed practical implications for using it as…

  16. Presidential Address 1997--Benchmarks for the Next Millennium.

    ERIC Educational Resources Information Center

    Baker, Pamela C.

    1997-01-01

    Reflects on the century's preeminent benchmarks, including the evolution in the lives of people with disabilities and the prevention of many causes of mental retardation. The ethical challenges of genetic engineering and diagnostic technology and the need for new benchmarks in policy, practice, and research are discussed. (CR)

  17. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite

    DTIC Science & Technology

    2016-02-01

ARL-TR-7585, February 2016, US Army Research Laboratory. Integrating the Nqueens Algorithm into a Parameterized Benchmark Suite, by Jamie K Infantolino and Mikayla Malley, Computational and Information Sciences Directorate. [Remainder of record text is fragmentary.]

  18. Educational Research in Latin America: From the Artisan to the Industrial Phase.

    ERIC Educational Resources Information Center

    Schiefelbein, Ernesto

    1990-01-01

    Analyzes Latin American educational research phases--the artisan and industrial. During the artisan phase (1950s-early 1970s), university scholars conducted isolated projects with limited access to information. Research centers, government supports, and information networks characterize the industrial period (post-mid 1970s). Argues information…

  19. Benchmark Problems for Spacecraft Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Leitner, Jesse A.; Burns, Richard D.; Folta, David C.

    2003-01-01

    To provide high-level focus to distributed space system flight dynamics and control research, several benchmark problems are suggested. These problems are not specific to any current or proposed mission, but instead are intended to capture high-level features that would be generic to many similar missions.

  20. Benchmarking Image Matching for Surface Description

    NASA Astrophysics Data System (ADS)

    Haala, Norbert; Stößel, Wolfgang; Gruber, Michael; Pfeifer, Norbert; Fritsch, Dieter

    2013-04-01

Semi-Global Matching algorithms have prompted a renaissance in processing stereoscopic data sets for surface reconstruction. This method can provide very dense point clouds with sampling distances close to the Ground Sampling Distance (GSD) of aerial images. EuroSDR, the pan-European organization for Spatial Data Research, has initiated a benchmark for dense image matching. The expected outcomes of this benchmark are assessments of suitability, quality measures for dense surface reconstructions, and run-time aspects. In particular, aerial image blocks of two sites covering two types of landscape (urban and rural) are analysed. The benchmark's participants provide their results with respect to several criteria, and an overall evaluation is given as a follow-up. Finally, point clouds of rural and urban surfaces delivered by very dense image matching algorithms and software packages are presented and the results are compared.

  1. Institutional Research in Emerging Countries of Southern Africa, Latin America, and the Middle East and North Africa: Global Frameworks and Local Practices

    ERIC Educational Resources Information Center

    Lange, Lis; Saavedra, F. Mauricio; Romano, Jeanine

    2013-01-01

    This chapter presents a synthesis of the conceptualization and practice of institutional research (IR) in higher education (HE) in emerging countries across Southern Africa, Latin America and the Middle East and North Africa (MENA) regions. The chapter contextualizes the growing need for IR in these regions, identifies problems and challenges…

  2. Summary of United States of America Treaty Verification Research and Development Program

    DTIC Science & Technology

    1993-02-01

[Record text is fragmentary.] The work described in this report was sponsored by the ... developed and physical parameters required by the model are being measured for model compounds. Research in remote spectroscopic detection of chemical ... directly coupled into the container wall has been developed and tested successfully. Research is ongoing on remote excitation and measurement. Only a ...

  3. Development and Evaluation of a Hematology-Oriented Clinical Research Training Program in Latin America.

    PubMed

    Sung, Lillian; Rego, Eduardo; Riva, Eloisa; Elwood, Jessica; Basso, Joe; Clayton, Charles P; Mikhael, Joseph

    2016-03-15

    The objectives of the study were to describe the development of a patient-oriented clinical research training program in a low- or middle-income country (LMIC) setting, to describe perceived benefits of the program and barriers to application, and to make recommendations for future training programs. The program was developed by the American Society of Hematology in collaboration with Latin American stakeholders and clinical researchers. Session types were didactic, small group, and one-on-one faculty/participant dyad formats. Outcomes were assessed by quantitative surveys of trainees and qualitative feedback from both trainees and faculty members. The program is an annual 2-day course specifically for Latin American hematologists. Through course evaluations, all trainees described that the didactic sessions were relevant. All session types were useful for gaining knowledge and skills, particularly one-on-one meetings. The potential for networking was highly valued. Barriers to trainee applications were the concerns that skill level, proposed research program, and knowledge of English were not sufficiently strong to warrant acceptance into the course, and financial costs of attendance. We have described the development and initial evaluation of a clinical research training program in a LMIC setting. We learned several valuable lessons that are applicable to other research training programs.

  4. [National health research systems in Latin America: a 14-country review].

    PubMed

    Alger, Jackeline; Becerra-Posada, Francisco; Kennedy, Andrew; Martinelli, Elena; Cuervo, Luis Gabriel

    2009-11-01

    This article discusses the main features of the national health research systems (NHRS) of Argentina, Bolivia, Brazil, Chile, Costa Rica, Cuba, Ecuador, El Salvador, Honduras, Panama, Paraguay, Peru, Uruguay, and Venezuela, based on documents prepared by their country experts who participated in the First Latin American Conference on Research and Innovation for Health held in April 2008, in Rio de Janeiro, Brazil. The review also includes sources cited in the reports, published scientific papers, and expert opinion, as well as regional secondary sources. Six countries reported having formal entities for health research governance and management: Brazil and Costa Rica's entities are led by their ministries of health; while Argentina, Cuba, Ecuador, and Venezuela have entities shared by their ministries of health and ministries of science and technology. Brazil and Ecuador each reported having a comprehensive national policy devoted specifically to health science, technology, and innovation. Argentina, Brazil, Costa Rica, Cuba, Ecuador, Panama, Paraguay, Peru, and Venezuela reported having established health research priorities. In conclusion, encouraging progress has been made, despite the structural and functional heterogeneity of the study countries' NHRS and their disparate levels of development. Instituting good NHRS governance/management is of utmost importance to how efficiently ministries of health, other government players, and society-at-large can tackle health research.

  5. Benchmarking: applications to transfusion medicine.

    PubMed

    Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M

    2012-10-01

Benchmarking is a structured, continuous, collaborative process in which comparisons for selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal.

  6. Physiology, propaganda, and pound animals: medical research and animal welfare in mid-twentieth century America.

    PubMed

    Parascandola, John

    2007-07-01

    In 1952, the University of Michigan physiologist Robert Gesell shocked his colleagues at the business meeting of the American Physiological Society by reading a prepared statement in which he claimed that some of the animal experimentation being carried out by scientists was inhumane. He especially attacked the National Society for Medical Research (NSMR), an organization that had been founded to defend animal experimentation. This incident was part of a broader struggle taking place at the time between scientists and animal welfare advocates with respect to what restrictions, if any, should be placed on animal research. A particularly controversial issue was whether or not pound animals should be made available to laboratories for research. Two of the prominent players in this controversy were the NSMR and the Animal Welfare Institute, founded and run by Gesell's daughter, Christine Stevens. This article focuses on the interaction between these two organizations within the broader context of the debate over animal experimentation in the mid-twentieth century.

  7. Samuel Fernberger's rejected doctoral dissertation: a neglected resource for the history of ape research in America.

    PubMed

    Dewsbury, Donald A

    2009-02-01

    I summarize a never-completed 1911 doctoral dissertation on ape behavior by Samuel Fernberger of the University of Pennsylvania. Included are observations on many behavioral patterns including sensory and perceptual function, learning, memory, attention, imagination, personality, and emotion in an orangutan and two chimpanzees. There are examples of behavior resembling insight, conscience, tool use and imitation. Language comprehension was good but speech production was minimal. The document appears to contradict a brief published article on the project by William Furness in that punishment was frequently used. The document is important for understanding Fernberger's early career, for anticipations of later research, and for understanding the status of ape research at the time.

  8. An overview of seventy years of research (1944–2014) on toxoplasmosis in Colombia, South America

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study reviews toxoplasmosis research in Colombia, beginning with the first report of Toxoplasma gondii infection in 1944. Here we summarize prevalence of T. gondii in humans and animals and associated correlates of infection, clinical spectrum of disease in humans, and genetic diversity of T. g...

  9. Educating America's Youth: What Makes a Difference. Child Trends Research Brief.

    ERIC Educational Resources Information Center

    Redd, Zakia; Brooks, Jennifer; McGarvey, Ayelish M.

    Because an educated workforce is recognized as essential to ensuring competitiveness in a global economy, it is considered an issue of national concern how teens in the United States are faring educationally, especially compared with teens worldwide. This research brief summarizes the key findings from a larger review of more than 300 research…

  10. Research on the Textbook Publishing Industry in the United States of America

    ERIC Educational Resources Information Center

    Watt, Michael G.

    2007-01-01

    The purpose of this article was to review published research literature about the publishing process and the roles of participants in this process on the textbook publishing industry in the USA. The contents of books, collected works, reports and journal articles were analysed, and summaries of the contents were then organised chronologically to…

  11. The America Society of Sugar Beet Technologist, advancing sugarbeet research for 75 years

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The American Society of Sugar Beet Technologists (ASSBT) was created 75 years ago when a group of researchers that had been meeting informally as the Sugarbeet Roundtable adopted the constitution and by-laws that provided the basis for an organization that continues to foster the exchange of ideas a...

  12. America Needs a New National Research and Development Center Focused on Adult Learning

    ERIC Educational Resources Information Center

    Comings, John

    2007-01-01

    Since 1991, the U.S. has had a national research and development (R&D) center focused on programs that help adults to improve their language, literacy, and numeracy skills; to acquire a General Educational Development (GED) or other high school certification; and to transition into postsecondary education or training. For the first five years,…

  13. The Mathematical Miseducation of America's Youth: Ignoring Research and Scientific Study in Education.

    ERIC Educational Resources Information Center

    Battista, Michael T.

    1999-01-01

    Because traditional instruction ignores students' personal construction of mathematical meaning, mathematical thought development is not properly nurtured. Several issues must be addressed, including adults' ignorance of math- and student-learning processes, identification of math-education research specialists, the myth of coverage, testing…

  14. Research on the Textbook Selection Process in the United States of America

    ERIC Educational Resources Information Center

    Watt, Michael G.

    2009-01-01

    The purpose of this article was to review published research literature about procedures used to select textbooks in the USA. The contents of books, collected works, reports and journal articles were analysed, and summaries of the contents were then organised chronologically to present a commentary on the topic. The results showed that procedures…

  15. Latin America and the Caribbean: A Survey of Distance Education 1991. New Papers on Higher Education: Studies and Research 5.

    ERIC Educational Resources Information Center

    Carty, Joan

    Country profiles compiled through a survey of distance education in Latin America and the Caribbean form the contents of this document. Seventeen countries were surveyed in Latin America: Argentina; Bolivia; Brazil; Chile; Colombia; Costa Rica; Ecuador; French Guiana; Guatemala; Guyana; Honduras; Mexico; Nicaragua; Panama; Peru; Uruguay; and…

  16. Research on Biodiversity and Climate Change at a Distance: Collaboration Networks between Europe and Latin America and the Caribbean.

    PubMed

    Dangles, Olivier; Loirat, Jean; Freour, Claire; Serre, Sandrine; Vacher, Jean; Le Roux, Xavier

    2016-01-01

Biodiversity loss and climate change are both globally significant issues that must be addressed through collaboration across countries and disciplines. With the December 2015 COP21 climate conference in Paris and the recent creation of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), it has become critical to evaluate the capacity for global research networks to develop at the interface between biodiversity and climate change. In the context of the European Union (EU) strategy to stand as a world leader in tackling global challenges, the European Commission has promoted ties between the EU and Latin America and the Caribbean (LAC) in science, technology and innovation. However, it is not clear how these significant interactions impact scientific cooperation at the interface of biodiversity and climate change. We looked at research collaborations between two major regions, the European Research Area (ERA) and LAC, that addressed both biodiversity and climate change. We analysed the temporal evolution of these collaborations, whether they were led by ERA or LAC teams, and which research domains they covered. We surveyed publications listed on the Web of Science that were authored by researchers from both the ERA and LAC and that were published between 2003 and 2013. We also ran similar analyses on other topics and other continents to provide baseline comparisons. Our results revealed a steady increase in scientific co-authorships between ERA and LAC countries as a result of the increasingly complex web of relationships that has been woven among scientists from the two regions. The ERA-LAC co-authorship increase for biodiversity and climate change was higher than those reported for other topics and for collaboration with other continents. We also found strong differences in international collaboration patterns within the LAC: co-publications were fewest from researchers in low- and lower-middle-income countries and most prevalent from

  17. Research on Biodiversity and Climate Change at a Distance: Collaboration Networks between Europe and Latin America and the Caribbean

    PubMed Central

    Dangles, Olivier; Loirat, Jean; Freour, Claire; Serre, Sandrine; Vacher, Jean; Le Roux, Xavier

    2016-01-01

Biodiversity loss and climate change are both globally significant issues that must be addressed through collaboration across countries and disciplines. With the December 2015 COP21 climate conference in Paris and the recent creation of the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES), it has become critical to evaluate the capacity for global research networks to develop at the interface between biodiversity and climate change. In the context of the European Union (EU) strategy to stand as a world leader in tackling global challenges, the European Commission has promoted ties between the EU and Latin America and the Caribbean (LAC) in science, technology and innovation. However, it is not clear how these significant interactions impact scientific cooperation at the interface of biodiversity and climate change. We looked at research collaborations between two major regions—the European Research Area (ERA) and LAC—that addressed both biodiversity and climate change. We analysed the temporal evolution of these collaborations, whether they were led by ERA or LAC teams, and which research domains they covered. We surveyed publications listed on the Web of Science that were authored by researchers from both the ERA and LAC and that were published between 2003 and 2013. We also ran similar analyses on other topics and other continents to provide baseline comparisons. Our results revealed a steady increase in scientific co-authorships between ERA and LAC countries as a result of the increasingly complex web of relationships that has been woven among scientists from the two regions. The ERA-LAC co-authorship increase for biodiversity and climate change was higher than those reported for other topics and for collaboration with other continents. We also found strong differences in international collaboration patterns within the LAC: co-publications were fewest from researchers in low- and lower-middle-income countries and most prevalent from

  18. New opportunities offered by Cubesats for space research in Latin America: The SUCHAI project case

    NASA Astrophysics Data System (ADS)

    Diaz, M. A.; Zagal, J. C.; Falcon, C.; Stepanova, M.; Valdivia, J. A.; Martinez-Ledesma, M.; Diaz-Peña, J.; Jaramillo, F. R.; Romanova, N.; Pacheco, E.; Milla, M.; Orchard, M.; Silva, J.; Mena, F. P.

    2016-11-01

During the last decade, a very small, standardized satellite, the Cubesat, emerged as a low-cost, fast-development tool for space and technology research. Although its genesis is related to education, the change in paradigm presented by this satellite platform has motivated several countries, institutions, and companies to invest in a variety of technologies aimed at improving Cubesat capabilities while lowering the costs of space missions. Following that trend, Latin American institutions, mostly universities, have started to develop Cubesat missions. This article describes some of the Latin American projects in this area. In particular, we discuss the achievements and scientific grounds upon which the first Cubesat projects in Chile were based and the implications that those projects have had on pursuing satellite-based research in the country and in collaboration with other countries of the region.

  19. Priming the Innovation Pump: America Needs More Scientists, Engineers, and Basic Research

    DTIC Science & Technology

    2011-01-01

students through its Science, Mathematics, and Research for Transformation (SMART) program. SMART funds U.S. S&E students' education costs in exchange… slide 5). Through its Engineers in the Classroom program, LM is building school partnerships to create a pipeline of future S&E employees. From high… Classroom need to expand in size and numbers, because it can take 22–25 years to grow an experienced engineer from entry-level talent. Meanwhile, the

  20. Core Benchmarks Descriptions

    SciTech Connect

    Pavlovichev, A.M.

    2001-05-24

Current regulations governing the design of new fuel cycles for nuclear power installations require a computational justification performed with certified computer codes. This guarantees that the calculated results will fall within the limits of the declared uncertainties indicated in a certificate issued by the Gosatomnadzor of the Russian Federation (GAN) for the corresponding computer code. A formal justification of the declared uncertainties is the comparison of results obtained with a commercial code against experiments, or against computational tests calculated to a defined uncertainty by certified precision codes of the MCU type or others. The current level of international cooperation is enlarging the bank of experimental and computational benchmarks acceptable for certifying the commercial codes used to design fuel loadings with MOX fuel. In particular, the work of compiling the list of computational benchmarks for certification of the TVS-M code, as applied to MOX fuel assembly calculations, is practically finished. The results of these activities are presented.

  1. Benchmarking concentrating photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Duerr, Fabian; Muthirayan, Buvaneshwari; Meuret, Youri; Thienpont, Hugo

    2010-08-01

Integral to photovoltaics is the need for improved economic viability: photovoltaic technology must harness more light at less cost. A large variety of concentrating photovoltaic concepts is under pursuit, so a flexible evaluation is crucial for benchmarking their cost-performance in a detailed profitability analysis. To save time and capital, the cost-performance of a complete solar energy system can be estimated with computer-aided modeling. In this work a benchmark tool based on a modular programming concept is introduced. The overall implementation is done in MATLAB, while the Advanced Systems Analysis Program (ASAP) is used for ray-tracing calculations. This allows a flexible and extendable structuring of all important modules, namely advanced source modeling including time and location dependence, and advanced optical analysis of various optical designs, to obtain an evaluation of the figure of merit. An important figure of merit, the energy yield of a given photovoltaic system at a geographical position over a specific period, can then be calculated.
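The energy-yield figure of merit described in this abstract amounts to integrating system output over time-resolved irradiance. The following is a hypothetical Python sketch of that calculation, not the authors' MATLAB/ASAP tool; the function name, efficiency values, and irradiance samples are all illustrative assumptions.

```python
def energy_yield_kwh(irradiance_w_m2, optical_eff, cell_eff,
                     aperture_m2, dt_hours):
    """Energy yield in kWh: sum over samples of
    irradiance x optical efficiency x cell efficiency x aperture x timestep."""
    total_wh = sum(g * optical_eff * cell_eff * aperture_m2 * dt_hours
                   for g in irradiance_w_m2)
    return total_wh / 1000.0

# Four hourly direct-normal-irradiance samples (W/m^2) for a 1 m^2 aperture
dni = [700.0, 850.0, 900.0, 600.0]
e = energy_yield_kwh(dni, optical_eff=0.85, cell_eff=0.38,
                     aperture_m2=1.0, dt_hours=1.0)
# e is about 0.985 kWh for this toy afternoon
```

A real tool would additionally model spectral content, tracking error, and temperature-dependent cell efficiency per timestep, which is where the ray-tracing and source-modeling modules come in.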

  2. Ecology and biology of paddlefish in North America: historical perspectives, management approaches, and research priorities

    USGS Publications Warehouse

    Jennings, Cecil A.; Zigler, Stephen J.

    2000-01-01

    Paddlefish (Polyodon spathula, Polyodontidae) are large, mostly-riverine fish that once were abundant in medium- to large-sized river systems throughout much of the central United States. Concern for paddlefish populations has grown from a regional fisheries issue to one of national importance for the United States. In 1989, the U.S. Fish and Wildlife Service (USFWS) was petitioned to list paddlefish as a federally threatened species under the Endangered Species Act. The petition was not granted, primarily because of a lack of empirical data on paddlefish population size, age structure, growth, or harvest rates across the present 22-state range. Nonetheless, concern for paddlefish populations prompted the USFWS to recommend that paddlefish be protected through the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). The addition of paddlefish to Appendix II of CITES, which was approved in March 1992, provides a mechanism to curtail illegal trade in paddlefish and their parts and supports a variety of conservation plans. Paddlefish populations have been negatively affected by overharvest, river modifications, and pollution, but the paddlefish still occupies much of its historic range and most extant populations seem to be stable. Although many facets of paddlefish biology and ecology are well understood, the lack of information on larval and juvenile ecology, mechanisms that determine recruitment, population size and vital rates, interjurisdictional movements, and the effects of anthropogenic activities present significant obstacles for managing paddlefish populations. Questions about the size and structure of local populations, and how such populations are affected by navigation traffic, dams, and pollution are regarded as medium priority areas for future research. The availability of suitable spawning habitat and overall reproductive success in impounded rivers are unknown and represent critical areas for future research

  3. The Applied Meteorology Unit: Nineteen Years Successfully Transitioning Research into Operations for America's Space Program

    NASA Technical Reports Server (NTRS)

    Madura, John T.; Bauman, William H.; Merceret, Francis J.; Roeder, William P.; Brody, Frank C.; Hagemeyer, Bartlett C.

    2010-01-01

The Applied Meteorology Unit (AMU) provides technology transition and technique development to improve operational weather support to the Space Shuttle and the entire American space program. The AMU is funded and managed by NASA and operated by a contractor that provides five meteorologists with a diverse mix of advanced degrees, operational experience, and associated skills including data processing, statistics, and the development of graphical user interfaces. The AMU's primary customers are the U.S. Air Force 45th Weather Squadron at Patrick Air Force Base, the National Weather Service Spaceflight Meteorology Group at NASA Johnson Space Center, and the National Weather Service Melbourne, FL, Forecast Office. The AMU has transitioned research into operations for nineteen years and has worked on a wide range of topics, including new forecasting techniques for lightning probability, synoptic peak winds, convective winds, and summer severe weather; satellite tools to predict anvil cloud trajectories and to evaluate camera line of sight for Space Shuttle launches; optimized radar scan strategies; evaluation and implementation of local numerical models; evaluation of weather sensors; and many more. The AMU has completed 113 projects, with 5 more scheduled to be completed by the end of 2010. During this rich history, the AMU and its customers have learned many lessons on how to effectively transition research into operations. These lessons include collocating with the operational customer and periodically visiting geographically separated customers, operator-submitted projects, a consensus tasking process, use of operator primary advocates for each project, customer-AMU liaisons with experience in both operations and research, flexibility in adapting the project plan based on lessons learned during the project, and incorporating training and other transition assistance into the project plans. Operator involvement has been critical to the AMU's remarkable success and many awards.

  4. Meeting report: the Schizophrenia International Research Society (SIRS) South America Conference (August 5-7, 2011).

    PubMed

    Massuda, Raffael; Chaves, Cristiano; Trzesniak, Clarissa; Machado-de-Sousa, Joao P; Zanetti, Marcus V; Murray, Robin M; Gattaz, Wagner F; Busatto, Geraldo F

    2012-05-01

On August 5-7, 2011, São Paulo was home to the first regional meeting of the Schizophrenia International Research Society (SIRS). Over 400 people from many countries attended the activities and contributed around 200 submissions for oral and poster presentations. This article summarizes the data presented during the meeting, with an emphasis on the plenary talks and the sessions for short oral presentations. For information on the poster presentations, readers are referred to the special issue of Revista de Psiquiatria Clínica (Brazil) dedicated to the conference (available at: http://www.hcnet.usp.br/ipq/revista/vol38/s1/).

  5. I too, am America: a review of research on systemic lupus erythematosus in African-Americans

    PubMed Central

    Williams, Edith M; Bruner, Larisa; Adkins, Alyssa; Vrana, Caroline; Logan, Ayaba; Kamen, Diane; Oates, James C

    2016-01-01

Systemic lupus erythematosus (SLE) is a multi-organ autoimmune disorder that can cause significant morbidity and mortality. A large body of evidence has shown that African-Americans experience the disease more severely than other racial-ethnic groups. Relevant literature for the years 2000 to August 2015 was obtained from systematic searches of PubMed, Scopus, and the EBSCOHost platform (which includes MEDLINE, CINAHL, and others) to evaluate research focused on SLE in African-Americans. Thirty-six of the 1502 articles were classified according to their level of evidence. The systematic review of the literature reported a wide range of adverse outcomes in African-American SLE patients and risk factors observed in other mono- and multi-ethnic investigations. Studies limited to African-Americans with SLE identified novel methods for more precise ascertainment of risk and observed novel findings that had not been previously reported in African-Americans with SLE. Both environmental and genetic studies included in this review have highlighted unique African-American populations in an attempt to isolate risk attributable to African ancestry, and observed an increased genetic influence on overall disease in this cohort. The review also revealed emerging research in the areas of quality of life, race-tailored interventions, and self-management. This review reemphasizes the importance of additional studies to better elucidate the natural history of SLE in African-Americans and to optimize therapeutic strategies for those identified as being at high risk. PMID:27651918

  6. I too, am America: a review of research on systemic lupus erythematosus in African-Americans.

    PubMed

    Williams, Edith M; Bruner, Larisa; Adkins, Alyssa; Vrana, Caroline; Logan, Ayaba; Kamen, Diane; Oates, James C

    2016-01-01

Systemic lupus erythematosus (SLE) is a multi-organ autoimmune disorder that can cause significant morbidity and mortality. A large body of evidence has shown that African-Americans experience the disease more severely than other racial-ethnic groups. Relevant literature for the years 2000 to August 2015 was obtained from systematic searches of PubMed, Scopus, and the EBSCOHost platform (which includes MEDLINE, CINAHL, and others) to evaluate research focused on SLE in African-Americans. Thirty-six of the 1502 articles were classified according to their level of evidence. The systematic review of the literature reported a wide range of adverse outcomes in African-American SLE patients and risk factors observed in other mono- and multi-ethnic investigations. Studies limited to African-Americans with SLE identified novel methods for more precise ascertainment of risk and observed novel findings that had not been previously reported in African-Americans with SLE. Both environmental and genetic studies included in this review have highlighted unique African-American populations in an attempt to isolate risk attributable to African ancestry, and observed an increased genetic influence on overall disease in this cohort. The review also revealed emerging research in the areas of quality of life, race-tailored interventions, and self-management. This review reemphasizes the importance of additional studies to better elucidate the natural history of SLE in African-Americans and to optimize therapeutic strategies for those identified as being at high risk.

  7. Sources of Inequities in Rural America: Implications for Rural Community Development and Research. Community Development Research Series.

    ERIC Educational Resources Information Center

    Fujimoto, Isao; Zone, Martin

    As part of a series prepared to acquaint small community officials with information on the latest community related research findings at the University of California at Davis, this monograph explicates the way in which tax structure, rural development assumptions, and even rural development policies and subsidies contribute to the inequities found…

  8. Hyperspectral Remote Sensing and Ecological Modeling Research and Education at Mid America Remote Sensing Center (MARC): Field and Laboratory Enhancement

    NASA Technical Reports Server (NTRS)

    Cetin, Haluk

    1999-01-01

The purpose of this project was to establish a new hyperspectral remote sensing laboratory at the Mid-America Remote Sensing Center (MARC), dedicated to in situ and laboratory measurements of environmental samples and to the manipulation, analysis, and storage of remotely sensed data for environmental monitoring and for research in ecological modeling using hyperspectral remote sensing at MARC, one of three research facilities of the Center of Reservoir Research at Murray State University (MSU), a Kentucky Commonwealth Center of Excellence. The equipment purchased (a FieldSpec FR portable spectroradiometer and peripherals, and ENVI hyperspectral data processing software) allowed MARC to provide hands-on experience, education, and training for students of the Department of Geosciences in quantitative remote sensing using hyperspectral data, Geographic Information Systems (GIS), digital image processing (DIP), and computer, geological, and geophysical mapping; to provide field support to researchers and students collecting in situ and laboratory measurements of environmental data; to create a spectral library of the cover types; and to establish a World Wide Web server providing the spectral library to other academic, state, and Federal institutions. Much of the research will soon be published in scientific journals. A World Wide Web page has been created at the web site of MARC. Results of this project fall into two categories: education and research accomplishments. The Principal Investigator (PI) modified remote sensing and DIP courses to introduce students to in situ field spectra and laboratory remote sensing studies for environmental monitoring in the region, using the new equipment in the courses. The PI collected in situ measurements with the spectroradiometer for the ER-2 mission to Puerto Rico project for the Moderate Resolution Imaging Spectrometer (MODIS) Airborne Simulator (MAS). Currently MARC is mapping water quality in Kentucky Lake and

  9. Cleanroom energy benchmarking results

    SciTech Connect

    Tschudi, William; Xu, Tengfang

    2001-09-01

A utility market transformation project studied energy use and identified energy efficiency opportunities in cleanroom HVAC design and operation for fourteen cleanrooms. This paper presents the results of this work and relevant observations. Cleanroom owners and operators know that cleanrooms are energy intensive but have little information with which to compare their cleanroom's performance over time, or against others. Direct comparison of energy performance by traditional means, such as watts/ft², is not a good indicator given the wide range of industrial processes and cleanliness levels occurring in cleanrooms. In this project, metrics allow direct comparison of the efficiency of HVAC systems and components. Energy and flow measurements were taken to determine actual HVAC system energy efficiency. The results confirm a wide variation in operating efficiency, and they identify other, non-energy operating problems. Improvement opportunities were identified at each of the benchmarked facilities. Analysis of the best performing systems and components is summarized, as are areas for additional investigation.

  10. Benchmarking monthly homogenization algorithms

    NASA Astrophysics Data System (ADS)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities, modeled as a Poisson process with normally distributed breakpoint sizes, were added to the simulated datasets. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing-data periods, and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study, as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics, including (i) the centered root-mean-square error relative to the true homogeneous values at various averaging scales, (ii) the error in linear trend estimates, and (iii) traditional contingency skill scores. The metrics were computed using both the individual station series and the network-average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
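The first performance metric named in this abstract, the centered root-mean-square error, can be sketched as follows. This is a hypothetical Python illustration, not the HOME benchmark code; "centered" is taken here to mean that each series has its own mean removed before differencing, so a constant offset between a homogenized series and the truth is not penalized.

```python
import math

def centered_rmse(homogenized, truth):
    """RMS difference between two series after removing each series'
    own mean (a constant bias therefore scores zero)."""
    n = len(homogenized)
    mh = sum(homogenized) / n
    mt = sum(truth) / n
    sq = sum(((h - mh) - (t - mt)) ** 2
             for h, t in zip(homogenized, truth))
    return math.sqrt(sq / n)

truth = [10.0, 11.0, 12.0, 13.0]
biased = [10.5, 11.5, 12.5, 13.5]   # constant +0.5 offset
broken = [10.0, 11.0, 13.0, 14.0]   # +1.0 break after the midpoint
# centered_rmse(biased, truth) -> 0.0
# centered_rmse(broken, truth) -> 0.5
```

The contrast between the two examples shows why such a metric suits homogenization benchmarks: it ignores harmless calibration offsets while penalizing the break-type inhomogeneities the algorithms are meant to remove.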

  11. Green utilities for research and eco-tourist communities, Rio Bravo, Belize, Central America

    SciTech Connect

    Jackson, O.

    1997-12-31

    Programme for Belize (PFB), a non-governmental organization which owns and manages the Rio Bravo Conservation and Management Area (RBCMA), a 229,000 acre section of subtropical rainforest in northwestern Belize, is developing a series of research and eco-tourism developments as sustainable development projects. Guided by a comprehensive Sustainable Infrastructure Plan completed by Caribbean Infra-Tech, Inc. (CIT) in 1995, PFB adopted an organizational goal of implementing 100% green renewable energy-based utilities for their two major development sites: La Milpa and Hill Bank stations. To date, PFB has constructed or installed over 20 kW of standalone PV power, sustainable water supply systems, recycling waste treatment systems, and a model sustainable Dormitory and Bath House facility in the RBCMA. In addition, a Resource Conservation and Management Program (RCMP), which is to guide ongoing visitor orientation, staff training, and sustainable systems operations and maintenance, is now being prepared for immediate implementation. In this paper, the design and technical performance of the solar (PV) electric power plants, PV water pumping, solar water heating and other green utility systems will be assessed.

  12. Investigation of comparative effectiveness research in Asia, Europe, and North America

    PubMed Central

    Patel, Isha; Rarus, Rachel; Tan, Xi; Lee, EK; Guy, Jason; Ahmad, Akram; Chang, Jongwha

    2015-01-01

Comparative effectiveness research (CER) is an important branch of pharmacoeconomics that systematically studies and evaluates the cost-effectiveness of medical interventions. CER plays an instrumental role in guiding government public health policy programs and insurance. Countries throughout the world use different methods of CER to help make medical decisions that provide optimal therapy at a reduced cost. Expenses to the healthcare system continue to rise, and CER is one way in which expenses could be curbed in the future, by applying cost-effectiveness evidence to clinical decisions. China, India, South Korea, and the United Kingdom are of essential focus because these countries' economies and healthcare expenses continue to expand. The structures and use of CER are diverse throughout these countries, and each is of prime importance. By conducting this thorough comparison of CER in different nations, strategies and organizational setups from different countries can be applied to help guide public health and medical decision-making, in order to continue to expand the establishment and role of CER programs. The patient-centered medical home has been created to help reduce costs in the primary care sector and to help improve the effectiveness of therapy. Barriers to CER are also important, as many stakeholders need to be able to work together to provide the best CER evidence. The advancement of CER in multiple countries throughout the world provides a possible way of reducing costs to the healthcare system in an age of expanding expenses. PMID:26729947

  13. Benchmark problems for numerical implementations of phase field models

    SciTech Connect

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; Warren, J.; Heinonen, O. G.

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
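The "simple spinodal decomposition model" referred to in this abstract is typically a Cahn-Hilliard equation. As a rough illustration of that physics only (not the CHiMaD/NIST benchmark specification, whose exact free energy, domains, and parameters are defined in the paper), here is a minimal 1-D explicit-Euler sketch with an assumed double-well free energy f(c) = c^2 (1 - c)^2:

```python
import random

def laplacian(a):
    # periodic 1-D Laplacian with unit grid spacing
    n = len(a)
    return [a[(i - 1) % n] - 2 * a[i] + a[(i + 1) % n] for i in range(n)]

def cahn_hilliard_step(c, dt=0.01, kappa=1.0):
    """One explicit-Euler step of c_t = lap(mu), mu = f'(c) - kappa*lap(c),
    with f'(c) = 2 c (1 - c) (1 - 2c) from the double-well f above."""
    lap_c = laplacian(c)
    mu = [2 * ci * (1 - ci) * (1 - 2 * ci) - kappa * li
          for ci, li in zip(c, lap_c)]
    lap_mu = laplacian(mu)
    return [ci + dt * li for ci, li in zip(c, lap_mu)]

random.seed(0)
# start near the unstable average composition c = 0.5 with small noise
c = [0.5 + 0.01 * (random.random() - 0.5) for _ in range(64)]
mass0 = sum(c)
for _ in range(500):
    c = cahn_hilliard_step(c)
# the dynamics are conservative: total solute is unchanged as the
# noise coarsens toward solute-rich and solute-poor regions
```

Benchmark problems like those proposed in the paper pin down exactly such choices (free energy, gradient coefficient kappa, domain, boundary conditions, and reported quantities) so that different codes and time-stepping schemes can be compared on equal footing.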

  14. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.

  15. To Boldly Go. America's Next Era in Space: New Frontiers in Climate Research

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Dr. France Cordova, NASA's Chief Scientist, chaired this, the fourth seminar in the NASA Administrator's Seminar Series. She introduced NASA Administrator, Daniel S. Goldin, who greeted the attendees, and in his opening remarks said that human beings have a need to understand the what and why of the forces of nature and of people, and the stresses on the planet Earth. The first speaker, Dr. Ellen Mosley-Thompson of Ohio State University discussed the many things that scientists have learned from ice cores obtained in Peru and the Antarctic. The next speaker, Dr. Michael McElroy of Harvard University, is active in environmental research. He noted that insurance companies need to know more about the physics and chemistry of weather in order to avoid bankruptcy; that the greenhouse effect, which is good because it reflects heat, is being changed, and we don't know the rules. In the discussion that followed, Goldin asked if the present technology for measuring circulation of air and water and contents of the atmosphere is worth the cost. Drs. McElroy and Mosley-Thompson noted that the historic record in an ice core is endangered by ice melts; that in the last 10 years we've learned that tropics change; that the water vapor in the tropics is critical right now; that clouds absorb short-wave radiation; and that there is a need to improve measurements of atmospheric contents, the development of models, and the understanding of basic physics. We also need to understand parameters for detecting climate change, water, water temperature, and be able to provide fundamental information. Additional information is included in the original extended abstract.

  16. Building America Best Practices Series Volume 15: 40% Whole-House Energy Savings in the Hot-Humid Climate

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.; Cole, Pamala C.; Adams, Karen; Noonan, Christine F.

    2011-09-01

    This best practices guide is the 15th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guidebook is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the hot-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased cost to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices, roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  17. Building America Best Practices Series Volume 16: 40% Whole-House Energy Savings in the Mixed-Humid Climate

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.; Cole, Pamala C.; Adams, Karen; Butner, Ryan S.; Ortiz, Sallie J.

    2011-09-01

    This best practices guide is the 16th in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guidebook is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the mixed-humid climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased cost to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices, roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. Requirements of the 2012 IECC and 2012 IRC are also noted in text and tables throughout the guide. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  18. Challenges and Benchmarks in Bioimage Analysis.

    PubMed

    Kozubek, Michal

    2016-01-01

    Similar to the medical imaging community, the bioimaging community has recently realized the need to benchmark various image analysis methods to compare their performance and assess their suitability for specific applications. Challenges sponsored by prestigious conferences have proven to be an effective means of encouraging benchmarking and new algorithm development for a particular type of image data. Bioimage analysis challenges have recently complemented medical image analysis challenges, especially in the case of the International Symposium on Biomedical Imaging (ISBI). This review summarizes recent progress in this respect and describes the general process of designing a bioimage analysis benchmark or challenge, including the proper selection of datasets and evaluation metrics. It also presents examples of specific target applications and biological research tasks that have benefited from these challenges with respect to the performance of automatic image analysis methods that are crucial for the given task. Finally, available benchmarks and challenges are analysed in terms of their common features, possible classifications, and the implications drawn from their results.

  19. Internal Benchmarking for Institutional Effectiveness

    ERIC Educational Resources Information Center

    Ronco, Sharron L.

    2012-01-01

    Internal benchmarking is an established practice in business and industry for identifying best in-house practices and disseminating the knowledge about those practices to other groups in the organization. Internal benchmarking can be done with structures, processes, outcomes, or even individuals. In colleges or universities with multicampuses or a…

  20. Benchmarking: A Process for Improvement.

    ERIC Educational Resources Information Center

    Peischl, Thomas M.

    One problem with the outcome-based measures used in higher education is that they measure quantity but not quality. Benchmarking, or the use of some external standard of quality to measure tasks, processes, and outputs, is partially solving that difficulty. Benchmarking allows for the establishment of a systematic process to indicate if outputs…

  1. FireHose Streaming Benchmarks

    SciTech Connect

    Karl Anderson, Steve Plimpton

    2015-01-27

    The FireHose Streaming Benchmarks are a suite of stream-processing benchmarks defined to enable comparison of streaming software and hardware, both quantitatively, in terms of the rate at which they can process data, and qualitatively, by judging the effort involved to implement and run the benchmarks. Each benchmark has two parts. The first is a generator, which produces and outputs datums at a high rate in a specific format. The second is an analytic, which reads the stream of datums and is required to perform a well-defined calculation on the collection of datums, typically to find anomalous datums that have been created in the stream by the generator. The FireHose suite provides code for the generators, sample code for the analytics (which users are free to re-implement in their own custom frameworks), and a precise definition of each benchmark calculation.
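The generator/analytic split described above can be sketched as follows. This is not the actual FireHose code or its datum format; the key space, bias probabilities, and flagging threshold below are invented for illustration. It only shows the shape of such a benchmark: a high-rate producer of keyed datums, and a consumer that must flag the anomalously biased keys from the stream.

```python
import random

def generator(n_datums, keys=1000, anomalous_frac=0.05, seed=1):
    """Emit (key, bit) datums; anomalous keys produce 1-bits rarely."""
    rng = random.Random(seed)
    anomalous = set(range(int(keys * anomalous_frac)))  # keys 0..49 are biased
    for _ in range(n_datums):
        k = rng.randrange(keys)
        p = 0.1 if k in anomalous else 0.5
        yield k, 1 if rng.random() < p else 0

def analytic(stream, window=24, threshold=5):
    """Flag a key as anomalous once its first `window` datums show few 1-bits."""
    counts, flagged = {}, set()
    for key, bit in stream:
        seen, ones = counts.get(key, (0, 0))
        seen, ones = seen + 1, ones + bit
        counts[key] = (seen, ones)
        if seen == window and ones <= threshold:
            flagged.add(key)
    return flagged

flagged = analytic(generator(200_000))
# Most biased keys are flagged, with a small false-positive rate on fair keys.
```

In the real suite the generator and analytic run as separate processes, and the point of the benchmark is sustaining this calculation at a fixed high datum rate.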

  2. Applications of Integral Benchmark Data

    SciTech Connect

    Giuseppe Palmiotti; Teruhiko Kugo; Fitz Trumble; Albert C. Kahler; Dale Lancaster

    2014-10-09

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) provide evaluated integral benchmark data that may be used for validation of reactor physics / nuclear criticality safety analytical methods and data, nuclear data testing, advanced modeling and simulation, and safety analysis licensing activities. The handbooks produced by these programs are used in over 30 countries. Five example applications are presented in this paper: (1) Use of IRPhEP Data in Uncertainty Analyses and Cross Section Adjustment, (2) Uncertainty Evaluation Methods for Reactor Core Design at JAEA Using Reactor Physics Experimental Data, (3) Application of Benchmarking Data to a Broad Range of Criticality Safety Problems, (4) Cross Section Data Testing with ICSBEP Benchmarks, and (5) Use of the International Handbook of Evaluated Reactor Physics Benchmark Experiments to Support the Power Industry.

  3. Update on the American College of Rheumatology/Spondyloarthritis Research and Treatment Network/Spondylitis Association of America axial spondyloarthritis treatment guidelines project.

    PubMed

    Ward, Michael M

    2014-06-01

    The American College of Rheumatology, the Spondyloarthritis Research and Treatment Network, and the Spondylitis Association of America have begun collaborating on a project to develop treatment guidelines for axial spondyloarthritis. The project will use the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) method, which is based on systematic literature reviews and quantitative evidence summaries, to develop treatment recommendations for the use of pharmacological interventions, rehabilitation, surgery, preventive care, and disease monitoring in patients with ankylosing spondylitis and axial spondyloarthritis.

  4. UPDATE ON THE AMERICAN COLLEGE OF RHEUMATOLOGY/SPONDYLOARTHRITIS RESEARCH AND TREATMENT NETWORK/SPONDYLITIS ASSOCIATION OF AMERICA AXIAL SPONDYLOARTHRITIS TREATMENT GUIDELINES PROJECT

    PubMed Central

    Ward, Michael M.

    2014-01-01

    The American College of Rheumatology, the Spondyloarthritis Research and Treatment Network, and the Spondylitis Association of America have begun collaborating on a project to develop treatment guidelines for axial spondyloarthritis. The project will use the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) method, which is based on systematic literature reviews and quantitative evidence summaries, to develop treatment recommendations for the use of pharmacological interventions, rehabilitation, surgery, preventive care, and disease monitoring in patients with ankylosing spondylitis and axial spondyloarthritis. PMID:24810702

  5. Aeroelasticity Benchmark Assessment: Subsonic Fixed Wing Program

    NASA Technical Reports Server (NTRS)

    Florance, Jennifer P.; Chwalowski, Pawel; Wieseman, Carol D.

    2010-01-01

    The fundamental technical challenge in computational aeroelasticity is the accurate prediction of unsteady aerodynamic phenomena and the effect on the aeroelastic response of a vehicle. Currently, a benchmarking standard for use in validating the accuracy of computational aeroelasticity codes does not exist. Many aeroelastic data sets have been obtained in wind-tunnel and flight testing throughout the world; however, none have been globally presented or accepted as an ideal data set. There are numerous reasons for this. One reason is that often, such aeroelastic data sets focus on the aeroelastic phenomena alone (flutter, for example) and do not contain associated information such as unsteady pressures and time-correlated structural dynamic deflections. Other available data sets focus solely on the unsteady pressures and do not address the aeroelastic phenomena. Other discrepancies can include omission of relevant data, such as flutter frequency and/or the acquisition of only qualitative deflection data. In addition to these content deficiencies, all of the available data sets present both experimental and computational technical challenges. Experimental issues include facility influences, nonlinearities beyond those being modeled, and data processing. From the computational perspective, technical challenges include modeling geometric complexities, coupling between the flow and the structure, grid issues, and boundary conditions. The Aeroelasticity Benchmark Assessment task seeks to examine the existing potential experimental data sets and ultimately choose the one that is viewed as the most suitable for computational benchmarking. An initial computational evaluation of that configuration will then be performed using the Langley-developed computational fluid dynamics (CFD) software FUN3D as part of its code validation process. In addition to the benchmarking activity, this task also includes an examination of future research directions. Researchers within the

  6. NASA Software Engineering Benchmarking Study

    NASA Technical Reports Server (NTRS)

    Rarick, Heather L.; Godfrey, Sara H.; Kelly, John C.; Crumbley, Robert T.; Wifl, Joel M.

    2013-01-01

    To identify best practices for the improvement of software engineering on projects, NASA's Offices of Chief Engineer (OCE) and Safety and Mission Assurance (OSMA) formed a team led by Heather Rarick and Sally Godfrey to conduct this benchmarking study. The primary goals of the study are to identify best practices that: Improve the management and technical development of software intensive systems; Have a track record of successful deployment by aerospace industries, universities [including research and development (R&D) laboratories], and defense services, as well as NASA's own component Centers; and Identify candidate solutions for NASA's software issues. Beginning in the late fall of 2010, focus topics were chosen and interview questions were developed, based on the NASA top software challenges. Between February 2011 and November 2011, the Benchmark Team interviewed a total of 18 organizations, consisting of five NASA Centers, five industry organizations, four defense services organizations, and four university or university R and D laboratory organizations. A software assurance representative also participated in each of the interviews to focus on assurance and software safety best practices. Interviewees provided a wealth of information on each topic area that included: software policy, software acquisition, software assurance, testing, training, maintaining rigor in small projects, metrics, and use of the Capability Maturity Model Integration (CMMI) framework, as well as a number of special topics that came up in the discussions. NASA's software engineering practices compared favorably with the external organizations in most benchmark areas, but in every topic, there were ways in which NASA could improve its practices. Compared to defense services organizations and some of the industry organizations, one of NASA's notable weaknesses involved communication with contractors regarding its policies and requirements for acquired software. One of NASA's strengths

  7. A Heterogeneous Medium Analytical Benchmark

    SciTech Connect

    Ganapol, B.D.

    1999-09-27

    A benchmark, called benchmark BLUE, has been developed for one-group neutral particle (neutron or photon) transport in a one-dimensional sub-critical heterogeneous plane parallel medium with surface illumination. General anisotropic scattering is accommodated through the Green's Function Method (GFM). Numerical Fourier transform inversion is used to generate the required Green's functions, which are kernels to coupled integral equations that give the exiting angular fluxes. The interior scalar flux is then obtained through quadrature. A compound iterative procedure for quadrature order and slab surface source convergence provides highly accurate benchmark-quality results (4 to 5 places of accuracy).

  8. Phase-covariant quantum benchmarks

    SciTech Connect

    Calsamiglia, J.; Aspachs, M.; Munoz-Tapia, R.; Bagan, E.

    2009-05-15

    We give a quantum benchmark for teleportation and quantum storage experiments suited for pure and mixed test states. The benchmark is based on the average fidelity over a family of phase-covariant states and certifies that an experiment cannot be emulated by a classical setup, i.e., by a measure-and-prepare scheme. We give an analytical solution for qubits, which shows important differences from the standard state-estimation approach, and compute the value of the benchmark for coherent and squeezed states, both pure and mixed.

  9. California commercial building energy benchmarking

    SciTech Connect

    Kinney, Satkartar; Piette, Mary Ann

    2003-07-01

    Building energy benchmarking is the comparison of whole-building energy use relative to a set of similar buildings. It provides a useful starting point for individual energy audits and for targeting buildings for energy-saving measures in multiple-site audits. Benchmarking is of interest and practical use to a number of groups. Energy service companies and performance contractors communicate energy savings potential with "typical" and "best-practice" benchmarks while control companies and utilities can provide direct tracking of energy use and combine data from multiple buildings. Benchmarking is also useful in the design stage of a new building or retrofit to determine if a design is relatively efficient. Energy managers and building owners have an ongoing interest in comparing energy performance to others. Large corporations, schools, and government agencies with numerous facilities also use benchmarking methods to compare their buildings to each other. The primary goal of Task 2.1.1 Web-based Benchmarking was the development of a web-based benchmarking tool, dubbed Cal-Arch, for benchmarking energy use in California commercial buildings. While there were several other benchmarking tools available to California consumers prior to the development of Cal-Arch, there were none that were based solely on California data. Most available benchmarking information, including the Energy Star performance rating, were developed using DOE's Commercial Building Energy Consumption Survey (CBECS), which does not provide state-level data. Each database and tool has advantages as well as limitations, such as the number of buildings and the coverage by type, climate regions and end uses. There is considerable commercial interest in benchmarking because it provides an inexpensive method of screening buildings for tune-ups and retrofits. However, private companies who collect and manage consumption data are concerned that the identities of building owners might be revealed and
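The core benchmarking computation is to place one building's energy use within the distribution of its peers. The sketch below is a generic percentile ranking, not the specific method used by Cal-Arch or the Energy Star rating; the sample EUI values are invented for illustration.

```python
def benchmark_percentile(eui, peer_euis):
    """Percentile rank of a building's energy use intensity (EUI, e.g.
    kWh per square foot per year) within a peer group: the percentage of
    peers that use at least as much energy. A high rank means the
    building is relatively efficient for its group."""
    better_than = sum(1 for p in peer_euis if p >= eui)
    return 100.0 * better_than / len(peer_euis)

# Hypothetical peer group of similar commercial buildings.
peers = [42.0, 55.0, 61.0, 48.0, 70.0, 39.0, 52.0, 66.0]
rank = benchmark_percentile(50.0, peers)  # 5 of 8 peers use >= 50.0
```

Real benchmarking tools additionally normalize for building type, floor area, climate region, and end uses before making this comparison.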

  10. A Survey of Mental Health Research Priorities in Low- and Middle-Income Countries of Africa, Asia, and Latin America and the Caribbean

    PubMed Central

    Sharan, P; Gallo, C; Gureje, O; Lamberte, E; Mari, JJ; Mazzotti, G; Patel, V; Swartz, L; Olifson, S; Levav, I; de Francisco, A; Saxena, S

    2012-01-01

    Background Studies suggest a paucity of and lack of prioritization in mental health research output from low- and middle-income (LAMI) countries. Aims To investigate research priorities in mental health among researchers and other stakeholders in LAMI countries. Method A two-stage design that included enumeration (through literature searches and snowball technique) of researchers and stakeholders in 114 countries of Africa, Asia and Latin America and the Caribbean; and a mail survey on priority research. Results The study revealed broad agreement between researchers and stakeholders and across regions regarding the priorities for mental health research; however, stakeholders did not consider researchers' personal interest as an important criterion for prioritizing research. Studies on epidemiology (burden and risk factors), health systems, and social science were the highest ranked types of needed research. The three prioritized disorders were depression/anxiety, substance use disorders, and psychoses, while prioritized population groups were children and adolescents, women, and persons exposed to violence/trauma. Important criteria for prioritizing research were burden of disease, social justice, and availability of funds. Researchers' and stakeholders' priorities were largely consistent with burden of disease estimates (however, suicide was under-prioritized) and partly congruent with the research projects of the responding researchers. Conclusions The broad agreement found between a large and reasonably representative group of active researchers and stakeholders provides a basis for generating policy and service relevant evidence for global mental health. PMID:19794206

  11. A comparative analysis of biomedical research ethics regulation systems in Europe and Latin America with regard to the protection of human subjects.

    PubMed

    Lamas, Eugenia; Ferrer, Marcela; Molina, Alberto; Salinas, Rodrigo; Hevia, Adriana; Bota, Alexandre; Feinholz, Dafna; Fuchs, Michael; Schramm, Roland; Tealdi, Juan-Carlos; Zorrilla, Sergio

    2010-12-01

    The European project European and Latin American Systems of Ethics Regulation of Biomedical Research Project (EULABOR) has carried out the first comparative analysis of ethics regulation systems for biomedical research in seven countries in Europe and Latin America, evaluating their roles in the protection of human subjects. We developed a conceptual and methodological framework defining 'ethics regulation system for biomedical research' as a set of actors, institutions, codes and laws involved in overseeing the ethics of biomedical research on humans. This framework allowed us to develop comprehensive national reports by conducting semi-structured interviews with key informants. These reports were summarised and analysed in a comparative analysis. The study showed that the regulatory framework for clinical research in these countries differs in scope. It showed that despite the different political contexts, actors involved and motivations for creating the regulation, in most of the studied countries it was the government who took the lead in setting up the system. The study also showed that Europe and Latin America are similar regarding national bodies and research ethics committees, but the Brazilian system has strong and noteworthy specificities.

  12. Data-Intensive Benchmarking Suite

    SciTech Connect

    2008-11-26

    The Data-Intensive Benchmark Suite is a set of programs written for the study of data- or storage-intensive science and engineering problems. The benchmark sets cover: general graph searching (basic and Hadoop Map/Reduce breadth-first search), genome sequence searching, HTTP request classification (basic and Hadoop Map/Reduce), low-level data communication, and storage device micro-benchmarking.
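As an illustration of the graph-searching category, the breadth-first search kernel at the heart of such benchmarks looks like the following. This is a generic sketch, not the suite's own code or data format; the small example graph is invented.

```python
from collections import deque

def bfs_levels(adj, source):
    """Breadth-first search over an adjacency-list graph, returning the
    hop distance from `source` for every reachable vertex. Data-intensive
    benchmarks stress this kernel with graphs far larger than memory-
    friendly toy examples."""
    dist = {source: 0}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                frontier.append(v)
    return dist

# Directed toy graph; vertex 5 points into the component but is unreachable from 0.
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 5: [0]}
d = bfs_levels(graph, 0)
```

The Hadoop Map/Reduce variant mentioned in the abstract expands each frontier level as a distributed map step rather than a serial queue.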

  13. Benchmarking hypercube hardware and software

    NASA Technical Reports Server (NTRS)

    Grunwald, Dirk C.; Reed, Daniel A.

    1986-01-01

    It was long a truism in computer systems design that balanced systems achieve the best performance. Message passing parallel processors are no different. To quantify the balance of a hypercube design, an experimental methodology was developed and the associated suite of benchmarks was applied to several existing hypercubes. The benchmark suite includes tests of both processor speed in the absence of internode communication and message transmission speed as a function of communication patterns.

  14. Pressing Forward: Increasing and Expanding Rigor and Relevance in America's High Schools. Research on High School and Beyond

    ERIC Educational Resources Information Center

    Smerdon, Becky, Ed.; Borman, Kathryn M., Ed.

    2012-01-01

    Pressing Forward: Increasing and Expanding Rigor and Relevance in America's High Schools is organized to place secondary education, specifically the goals of preparing young adults to be college and career ready, in contemporary perspective, emphasizing the changing global economy and trends in policy and practice. High school students must be…

  15. Violence in America. Proceedings of the Southwest Regional Research Conference (Dallas, Texas, November 6-8, 1986).

    ERIC Educational Resources Information Center

    Reed, Annette Zimmern, Ed.; Sullivan, Jane C., Ed.

    The conference reported in this document consisted of three symposia and eight workshops each concerned with a different area of violence in America. The document includes an introduction by Annette Zimmern Reed and opening remarks by Dallas mayor Starke Taylor and his wife, Carolyn Taylor. Information from the three symposia is given in the areas…

  16. National Conference on Asians in America and Asian Americans. Sponsored by the Asian American Assembly for Policy Research.

    ERIC Educational Resources Information Center

    City Univ. of New York, NY. City Coll. Dept. of Asian Studies.

    In this report, the activities of a conference on Asian Americans and Asians in America are summarized and papers presented are reprinted. Topics considered in the papers include education, employment, affirmative action, identity, pluralism, Chinese cultural background, teaching of English, cross-cultural situations, development of comprehensive…

  17. Muslims in America: An Exploratory Study of Universal and Mental Health Values. Final Report for 1992-1994 Research Project.

    ERIC Educational Resources Information Center

    Kelly, Eugene W., Jr.; And Others

    Muslims now constitute a large and growing segment of American society. This project was an exploratory study whose purpose was to obtain a preliminary picture of counseling-relevant values of Muslims in America. The study obtained a preliminary value profile of American Muslims in two significant value areas: universal values and mental health…

  18. Benchmarks for GADRAS performance validation.

    SciTech Connect

    Mattingly, John K.; Mitchell, Dean James; Rhykerd, Charles L., Jr.

    2009-09-01

    The performance of the Gamma Detector Response and Analysis Software (GADRAS) was validated by comparing GADRAS model results to experimental measurements for a series of benchmark sources. Sources for the benchmark include a plutonium metal sphere, bare and shielded in polyethylene, plutonium oxide in cans, a highly enriched uranium sphere, bare and shielded in polyethylene, a depleted uranium shell and spheres, and a natural uranium sphere. The benchmark experimental data were previously acquired and consist of careful collection of background and calibration source spectra along with the source spectra. The calibration data were fit with GADRAS to determine response functions for the detector in each experiment. A one-dimensional model (pie chart) was constructed for each source based on the dimensions of the benchmark source. The GADRAS code made a forward calculation from each model to predict the radiation spectrum for the detector used in the benchmark experiment. The comparisons between the GADRAS calculation and the experimental measurements are excellent, validating that GADRAS can correctly predict the radiation spectra for these well-defined benchmark sources.
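The abstract does not state the quantitative metric used to compare GADRAS predictions with the measured spectra; one common choice for counted spectra, shown here purely as an illustration with made-up channel counts, is a reduced chi-square with Poisson variances.

```python
import numpy as np

def reduced_chi_square(measured, predicted):
    """Goodness of fit between a measured gamma spectrum and a
    forward-model prediction, channel by channel, assuming Poisson
    counting statistics (variance ~ predicted counts per channel)."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    chi2 = np.sum((measured - predicted) ** 2 / predicted)
    return chi2 / len(measured)

# Hypothetical 4-channel spectra for demonstration.
spectrum = np.array([100.0, 250.0, 80.0, 40.0])
perfect = reduced_chi_square(spectrum, spectrum)          # identical spectra
offset = reduced_chi_square([110.0, 240.0, 90.0, 30.0], spectrum)
```

A value near 1 indicates agreement consistent with counting statistics; a proper analysis would also subtract background and account for detector response, as the benchmark experiments did.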

  19. Capacity Building for Sustainable Seismological Networks in the Americas: A Pan-American Advanced Studies Institute on New Frontiers in Seismological Research

    NASA Astrophysics Data System (ADS)

    Cabello, O. A.; Meltzer, A.; Sandvol, E. A.; Yepes, H.; Ruiz, M. C.; Barrientos, S. E.; Willemann, R. J.

    2011-12-01

    During July 2011, a Pan-American Advanced Studies Institute, "New Frontiers in Seismological Research: Sustainable Networks, Earthquake Source Parameters, and Earth Structure" was conducted in Quito Ecuador with participants from the US, Central, and South America, and the Caribbean at early stages in their scientific careers. This advanced studies institute was imparted by fifteen volunteer senior faculty and investigators from the U.S. and the Americas. The curriculum addressed the importance of developing and maintaining modern seismological observatories, reviewed the principles of sustainable network operations, and explored recent advances in the analysis of seismological data in support of basic research, education, and hazard mitigation. An additional goal was to develop future international research collaborations. The Institute engaged graduate students, post-doctoral students, and new faculty from across the Americas in an interactive collaborative learning environment including modules on double-difference earthquake location and tomography, regional centroid-moment tensors, and event-based and ambient noise surface wave dispersion and tomography. Under the faculty guidance, participants started promising research projects about surface wave tomography in southeastern Brazil, near the Chilean triple junction, in central Chilean Andes, at the Peru-Chile border, within Peru, at a volcano in Ecuador, in the Caribbean Sea region, and near the Mendocino triple junction. Other participants started projects about moment tensors of earthquakes in or near Brazil, Chile and Argentina, Costa Rica, Ecuador, Puerto Rico, western Mexico, and northern Mexico. In order to track the progress of the participants and measure the overall effectiveness of the Institute a reunion is planned where the PASI alumni will present the result of their research that was initiated in Quito

  20. How to avoid 'death by benchmarking'.

    PubMed

    Wofford, Dave; Libby, Darin

    2015-08-01

    Hospitals and health systems should adopt four key principles and practices when applying benchmarks to determine physician compensation: Acknowledge that a lower percentile may be appropriate. Use the median as the all-in benchmark. Use peer benchmarks when available. Use alternative benchmarks.

  1. Benchmarking for the Learning and Skills Sector.

    ERIC Educational Resources Information Center

    Owen, Jane

    This document is designed to introduce practitioners in the United Kingdom's learning and skills sector to the principles and practice of benchmarking. The first section defines benchmarking and differentiates metric, diagnostic, and process benchmarking. The remainder of the booklet details the following steps of the benchmarking process: (1) get…

  2. Benchmark Solutions for Computational Aeroacoustics (CAA) Code Validation

    NASA Technical Reports Server (NTRS)

    Scott, James R.

    2004-01-01

    NASA has conducted a series of Computational Aeroacoustics (CAA) Workshops on Benchmark Problems to develop a set of realistic CAA problems that can be used for code validation. In the Third (1999) and Fourth (2003) Workshops, the single airfoil gust response problem, with real geometry effects, was included as one of the benchmark problems. Respondents were asked to calculate the airfoil RMS pressure and far-field acoustic intensity for different airfoil geometries and a wide range of gust frequencies. This paper presents the validated solutions that have been obtained for the benchmark problem, and in addition, compares them with classical flat plate results. It is seen that airfoil geometry has a strong effect on the airfoil unsteady pressure, and a significant effect on the far-field acoustic intensity. Those parts of the benchmark problem that have not yet been adequately solved are identified and presented as a challenge to the CAA research community.

  3. Using benchmarks for radiation testing of microprocessors and FPGAs

    DOE PAGES

    Quinn, Heather; Robinson, William H.; Rech, Paolo; ...

    2015-12-01

    Performance benchmarks have been used over the years to compare different systems. These benchmarks can be useful for researchers trying to determine how changes to the technology, architecture, or compiler affect the system's performance. No such standard exists for systems deployed into high radiation environments, making it difficult to assess whether changes in the fabrication process, circuitry, architecture, or software affect reliability or radiation sensitivity. In this paper, we propose a benchmark suite for high-reliability systems that is designed for field-programmable gate arrays and microprocessors. As a result, we describe the development process and report neutron test data for the hardware and software benchmarks.

  4. BN-600 full MOX core benchmark analysis.

    SciTech Connect

    Kim, Y. I.; Hill, R. N.; Grimm, K.; Rimpault, G.; Newton, T.; Li, Z. H.; Rineiski, A.; Mohanakrishnan, P.; Ishikawa, M.; Lee, K. B.; Danilytchev, A.; Stogov, V.; Nuclear Engineering Division; International Atomic Energy Agency; CEA; Serco Assurance; China Inst. of Atomic Energy; Forschungszentrum Karlsruhe; Indira Gandhi Centre for Atomic Research; Japan Nuclear Cycle Development Inst.; Korea Atomic Energy Research Inst.; Inst. of Physics and Power Engineering

    2004-01-01

    As a follow-up of the BN-600 hybrid core benchmark, a full MOX core benchmark was performed within the framework of the IAEA co-ordinated research project. Discrepancies between the values of main reactivity coefficients obtained by the participants for the BN-600 full MOX core benchmark appear to be larger than those in the previous hybrid core benchmarks on traditional core configurations. This arises due to uncertainties in the proper modelling of the axial sodium plenum above the core. It was recognized that the sodium density coefficient strongly depends on the core model configuration of interest (hybrid core vs. fully MOX fuelled core with sodium plenum above the core) in conjunction with the calculation method (diffusion vs. transport theory). The effects of the discrepancies revealed between the participants' results on the ULOF and UTOP transient behaviours of the BN-600 full MOX core were investigated in simplified transient analyses. Generally the diffusion approximation predicts more benign consequences for the ULOF accident but more hazardous ones for the UTOP accident when compared with the transport theory results. The heterogeneity effect does not significantly affect the simulation of the transient. The comparison of the transient analyses results concluded that the fuel Doppler coefficient and the sodium density coefficient are the two most important coefficients in understanding the ULOF transient behaviour. In particular, the uncertainty in evaluating the sodium density coefficient distribution has the largest impact on the description of reactor dynamics. This is because the maximum sodium temperature rise takes place at the top of the core and in the sodium plenum.

  5. A Quantitative Methodology for Determining the Critical Benchmarks for Project 2061 Strand Maps

    ERIC Educational Resources Information Center

    Kuhn, G.

    2008-01-01

    The American Association for the Advancement of Science (AAAS) was tasked with identifying the key science concepts for science literacy in K-12 students in America (AAAS, 1990, 1993). The AAAS Atlas of Science Literacy (2001) has organized roughly half of these science concepts or benchmarks into fifty flow charts. Each flow chart or strand map…

  6. Benchmarking Teacher Practice in Queensland Transition Programs for Youth with Intellectual Disability and Autism

    ERIC Educational Resources Information Center

    Beamish, Wendi; Meadows, Denis; Davies, Michael

    2012-01-01

    Extensive work has been done in North America to examine practices recommended to facilitate postschool transitions for youth with disabilities. Few studies in Australia, however, have investigated these practices. This study drew on the Taxonomy for Transition Programming developed by Kohler to benchmark practice at government and nongovernment…

  7. Building America Best Practices Series Volume 12: Builders Challenge Guide to 40% Whole-House Energy Savings in the Cold and Very Cold Climates

    SciTech Connect

    Baechler, Michael C.; Gilbride, Theresa L.; Hefty, Marye G.; Cole, Pamala C.; Love, Pat M.

    2011-02-01

    This best practices guide is the twelfth in a series of guides for builders produced by PNNL for the U.S. Department of Energy’s Building America program. This guidebook is a resource to help builders design and construct homes that are among the most energy-efficient available, while addressing issues such as building durability, indoor air quality, and occupant health, safety, and comfort. With the measures described in this guide, builders in the cold and very cold climates can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers. The best practices described in this document are based on the results of research and demonstration projects conducted by Building America’s research teams. Building America brings together the nation’s leading building scientists with over 300 production builders to develop, test, and apply innovative, energy-efficient construction practices. Building America builders have found they can build homes that meet these aggressive energy-efficiency goals at no net increased costs to the homeowners. Currently, Building America homes achieve energy savings 40% greater than the Building America benchmark home (a home built to mid-1990s building practices roughly equivalent to the 1993 Model Energy Code). The recommendations in this document meet or exceed the requirements of the 2009 IECC and 2009 IRC, and those requirements are highlighted in the text. This document will be distributed via the DOE Building America website: www.buildingamerica.gov.

  8. Benchmark simulation models, quo vadis?

    PubMed

    Jeppsson, U; Alex, J; Batstone, D J; Benedetti, L; Comas, J; Copp, J B; Corominas, L; Flores-Alsina, X; Gernaey, K V; Nopens, I; Pons, M-N; Rodríguez-Roda, I; Rosen, C; Steyer, J-P; Vanrolleghem, P A; Volcke, E I P; Vrecko, D

    2013-01-01

    As the work of the IWA Task Group on Benchmarking of Control Strategies for wastewater treatment plants (WWTPs) is coming to an end, it is essential to disseminate the knowledge gained. For this reason, all authors of the IWA Scientific and Technical Report on benchmarking have come together to provide their insights, highlighting areas where knowledge may still be deficient and where new opportunities are emerging, and to propose potential avenues for future development and application of the general benchmarking framework and its associated tools. The paper focuses on the topics of temporal and spatial extension, process modifications within the WWTP, the realism of models, control strategy extensions and the potential for new evaluation tools within the existing benchmark system. We find that there are major opportunities for application within all of these areas, either from existing work already being done within the context of the benchmarking simulation models (BSMs) or applicable work in the wider literature. Of key importance is increasing capability, usability and transparency of the BSM package while avoiding unnecessary complexity.

  9. Textbook America.

    ERIC Educational Resources Information Center

    Karp, Walter

    1980-01-01

    Focuses on how political attitudes have been influenced by American history textbooks at various times throughout history. Excerpts from traditional and revisionist textbooks are presented, with emphasis on "America Revised" by Frances FitzGerald. Journal available from Harper's Magazine Co., 2 Park Ave., New York, NY 10016. (DB)

  10. Latin America.

    ERIC Educational Resources Information Center

    Greenfield, Gerald Michael

    1986-01-01

    Notes the problematical elements of diversity within Latin America, establishes priorities for the social studies curriculum, and reviews what should be taught about its geography, resources, people, religion, customs, economics, politics, history, and international relationships. Lists Latin American Studies programs and published instructional…

  11. America Revising.

    ERIC Educational Resources Information Center

    Marty, Myron

    1982-01-01

    Reviews Frances FitzGerald's book, "America Revised," and discusses FitzGerald's critique of recent revisions in secondary-level U.S. history textbooks. The author advocates the implementation of a core curriculum for U.S. history which emphasizes political and local history. (AM)

  12. Empirical Benchmarks of Hidden Bias in Educational Research: Implication for Assessing How well Propensity Score Methods Approximate Experiments and Conducting Sensitivity Analysis

    ERIC Educational Resources Information Center

    Dong, Nianbo; Lipsey, Mark

    2014-01-01

    When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlying assumptions required for propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…

  13. FLOWTRAN-TF code benchmarking

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss Of Coolant Accident (LOCA). A description of the code is given by Flach et al. (1990). This report provides benchmarking results for the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit (Smith et al., 1990a; 1990b). Individual constitutive relations are benchmarked in Sections 2 through 5 while in Sections 6 and 7 integral code benchmarking results are presented. An overall assessment of FLOWTRAN-TF for its intended use in computing the ECS power limit completes the document.

  14. Building America Top Innovations 2013 Profile – Building America Solution Center

    SciTech Connect

    none,

    2013-09-01

    This Top Innovation profile provides information about the Building America Solution Center created by Pacific Northwest National Laboratory, a web tool connecting users to thousands of pieces of building science information developed by DOE’s Building America research partners.

  15. Radiation Detection Computational Benchmark Scenarios

    SciTech Connect

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.; McDonald, Ben S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty and to include gamma transport, neutron transport, or both and represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations were assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for

  16. Towards Systematic Benchmarking of Climate Model Performance

    NASA Astrophysics Data System (ADS)

    Gleckler, P. J.

    2014-12-01

    The process by which climate models are evaluated has evolved substantially over the past decade, with the Coupled Model Intercomparison Project (CMIP) serving as a centralizing activity for coordinating model experimentation and enabling research. Scientists with a broad spectrum of expertise have contributed to the CMIP model evaluation process, resulting in many hundreds of publications that have served as a key resource for the IPCC process. For several reasons, efforts are now underway to further systematize some aspects of the model evaluation process. First, some model evaluation can now be considered routine and should not require "re-inventing the wheel" or a journal publication simply to update results with newer models. Second, the benefit of CMIP research to model development has not been optimal because the publication of results generally takes several years and is usually not reproducible for benchmarking newer model versions. And third, there are now hundreds of model versions and many thousands of simulations, but there is no community-based mechanism for routinely monitoring model performance changes. An important change in the design of CMIP6 can help address these limitations. CMIP6 will include a small set of standardized experiments as an ongoing exercise (CMIP "DECK": ongoing Diagnostic, Evaluation and Characterization of Klima), so that modeling groups can submit them at any time and not be overly constrained by deadlines. In this presentation, efforts to establish routine benchmarking of existing and future CMIP simulations will be described. To date, some benchmarking tools have been made available to all CMIP modeling groups to enable them to readily compare with CMIP5 simulations during the model development process. A natural extension of this effort is to make results from all CMIP simulations widely available, including the results from newer models as soon as the simulations become available for research.
Making the results from routine

  17. Benchmark Evaluation of the NRAD Reactor LEU Core Startup Measurements

    SciTech Connect

    J. D. Bess; T. L. Maddock; M. A. Marshall

    2011-09-01

    The Neutron Radiography (NRAD) reactor is a 250-kW TRIGA-(Training, Research, Isotope Production, General Atomics)-conversion-type reactor at the Idaho National Laboratory; it is primarily used for neutron radiography analysis of irradiated and unirradiated fuels and materials. The NRAD reactor was converted from HEU to LEU fuel with 60 fuel elements and brought critical on March 31, 2010. This configuration of the NRAD reactor has been evaluated as an acceptable benchmark experiment and is available in the 2011 editions of the International Handbook of Evaluated Criticality Safety Benchmark Experiments (ICSBEP Handbook) and the International Handbook of Evaluated Reactor Physics Benchmark Experiments (IRPhEP Handbook). Significant effort went into precisely characterizing all aspects of the reactor core dimensions and material properties; detailed analyses of reactor parameters minimized experimental uncertainties. The largest contributors to the total benchmark uncertainty were the 234U, 236U, Er, and Hf content in the fuel; the manganese content in the stainless steel cladding; and the unknown level of water saturation in the graphite reflector blocks. A simplified benchmark model of the NRAD reactor was prepared with a keff of 1.0012 ± 0.0029 (1σ). Monte Carlo calculations with MCNP5 and KENO-VI and various neutron cross section libraries were performed and compared with the benchmark eigenvalue for the 60-fuel-element core configuration; all calculated eigenvalues are between 0.3 and 0.8% greater than the benchmark value. Benchmark evaluations of the NRAD reactor are beneficial in understanding biases and uncertainties affecting criticality safety analyses of storage, handling, or transportation applications with LEU-Er-Zr-H fuel.
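The code-versus-benchmark comparison described above reduces to a relative-bias computation against the benchmark eigenvalue. The sketch below is illustrative only: the benchmark keff of 1.0012 is taken from the abstract, but the calculated eigenvalues and code labels are hypothetical placeholders, not results from the evaluation.

```python
# Illustrative sketch: relative bias of calculated eigenvalues against a
# benchmark keff, as done when validating codes against ICSBEP/IRPhEP models.
# The benchmark value is from the abstract; calculated values are hypothetical.

K_BENCHMARK = 1.0012  # benchmark model keff (+/- 0.0029, 1 sigma)

def eigenvalue_bias_percent(k_calc: float) -> float:
    """Relative bias of a calculated keff against the benchmark value, in %."""
    return 100.0 * (k_calc - K_BENCHMARK) / K_BENCHMARK

# Hypothetical calculated eigenvalues from two code/library combinations:
for code, k in [("code A / library X", 1.0065), ("code B / library Y", 1.0090)]:
    print(f"{code}: bias = {eigenvalue_bias_percent(k):+.2f}%")
```

A bias in this form (0.3 to 0.8% in the evaluation) is what allows the benchmark to expose systematic differences between transport codes and cross section libraries.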

  18. Current Reactor Physics Benchmark Activities at the Idaho National Laboratory

    SciTech Connect

    John D. Bess; Margaret A. Marshall; Mackenzie L. Gorham; Joseph Christensen; James C. Turnbull; Kim Clark

    2011-11-01

    The International Reactor Physics Experiment Evaluation Project (IRPhEP) [1] and the International Criticality Safety Benchmark Evaluation Project (ICSBEP) [2] were established to preserve integral reactor physics and criticality experiment data for present and future research. These valuable assets provide the basis for recording, developing, and validating our integral nuclear data, and experimental and computational methods. These projects are managed through the Idaho National Laboratory (INL) and the Organisation for Economic Co-operation and Development Nuclear Energy Agency (OECD-NEA). Staff and students at the Department of Energy - Idaho (DOE-ID) and INL are engaged in the development of benchmarks to support ongoing research activities. These benchmarks include reactors or assemblies that support Next Generation Nuclear Plant (NGNP) research, space nuclear Fission Surface Power System (FSPS) design validation, and currently operational facilities in Southeastern Idaho.

  19. Bolivia. America = Las Americas [Series].

    ERIC Educational Resources Information Center

    Toro, Leonor; Avery, Robert S.

    Written for teachers to use with migrant children in elementary grades and to highlight the many Americas, this bilingual English/Spanish social studies resource booklet provides historical and cultural information on Bolivia. A table of contents indicates the language--Spanish or English--in which the topics are written. The quarterly provides an…

  20. Colombia. America = Las Americas [Series].

    ERIC Educational Resources Information Center

    Toro, Leonor; Doran, Sandra

    Written for teachers to use with migrant children in elementary grades to highlight the many Americas, this bilingual English/Spanish social studies resource booklet provides historical and cultural background information on Colombia and features biographies of Colombian leaders and artists. A table of contents indicates the language--Spanish or…

  1. PRISMATIC CORE COUPLED TRANSIENT BENCHMARK

    SciTech Connect

    J. Ortensi; M.A. Pope; G. Strydom; R.S. Sen; M.D. DeHart; H.D. Gougar; C. Ellis; A. Baxter; V. Seker; T.J. Downar; K. Vierow; K. Ivanov

    2011-06-01

    The Prismatic Modular Reactor (PMR) is one of the High Temperature Reactor (HTR) design concepts that have existed for some time. Several prismatic units have operated in the world (DRAGON, Fort St. Vrain, Peach Bottom) and one unit is still in operation (HTTR). The deterministic neutronics and thermal-fluids transient analysis tools and methods currently available for the design and analysis of PMRs have lagged behind the state of the art compared to LWR technologies. This has motivated the development of more accurate and efficient tools for the design and safety evaluations of the PMR. In addition to the work invested in new methods, it is essential to develop appropriate benchmarks to verify and validate the new methods in computer codes. The purpose of this benchmark is to establish a well-defined problem, based on a common given set of data, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events. The benchmark-working group is currently seeking OECD/NEA sponsorship. This benchmark is being pursued and is heavily based on the success of the PBMR-400 exercise.

  2. Engine Benchmarking - Final CRADA Report

    SciTech Connect

    Wallner, Thomas

    2016-01-01

    Detailed benchmarking of the powertrains of three light-duty vehicles was performed. Results were presented and provided to CRADA partners. The vehicles included a MY2011 Audi A4, a MY2012 Mini Cooper and a MY2014 Nissan Versa.

  3. A comparison of five benchmarks

    NASA Technical Reports Server (NTRS)

    Huss, Janice E.; Pennline, James A.

    1987-01-01

    Five benchmark programs were obtained and run on the NASA Lewis CRAY X-MP/24. A comparison was made between the programs codes and between the methods for calculating performance figures. Several multitasking jobs were run to gain experience in how parallel performance is measured.

  4. Benchmark Lisp And Ada Programs

    NASA Technical Reports Server (NTRS)

    Davis, Gloria; Galant, David; Lim, Raymond; Stutz, John; Gibson, J.; Raghavan, B.; Cheesema, P.; Taylor, W.

    1992-01-01

    Suite of nonparallel benchmark programs, ELAPSE, designed for three tests: comparing efficiency of computer processing via Lisp vs. Ada; comparing efficiencies of several computers processing via Lisp; or comparing several computers processing via Ada. Tests efficiency which computer executes routines in each language. Available for computer equipped with validated Ada compiler and/or Common Lisp system.

  5. Processor Emulator with Benchmark Applications

    SciTech Connect

    Lloyd, G. Scott; Pearce, Roger; Gokhale, Maya

    2015-11-13

    A processor emulator and a suite of benchmark applications have been developed to assist in characterizing the performance of data-centric workloads on current and future computer architectures. Some of the applications have been collected from other open source projects. For more details on the emulator and an example of its usage, see reference [1].

  6. Benchmark 3 - Incremental sheet forming

    NASA Astrophysics Data System (ADS)

    Elford, Michael; Saha, Pradip; Seong, Daeyong; Haque, MD Ziaul; Yoon, Jeong Whan

    2013-12-01

    Benchmark-3 is designed to predict strains, punch load and deformed profile after spring-back during single tool incremental sheet forming. AA 7075-O material has been selected. A cone shape is formed to 45 mm depth with an angle of 45°. Problem description, material properties, and simulation reports with experimental data are summarized.

  7. Benchmarks for measurement of duplicate detection methods in nucleotide databases.

    PubMed

    Chen, Qingyu; Zobel, Justin; Verspoor, Karin

    2017-01-08

    Duplication of information in databases is a major data quality challenge. The presence of duplicates, implying either redundancy or inconsistency, can have a range of impacts on the quality of analyses that use the data. To provide a sound basis for research on this issue in databases of nucleotide sequences, we have developed new, large-scale validated collections of duplicates, which can be used to test the effectiveness of duplicate detection methods. Previous collections were either designed primarily to test efficiency, or contained only a limited number of duplicates of limited kinds. To date, duplicate detection methods have been evaluated on separate, inconsistent benchmarks, leading to results that cannot be compared and, due to limitations of the benchmarks, of questionable generality. In this study, we present three nucleotide sequence database benchmarks, based on information drawn from a range of resources, including information derived from mapping to two data sections within the UniProt Knowledgebase (UniProtKB), UniProtKB/Swiss-Prot and UniProtKB/TrEMBL. Each benchmark has distinct characteristics. We quantify these characteristics and argue for their complementary value in evaluation. The benchmarks collectively contain a vast number of validated biological duplicates; the largest has nearly half a billion duplicate pairs (although this is probably only a tiny fraction of the total that is present). They are also the first benchmarks targeting the primary nucleotide databases. The records include the 21 most heavily studied organisms in molecular biology research. Our quantitative analysis shows that duplicates in the different benchmarks, and in different organisms, have different characteristics. It is thus unreliable to evaluate duplicate detection methods against any single benchmark. 
For example, the benchmark derived from UniProtKB/Swiss-Prot mappings identifies more diverse types of duplicates, showing the importance of expert curation, but
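Evaluating a duplicate detection method against a benchmark of validated duplicate pairs, as described above, typically reduces to pairwise precision and recall. The sketch below is an assumption about the general evaluation setup, not code from the paper; the record IDs are hypothetical and are not drawn from the nucleotide benchmarks themselves.

```python
# Illustrative sketch: pairwise precision/recall of a duplicate detection
# method against a benchmark of validated duplicate pairs. Record IDs are
# hypothetical; pairs are stored as frozensets so order does not matter.

def precision_recall(predicted: set, benchmark: set) -> tuple:
    """Return (precision, recall) of predicted pairs vs. benchmark pairs."""
    tp = len(predicted & benchmark)  # correctly detected duplicate pairs
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(benchmark) if benchmark else 0.0
    return precision, recall

benchmark = {frozenset(p) for p in [("A1", "A2"), ("B1", "B2"),
                                    ("C1", "C2"), ("C1", "C3")]}
predicted = {frozenset(p) for p in [("A1", "A2"), ("B1", "B9"), ("C1", "C2")]}

p, r = precision_recall(predicted, benchmark)
print(f"precision={p:.2f} recall={r:.2f}")  # 2 of 3 predictions correct; 2 of 4 pairs found
```

Because the benchmarks differ in which kinds of duplicates they contain, a method's precision and recall can shift substantially from one benchmark to another, which is the paper's argument against evaluating on any single benchmark.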

  8. Benchmarking on Tsunami Currents with ComMIT

    NASA Astrophysics Data System (ADS)

    Sharghi vand, N.; Kanoglu, U.

    2015-12-01

    Before the 2004 Indian Ocean tsunami, there were no standards for the validation and verification of tsunami numerical models, even though numerical models were being used for inundation mapping efforts, evaluation of critical structures, etc., without validation and verification. After 2004, the NOAA Center for Tsunami Research (NCTR) established standards for the validation and verification of tsunami numerical models (Synolakis et al. 2008 Pure Appl. Geophys. 165, 2197-2228), which are used in the evaluation of critical structures such as nuclear power plants against tsunami attack. NCTR presented analytical, experimental and field benchmark problems aimed at estimating maximum runup, which are widely accepted by the community. Recently, benchmark problems were suggested by the US National Tsunami Hazard Mitigation Program Mapping & Modeling Benchmarking Workshop: Tsunami Currents, held February 9-10, 2015 in Portland, Oregon, USA (http://nws.weather.gov/nthmp/index.html). These benchmark problems concentrate on the validation and verification of tsunami numerical models with respect to tsunami currents. Three of the benchmark problems were: current measurements of the Japan 2011 tsunami in Hilo Harbor, Hawaii, USA and in Tauranga Harbor, New Zealand, and a single long-period wave propagating onto a small-scale experimental model of the town of Seaside, Oregon, USA. These benchmark problems were implemented in the Community Modeling Interface for Tsunamis (ComMIT) (Titov et al. 2011 Pure Appl. Geophys. 168, 2121-2131), a user-friendly interface developed by NCTR to the validated and verified Method of Splitting Tsunami (MOST) model (Titov and Synolakis 1995 J. Waterw. Port Coastal Ocean Eng. 121, 308-316). The modeling results are compared with the required benchmark data, show good agreement, and are discussed. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant

  9. Junctional epidermolysis bullosa incidence and survival: 5-year experience of the Dystrophic Epidermolysis Bullosa Research Association of America (DebRA) nurse educator, 2007 to 2011.

    PubMed

    Kelly-Mancuso, Geraldine; Kopelan, Brett; Azizkhan, Richard G; Lucky, Anne W

    2014-01-01

    Junctional epidermolysis bullosa (JEB) is a particularly devastating type of epidermolysis bullosa, especially in the newborn period. Data about the number of new cases of JEB in the United States were collected from the records of the Dystrophic Epidermolysis Bullosa Research Association of America (DebRA) nurse educator. Seventy-one children with JEB were reported to have been born in the 5 years between 2007 and 2011, reflecting an incidence of at least 3.59 per million per year, significantly higher than previously estimated (2.04 per million). There was a high prevalence of morbidity and infant mortality of at least 73%, as 52 of the 71 cases proved fatal by June 2012. These data emphasize the need for future research to develop treatment and ultimately a cure for this disorder.

  10. Benchmark specifications for EBR-II shutdown heat removal tests

    SciTech Connect

    Sofu, T.; Briggs, L. L.

    2012-07-01

    Argonne National Laboratory (ANL) is hosting an IAEA-coordinated research project on benchmark analyses of sodium-cooled fast reactor passive safety tests performed at the Experimental Breeder Reactor-II (EBR-II). The benchmark project involves analysis of a protected and an unprotected loss of flow tests conducted during an extensive testing program within the framework of the U.S. Integral Fast Reactor program to demonstrate the inherently safety features of EBR-II as a pool-type, sodium-cooled fast reactor prototype. The project is intended to improve the participants' design and safety analysis capabilities for sodium-cooled fast reactors through validation and qualification of safety analysis codes and methods. This paper provides a description of the EBR-II tests included in the program, and outlines the benchmark specifications being prepared to support the IAEA-coordinated research project. (authors)

  11. Pescara benchmarks: nonlinear identification

    NASA Astrophysics Data System (ADS)

    Gandino, E.; Garibaldi, L.; Marchesiello, S.

    2011-07-01

    Recent nonlinear methods are suitable for identifying large systems with lumped nonlinearities, but in practice most structural nonlinearities are distributed, and an ideal nonlinear identification method should cater for them as well. In order to extend the current NSI method so that it can also be applied to realistic large engineering structures, a modal counterpart of the method is proposed in this paper. The modal NSI technique is applied on one of the reinforced concrete beams that have been tested in Pescara, under the project titled "Monitoring and diagnostics of railway bridges by means of the analysis of the dynamic response due to train crossing", financed by the Italian Ministry of Research. The beam showed a softening nonlinear behaviour, so the nonlinearity concerning the first mode is characterized and its force contribution is quantified. Moreover, estimates for the modal parameters are obtained and the model is validated by comparing the measured and the reconstructed output. The identified estimates are also used to accurately predict the behaviour of the same beam when subject to different initial conditions.

  12. Thermal Performance Benchmarking

    SciTech Connect

    Feng, Xuhui; Moreno, Gilbert; Bennion, Kevin

    2016-06-07

    The goal for this project is to thoroughly characterize the thermal performance of state-of-the-art (SOA) in-production automotive power electronics and electric motor thermal management systems. Information obtained from these studies will be used to: evaluate advantages and disadvantages of different thermal management strategies; establish baseline metrics for the thermal management systems; identify methods of improvement to advance the SOA; increase the publicly available information related to automotive traction-drive thermal management systems; help guide future electric drive technologies (EDT) research and development (R&D) efforts. The thermal performance results combined with component efficiency and heat generation information obtained by Oak Ridge National Laboratory (ORNL) may then be used to determine the operating temperatures for the EDT components under drive-cycle conditions. In FY16, the 2012 Nissan LEAF power electronics and 2014 Honda Accord Hybrid power electronics thermal management system were characterized. Comparison of the two power electronics thermal management systems was also conducted to provide insight into the various cooling strategies to understand the current SOA in thermal management for automotive power electronics and electric motors.

  13. Cancer clinical research in Latin America: current situation and opportunities. Expert opinion from the first ESMO workshop on clinical trials, Lima, 2015.

    PubMed

    Rolfo, Christian; Caglevic, Christian; Bretel, Denisse; Hong, David; Raez, Luis E; Cardona, Andres F; Oton, Ana B; Gomez, Henry; Dafni, Urania; Vallejos, Carlos; Zielinski, Christoph

    2016-01-01

    Latin America and the Caribbean have not yet developed strong clinical cancer research programmes. In order to improve this situation two international cancer organisations, the Latin American Society of Clinical Oncology (SLACOM) and the European Society of Medical Oncology (ESMO) worked closely with the Peruvian Cooperative Oncology Group (GECOPERU) and organised a clinical cancer research workshop held in Lima, Peru, in October 2015. Many oncologists from different Latin American countries participated in this gathering. The opportunities for and strengths of clinical oncology research in Latin American and Caribbean countries were identified as the widespread use of the Spanish language, the high cancer burden, growing access to information, improving patient education, access to new drugs for research centres, regional networks and human resources. However, there are still many weaknesses and problems including the long timeline for regulatory approval, lack of economic investment, lack of training and lack of personnel participating in clinical research, lack of cancer registries, insufficient technology and insufficient supplies for the diagnosis and treatment of cancer, few cancer specialists, low general levels of education and the negative attitude of government authorities towards clinical research.

  14. Cancer clinical research in Latin America: current situation and opportunities. Expert opinion from the first ESMO workshop on clinical trials, Lima, 2015

    PubMed Central

    Rolfo, Christian; Caglevic, Christian; Bretel, Denisse; Hong, David; Raez, Luis E; Cardona, Andres F; Oton, Ana B; Gomez, Henry; Dafni, Urania; Vallejos, Carlos; Zielinski, Christoph

    2016-01-01

    Latin America and the Caribbean have not yet developed strong clinical cancer research programmes. In order to improve this situation two international cancer organisations, the Latin American Society of Clinical Oncology (SLACOM) and the European Society of Medical Oncology (ESMO) worked closely with the Peruvian Cooperative Oncology Group (GECOPERU) and organised a clinical cancer research workshop held in Lima, Peru, in October 2015. Many oncologists from different Latin American countries participated in this gathering. The opportunities for and strengths of clinical oncology research in Latin American and Caribbean countries were identified as the widespread use of the Spanish language, the high cancer burden, growing access to information, improving patient education, access to new drugs for research centres, regional networks and human resources. However, there are still many weaknesses and problems including the long timeline for regulatory approval, lack of economic investment, lack of training and lack of personnel participating in clinical research, lack of cancer registries, insufficient technology and insufficient supplies for the diagnosis and treatment of cancer, few cancer specialists, low general levels of education and the negative attitude of government authorities towards clinical research. PMID:27843620

  15. [The impact of researchers loyal to Big Pharma on the ethics and quality of clinical trials in Latin America].

    PubMed

    Ugalde, Antonio; Homedes, Núria

    2015-03-01

    This article explains the difficulties innovative pharmaceutical firms have in repaying shareholders with attractive dividends. The problem is the result of the expiration of the patents of blockbuster drugs and the difficulties that the firms have in bringing new blockbuster drugs to the market. One of the solutions companies have found has been to accelerate the implementation of clinical trials in order to expedite the commercialization of new drugs. Doing so increases the period in which they can sell drugs at monopoly prices. We therefore discuss how innovative pharmaceutical firms shorten the implementation time of clinical trials in Latin America and the consequences such actions have on the quality of the collected data, the protection of human rights of the subjects of experimentation, and compliance with the ethical principles approved in international declarations.

  16. Benchmarking clinical photography services in the NHS.

    PubMed

    Arbon, Giles

    2015-01-01

    Benchmarking is used in services across the National Health Service (NHS) using various benchmarking programs. Clinical photography services do not have a program in place and services have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlights valuable data and comparisons that can be used to benchmark and improve services throughout the profession.

  17. How Benchmarking and Higher Education Came Together

    ERIC Educational Resources Information Center

    Levy, Gary D.; Ronco, Sharron L.

    2012-01-01

    This chapter introduces the concept of benchmarking and how higher education institutions began to use benchmarking for a variety of purposes. Here, benchmarking is defined as a strategic and structured approach whereby an organization compares aspects of its processes and/or outcomes to those of another organization or set of organizations to…

  18. Testing (Validating?) Cross Sections with ICSBEP Benchmarks

    SciTech Connect

    Kahler, Albert C. III

    2012-06-28

    We discuss how to use critical benchmarks from the International Handbook of Evaluated Criticality Safety Benchmark Experiments to determine the applicability of specific cross sections to the end-user's problem of interest. Particular attention is paid to making sure the selected suite of benchmarks includes the user's range of applicability (ROA).

  19. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    Madhavan, Raj; Messina, Elena; Tunstel, Edward

    2009-09-01

    To design and develop capable, dependable, and affordable intelligent systems, their performance must be measurable. Scientific methodologies for standardization and benchmarking are crucial for quantitatively evaluating the performance of emerging robotic and intelligent systems technologies. There is currently no accepted standard for quantitatively measuring the performance of these systems against user-defined requirements; and furthermore, there is no consensus on what objective evaluation procedures need to be followed to understand the performance of these systems. The lack of reproducible and repeatable test methods has precluded researchers working towards a common goal from exchanging and communicating results, inter-comparing system performance, and leveraging previous work that could otherwise avoid duplication and expedite technology transfer. Currently, this lack of cohesion in the community hinders progress in many domains, such as manufacturing, service, healthcare, and security. By providing the research community with access to standardized tools, reference data sets, and open source libraries of solutions, researchers and consumers will be able to evaluate the cost and benefits associated with intelligent systems and associated technologies. In this vein, the edited book volume addresses performance evaluation and metrics for intelligent systems, in general, while emphasizing the need and solutions for standardized methods. To the knowledge of the editors, there is not a single book on the market that is solely dedicated to the subject of performance evaluation and benchmarking of intelligent systems. Even books that address this topic do so only marginally or are out of date. The research work presented in this volume fills this void by drawing from the experiences and insights of experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. 
The book presents

  20. Benchmarking neuromorphic systems with Nengo

    PubMed Central

    Bekolay, Trevor; Stewart, Terrence C.; Eliasmith, Chris

    2015-01-01

    Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly. PMID:26539076

  1. New NAS Parallel Benchmarks Results

    NASA Technical Reports Server (NTRS)

    Yarrow, Maurice; Saphir, William; VanderWijngaart, Rob; Woo, Alex; Kutler, Paul (Technical Monitor)

    1997-01-01

    NPB2 (NAS (NASA Advanced Supercomputing) Parallel Benchmarks 2) is an implementation, based on Fortran and the MPI (message passing interface) message passing standard, of the original NAS Parallel Benchmark specifications. NPB2 programs are run with little or no tuning, in contrast to NPB vendor implementations, which are highly optimized for specific architectures. NPB2 results complement, rather than replace, NPB results. Because they have not been optimized by vendors, NPB2 implementations approximate the performance a typical user can expect for a portable parallel program on distributed memory parallel computers. Together these results provide an insightful comparison of the real-world performance of high-performance computers. New NPB2 features: New implementation (CG), new workstation class problem sizes, new serial sample versions, more performance statistics.

  2. Geothermal Heat Pump Benchmarking Report

    SciTech Connect

    1997-01-17

    A benchmarking study was conducted on behalf of the Department of Energy to determine the critical factors in successful utility geothermal heat pump programs. A successful program is one that has achieved significant market penetration. Successfully marketing geothermal heat pumps has presented some major challenges to the utility industry. However, select utilities have developed programs that generate significant GHP sales. This benchmarking study concludes that there are three factors critical to the success of utility GHP marketing programs: (1) top management marketing commitment; (2) an understanding of the fundamentals of marketing and business development; and (3) an aggressive competitive posture. To generate significant GHP sales, competitive market forces must be used. However, because utilities have functioned only in a regulated arena, these companies and their leaders are unschooled in competitive business practices. Therefore, a lack of experience coupled with an intrinsically non-competitive culture yields an industry environment that impedes the generation of significant GHP sales in many, but not all, utilities.

  3. H.B. Robinson-2 pressure vessel benchmark

    SciTech Connect

    Remec, I.; Kam, F.B.K.

    1998-02-01

    The H. B. Robinson Unit 2 Pressure Vessel Benchmark (HBR-2 benchmark) is described and analyzed in this report. Analysis of the HBR-2 benchmark can be used as partial fulfillment of the requirements for the qualification of the methodology for calculating neutron fluence in pressure vessels, as required by the U.S. Nuclear Regulatory Commission Regulatory Guide DG-1053, Calculational and Dosimetry Methods for Determining Pressure Vessel Neutron Fluence. Section 1 of this report describes the HBR-2 benchmark and provides all the dimensions, material compositions, and neutron source data necessary for the analysis. The measured quantities, to be compared with the calculated values, are the specific activities at the end of fuel cycle 9. The characteristic feature of the HBR-2 benchmark is that it provides measurements on both sides of the pressure vessel: in the surveillance capsule attached to the thermal shield and in the reactor cavity. In section 2, the analysis of the HBR-2 benchmark is described. Calculations with the computer code DORT, based on the discrete-ordinates method, were performed with three multigroup libraries based on ENDF/B-VI: BUGLE-93, SAILOR-95 and BUGLE-96. The average ratio of the calculated-to-measured specific activities (C/M) for the six dosimeters in the surveillance capsule was 0.90 {+-} 0.04 for all three libraries. The average C/Ms for the cavity dosimeters (without neptunium dosimeter) were 0.89 {+-} 0.10, 0.91 {+-} 0.10, and 0.90 {+-} 0.09 for the BUGLE-93, SAILOR-95 and BUGLE-96 libraries, respectively. It is expected that the agreement of the calculations with the measurements, similar to the agreement obtained in this research, should typically be observed when the discrete-ordinates method and ENDF/B-VI libraries are used for the HBR-2 benchmark analysis.
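The averages and spreads of the calculated-to-measured (C/M) specific activities reported above can be reproduced with a straightforward calculation. The sketch below uses hypothetical dosimeter values chosen only to illustrate the arithmetic; the real HBR-2 data are in the benchmark report, not reproduced here.

```python
import statistics

def cm_summary(calculated, measured):
    """Mean and sample standard deviation of calculated-to-measured (C/M) ratios."""
    ratios = [c / m for c, m in zip(calculated, measured)]
    return statistics.mean(ratios), statistics.stdev(ratios)

# Hypothetical specific activities (arbitrary units) for six dosimeters.
calculated = [0.94, 0.86, 0.92, 0.88, 0.91, 0.89]
measured = [1.00, 1.00, 1.00, 1.00, 1.00, 1.00]

mean, spread = cm_summary(calculated, measured)
print(f"C/M = {mean:.2f} +/- {spread:.2f}")
```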

  4. RASSP Benchmark 4 Technical Description.

    DTIC Science & Technology

    1998-01-09

    of both application and VHDL code. 5.3.4.1 Lines of Code. The lines of code for each application and VHDL source file shall be reported. This... Developer shall provide source files for the VHDL files used in defining the Virtual Prototype as well as in programming the FPGAs. Benchmark-4... programmable devices running application code written in a high-level source language such as C, except that more detailed models may be required to

  5. MPI Multicore Torus Communication Benchmark

    SciTech Connect

    Schulz, M.

    2008-02-05

    The MPI Multicore Torus Communications Benchmark (TorusTest) measures the aggregate bandwidth across all six links from/to any multicore node in a logical torus. It can run in two modes: using a static or a random mapping of tasks to torus locations. The former can be used to study optimal mappings; the latter, the aggregate bandwidths that can be achieved with varying node mappings.

  6. Restaurant Energy Use Benchmarking Guideline

    SciTech Connect

    Hedrick, R.; Smith, V.; Field, K.

    2011-07-01

    A significant operational challenge for food service operators is defining energy use benchmark metrics to compare against the performance of individual stores. Without metrics, multiunit operators and managers have difficulty identifying which stores in their portfolios require extra attention to bring their energy performance in line with expectations. This report presents a method whereby multiunit operators may use their own utility data to create suitable metrics for evaluating their operations.
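The metric-building approach described above can be sketched in a few lines: compute a normalized energy use intensity (EUI) for each store from its own utility data, then flag stores that fall well outside the portfolio norm. All store names and figures below are hypothetical, and the one-standard-deviation cutoff is an illustrative choice, not the report's method.

```python
import statistics

# Hypothetical per-store annual data: (store, kWh, floor area in sq ft).
stores = [
    ("A", 410_000, 2_500),
    ("B", 395_000, 2_400),
    ("C", 560_000, 2_600),
    ("D", 400_000, 2_450),
]

def energy_use_intensity(kwh, sqft):
    """Energy use intensity (EUI): annual kWh per square foot."""
    return kwh / sqft

euis = {name: energy_use_intensity(kwh, sqft) for name, kwh, sqft in stores}
mean_eui = statistics.mean(euis.values())
sd_eui = statistics.stdev(euis.values())

# Flag stores more than one standard deviation above the portfolio mean.
flagged = [name for name, eui in euis.items() if eui > mean_eui + sd_eui]
print(flagged)
```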

  7. Stakeholders’ perspectives on access-to-medicines policy and research priorities in Latin America and the Caribbean: face-to-face and web-based interviews

    PubMed Central

    2014-01-01

    Background This study aims to rank policy concerns and policy-related research issues in order to identify policy and research gaps on access to medicines (ATM) in low- and middle-income countries in Latin America and the Caribbean (LAC), as perceived by policy makers, researchers, NGO and international organization representatives, as part of a global prioritization exercise. Methods Data collection, conducted between January and May 2011, involved face-to-face interviews in El Salvador, Colombia, Dominican Republic, and Suriname, and an e-mail survey with key-stakeholders. Respondents were asked to choose the five most relevant criteria for research prioritization and to score policy/research items according to the degree to which they represented current policies, desired policies, current research topics, and/or desired research topics. Mean scores and summary rankings were obtained. Linear regressions were performed to contrast rankings concerning current and desired policies (policy gaps), and current and desired research (research gaps). Results Relevance, feasibility, and research utilization were the top ranked criteria for prioritizing research. Technical capacity, research and development for new drugs, and responsiveness, were the main policy gaps. Quality assurance, staff technical capacity, price regulation, out-of-pocket payments, and cost containment policies, were the main research gaps. There was high level of coherence between current and desired policies: coefficients of determination (R2) varied from 0.46 (Health system structure; r = 0.68, P <0.01) to 0.86 (Sustainable financing; r = 0.93, P <0.01). There was also high coherence between current and desired research on Rational selection and use of medicines (r = 0.71, P <0.05, R2 = 0.51), Pricing/affordability (r = 0.82, P <0.01, R2 = 0.67), and Sustainable financing (r = 0.76, P <0.01, R2 = 0.58). Coherence was less for Health system structure (r = 0.61, P <0.01, R2 = 0.38). 
Conclusions This
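The paired r and R2 values quoted above (e.g. r = 0.68, R2 = 0.46) reflect a general property of simple linear regression: the coefficient of determination equals the square of Pearson's correlation coefficient. A minimal self-contained check, using made-up current/desired policy scores rather than the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired rankings of current vs. desired policy scores.
current = [3.1, 2.5, 4.0, 3.6, 2.9, 3.8]
desired = [3.4, 2.8, 4.3, 3.5, 3.2, 4.1]

r = pearson_r(current, desired)
r2 = r ** 2  # for one predictor, R^2 is simply r squared
print(f"r = {r:.2f}, R^2 = {r2:.2f}")
```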

  8. [Medical research travel 100 years ago: the 14th German Medical Study Trip to North America and Canada in the year 1912].

    PubMed

    Neid, T; Helm, J

    2012-12-01

    Even before the First World War, North American medicine had developed so far within a few years that it enjoyed an excellent reputation, and famous scientists and physicians from Europe came to the country for extensive study trips and congress visits. Exactly 100 years ago, the largest delegation of German doctors up to that time visited the United States of America in the course of the 14th German Medical Study Trip. The very amicable relations between the doctors of the two nations eased the scientific exchange during this study trip and gave the participants a deep insight into the medicine of the USA. Even though the German doctors were very impressed by the developments in the USA and some reported about them in detail in their home country, Germany did not succeed in keeping pace with the rapid development of the USA into the leading research nation in the following decades.

  9. Benchmarking in Czech Higher Education: The Case of Schools of Economics

    ERIC Educational Resources Information Center

    Placek, Michal; Ochrana, František; Pucek, Milan

    2015-01-01

    This article describes the use of benchmarking in universities in the Czech Republic and academics' experiences with it. It is based on research conducted among academics from economics schools in Czech public and private universities. The results identified several issues regarding the utilisation and understanding of benchmarking in the Czech…

  10. Benchmark Assessments as Predictors of Success on End-of-Course Standardized Tests in Algebra 1

    ERIC Educational Resources Information Center

    Thompson, Stephen A.

    2016-01-01

    This research examined the extent to which benchmark assessments can be accurate forecasters of student performance on a state assessment in Algebra 1. The study applied correlational analysis, regression analysis, and receiver operator characteristic (ROC) analysis to the benchmark and state assessment results of a rural school district in…

  11. How Benchmarking Can Help Us Improve What We Do

    ERIC Educational Resources Information Center

    Laufraben, Jodi Levine

    2004-01-01

    Assessment in higher education is no longer the purview of a few campus research professionals, nor is it just what happens at the end of a course or program. Institutions are in fact now looking to assess many of their processes and procedures at nearly every step, and for that purpose some are turning to an approach known as "benchmarking." A…

  12. Policy Analysis of the English Graduation Benchmark in Taiwan

    ERIC Educational Resources Information Center

    Shih, Chih-Min

    2012-01-01

    To nudge students to study English and to improve their English proficiency, many universities in Taiwan have imposed an English graduation benchmark on their students. This article reviews this policy, using the theoretic framework for education policy analysis proposed by Haddad and Demsky (1995). The author presents relevant research findings,…

  13. An introduction to benchmarking in healthcare.

    PubMed

    Benson, H R

    1994-01-01

    Benchmarking--the process of establishing a standard of excellence and comparing a business function or activity, a product, or an enterprise as a whole with that standard--will be used increasingly by healthcare institutions to reduce expenses and simultaneously improve product and service quality. As a component of total quality management, benchmarking is a continuous process by which an organization can measure and compare its own processes with those of organizations that are leaders in a particular area. Benchmarking should be viewed as a part of quality management programs, not as a replacement. There are four kinds of benchmarking: internal, competitive, functional and generic. With internal benchmarking, functions within an organization are compared with each other. Competitive benchmarking partners do business in the same market and provide a direct comparison of products or services. Functional and generic benchmarking are performed with organizations which may have a specific similar function, such as payroll or purchasing, but which otherwise are in a different business. Benchmarking must be a team process because the outcome will involve changing current practices, with effects felt throughout the organization. The team should include members who have subject knowledge; communications and computer proficiency; skills as facilitators and outside contacts; and sponsorship of senior management. Benchmarking requires quantitative measurement of the subject. The process or activity that you are attempting to benchmark will determine the types of measurements used. Benchmarking metrics usually can be classified in one of four categories: productivity, quality, time and cost-related.

  14. An automated protocol for performance benchmarking a widefield fluorescence microscope.

    PubMed

    Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T

    2014-11-01

    Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day to day intensity calibration. Published 2014 Wiley Periodicals Inc.
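The three figures of merit named above (detection threshold, saturation, and linear dynamic range) can be estimated from a series of intensity readings at increasing exposure. The sketch below is not the authors' protocol; the exposure steps, noise floor, and 10-bit full-scale value are all assumed for illustration.

```python
# Hypothetical mean pixel intensities recorded at doubling exposure times.
exposures = [1, 2, 4, 8, 16, 32, 64, 128]            # ms (assumed)
intensities = [12, 22, 42, 82, 162, 322, 642, 1023]  # counts (assumed)

noise_floor = 15   # e.g. 3x the std dev of a dark frame (assumed)
full_scale = 1023  # 10-bit detector ceiling (assumed)

# Detection threshold: first signal level that clears the noise floor.
threshold = next(i for e, i in zip(exposures, intensities) if i > noise_floor)

# Saturation: highest reading still below full scale; beyond it, response clips.
linear = [i for i in intensities if i < full_scale]
saturation = max(linear)

# Linear dynamic range: ratio of saturation to detection threshold.
dynamic_range = saturation / threshold
print(threshold, saturation, round(dynamic_range, 1))
```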

  15. Benchmarking database performance for genomic data.

    PubMed

    Khushi, Matloob

    2015-06-01

    Genomic regions represent features such as gene annotations, transcription factor binding sites and epigenetic modifications. Performing various genomic operations, such as identifying overlapping/non-overlapping regions or nearest gene annotations, is a common research need. The data can be saved in a database system for easy management; however, there is at present no comprehensive built-in database algorithm to identify overlapping regions. Therefore I have developed a novel region-mapping (RegMap) SQL-based algorithm to perform genomic operations and have benchmarked the performance of different databases. Benchmarking identified that PostgreSQL extracts overlapping regions much faster than MySQL. Insertion and data uploads in PostgreSQL were also better, although the general searching capability of both databases was almost equivalent. In addition, using the algorithm, pair-wise overlaps of >1000 datasets of transcription factor binding sites and histone marks, collected from previous publications, were reported and it was found that HNF4G significantly co-locates with cohesin subunit STAG1 (SA1).
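The core operation being benchmarked, overlap detection between genomic regions, reduces to a simple interval test: two half-open regions on the same chromosome overlap when each starts before the other ends. A naive pure-Python sketch (the actual RegMap algorithm is SQL-based; the regions below are hypothetical):

```python
def overlaps(a, b):
    """True if half-open genomic regions (chrom, start, end) overlap."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

# Hypothetical binding-site regions on chromosome 1.
hnf4g = [("chr1", 100, 200), ("chr1", 500, 650)]
stag1 = [("chr1", 150, 250), ("chr1", 700, 800)]

# Naive pairwise overlap count; real benchmarks use indexed SQL queries.
count = sum(overlaps(a, b) for a in hnf4g for b in stag1)
print(count)
```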

  16. America Saves! Energizing Main Street's Small Businesses

    SciTech Connect

    Lindberg, James

    2016-09-30

    The America Saves! Energizing Main Street Small Businesses project engaged the 1,200-member National Main Street Center (NMSC) network of downtown organizations and other local, regional, and national partners to test a methodology for sharing customized energy efficiency information with owners of commercial buildings smaller than 50,000 square feet. Led by the National Trust for Historic Preservation’s Preservation Green Lab, the project marshalled local staff and volunteers to gather voluntarily-disclosed energy use information from participating businesses. This information was analyzed using a remote auditing tool (validated by the National Renewable Energy Lab) to assess energy savings opportunities and design retrofit strategies targeting seven building types (food service and sales, attached mixed-use, strip mall, retail, office, lodging, and schools). The original project design contemplated extensive leveraging of the Green Button protocol for sharing annualized utility data at a district scale. Due to the lack of adoption of Green Button, the project partners developed customized approaches to data collection in each of twelve pilot communities. The project team encountered considerable challenges in gathering standardized annual utility data from local partners. After overcoming these issues, the data was uploaded to a data storehouse. Over 450 properties were benchmarked and the remote auditing tool was tested using full building profiles and utility records for more than 100 commercial properties in three of the pilot communities. The audit tool demonstrated potential for quickly capturing, analyzing, and communicating energy efficiency opportunities in small commercial buildings. However, the project team found that the unique physical characteristics and use patterns (partial vacancy, periodic intensive uses) of small commercial buildings required more trouble-shooting and data correction than was anticipated. In addition, the project revealed that

  17. Asthma in Latin America

    PubMed Central

    Forno, Erick; Gogna, Mudita; Cepeda, Alfonso; Yañez, Anahi; Solé, Dirceu; Cooper, Philip; Avila, Lydiana; Soto-Quiros, Manuel; Castro-Rodriguez, Jose A.; Celedón, Juan C.

    2015-01-01

    Consistent with the diversity of Latin America, there is profound variability in asthma burden among and within countries in this region. Regional variation in asthma prevalence is likely multifactorial and due to genetics, perinatal exposures, diet, obesity, tobacco use, indoor and outdoor pollutants, psychosocial stress, and microbial or parasitic infections. Similarly, nonuniform progress in asthma management leads to regional variability in disease morbidity. Future studies of distinct asthma phenotypes should follow up well-characterized Latin American subgroups and examine risk factors that are unique or common in Latin America (e.g. stress and violence, parasitic infections and use of biomass fuels for cooking). Because most Latin American countries share the same barriers to asthma management, concerted and multifaceted public health and research efforts are needed, including approaches to curtail tobacco use, campaigns to improve asthma treatment, broadening access to care and clinical trials of non-pharmacologic interventions (e.g. replacing biomass fuels with gas or electric stoves). PMID:26103996

  18. Lead exposure in Latin America and the Caribbean. Lead Research Group of the Pan-American Health Organization.

    PubMed Central

    Romieu, I; Lacasana, M; McConnell, R

    1997-01-01

    As a result of the rapid industrialization of Latin America and the Caribbean during the second half of this century, exposure to lead has become an increasingly important problem. To obtain an estimate of the magnitude of lead exposure in the region, we carried out a survey and a literature search on potential sources of lead exposure and on blood lead concentrations. Sixteen out of 18 Latin American and 2 out of 10 Caribbean countries responded to the survey. Lead in gasoline remains a major problem, although the lead content has decreased in many countries in the last few years. The impact of leaded fuel is more important in urban settings, given their high vehicular density. Seventy-five percent of the population of the region lives in urban areas, and children younger than 15 years of age, the most susceptible group, comprise 30% of the population. Other sources of lead exposure identified in the region included industrial emissions, battery recycling, paint and varnishes, and contaminated food and water. Lead is recognized as a priority problem by national authorities in 72% of the countries that responded to the survey, and in 50% of the countries some legislation exists to regulate the lead content in certain products. However, compliance is low. There is an urgent need for a broad-based coalition between policy makers, industry, workers, unions, health care providers, and the community to take actions to reduce environmental and occupational lead exposures in all the Latin American and Caribbean countries. PMID:9189704

  19. Gaia FGK benchmark stars: Metallicity

    NASA Astrophysics Data System (ADS)

    Jofré, P.; Heiter, U.; Soubiran, C.; Blanco-Cuaresma, S.; Worley, C. C.; Pancino, E.; Cantat-Gaudin, T.; Magrini, L.; Bergemann, M.; González Hernández, J. I.; Hill, V.; Lardo, C.; de Laverny, P.; Lind, K.; Masseron, T.; Montes, D.; Mucciarelli, A.; Nordlander, T.; Recio Blanco, A.; Sobeck, J.; Sordo, R.; Sousa, S. G.; Tabernero, H.; Vallenari, A.; Van Eck, S.

    2014-04-01

    Context. To calibrate automatic pipelines that determine atmospheric parameters of stars, one needs a sample of stars, or "benchmark stars", with well-defined parameters to be used as a reference. Aims: We provide detailed documentation of the iron abundance determination of the 34 FGK-type benchmark stars that are selected to be the pillars for calibration of the one billion Gaia stars. They cover a wide range of temperatures, surface gravities, and metallicities. Methods: Up to seven different methods were used to analyze an observed spectral library of high resolution and high signal-to-noise ratio. The metallicity was determined by assuming a value of effective temperature and surface gravity obtained from fundamental relations; that is, these parameters were known a priori and independently from the spectra. Results: We present a set of metallicity values obtained in a homogeneous way for our sample of benchmark stars. In addition to this value, we provide detailed documentation of the associated uncertainties. Finally, we report a value of the metallicity of the cool giant ψ Phe for the first time. Based on NARVAL and HARPS data obtained within the Gaia DPAC (Data Processing and Analysis Consortium) and coordinated by the GBOG (Ground-Based Observations for Gaia) working group and on data retrieved from the ESO-ADP database. Tables 6-76 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/564/A133

  20. Building Energy Benchmarking in India: an Action Plan for Advancing the State-of-the-Art

    SciTech Connect

    Sarraf, Saket; Anand, Shilpi; Shukla, Yash; Mathew, Paul; Singh, Reshma

    2014-06-01

    This document describes an action plan for advancing the state of the art of commercial building energy benchmarking in the Indian context. The document is primarily intended for two audiences: (a) Research and development (R&D) sponsors and researchers can use the action plan to frame, plan, prioritize and scope new energy benchmarking R&D in order to ensure that their research is market relevant; (b) Policy makers and program implementers engaged in the deployment of benchmarking and building efficiency rating programmes can use the action plan for policy formulation and enforcement .

  1. NASA Software Engineering Benchmarking Effort

    NASA Technical Reports Server (NTRS)

    Godfrey, Sally; Rarick, Heather

    2012-01-01

    Benchmarking was very interesting and provided a wealth of information. (1) We saw potential solutions to some of our "top 10" issues. (2) We gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, and instructors. We received feedback from some of our contractors/partners: (1) they expressed a desire to participate in our training and to provide feedback on procedures; (2) they welcomed the opportunity to provide feedback on working with NASA.

  2. Bayer Facts of Science Education XV: A View from the Gatekeepers--STEM Department Chairs at America's Top 200 Research Universities on Female and Underrepresented Minority Undergraduate STEM Students

    ERIC Educational Resources Information Center

    Journal of Science Education and Technology, 2012

    2012-01-01

    Diversity and the underrepresentation of women, African-Americans, Hispanics and American Indians in the nation's science, technology, engineering and mathematics (STEM) fields are the subjects of Bayer Facts of Science Education XV: A View from the Gatekeepers--STEM Department Chairs at America's Top 200 Research Universities on Female and Underrepresented Minority…

  3. The Waning of America's Higher Education Advantage: International Competitors Are No Longer Number Two and Have Big Plans in the Global Economy. Research & Occasional Paper Series: CSHE.9.06

    ERIC Educational Resources Information Center

    Douglass, John Aubrey

    2006-01-01

    The United States has long enjoyed being on the cutting edge in its devotion to building a vibrant higher education sector. After a century of leading the world in participation rates in higher education, however, there are strong indications that America's advantage is waning. The academic research enterprise remains relatively vibrant. However,…

  4. NASA Indexing Benchmarks: Evaluating Text Search Engines

    NASA Technical Reports Server (NTRS)

    Esler, Sandra L.; Nelson, Michael L.

    1997-01-01

    The current proliferation of on-line information resources underscores the need for the ability to index collections of information and to search and retrieve them in a convenient manner. This study develops criteria for analytically comparing indexing and search engines and presents results for a number of freely available search engines. A product of this research is a toolkit capable of automatically indexing, searching, and extracting performance statistics from each of the search engines studied. This toolkit is highly configurable and can run these benchmark tests against other engines as well. Results demonstrate that the tested search engines can be grouped into two levels. Level one engines are efficient on small to medium sized data collections, but show weaknesses when used for collections of 100MB or larger. Level two search engines are recommended for data collections up to and beyond 100MB.
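
    The kind of toolkit the study describes, one that automatically indexes a collection, runs searches against it, and extracts performance statistics, can be sketched in miniature. The function names, AND-query semantics, and timing scheme below are illustrative assumptions, not the paper's actual implementation:

    ```python
    import re
    import time
    from collections import defaultdict

    def build_index(docs):
        """Build a simple inverted index: term -> set of document ids."""
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for term in re.findall(r"\w+", text.lower()):
                index[term].add(doc_id)
        return index

    def search(index, query):
        """AND-query: return the docs containing every query term."""
        terms = re.findall(r"\w+", query.lower())
        if not terms:
            return set()
        hits = set(index.get(terms[0], set()))
        for term in terms[1:]:
            hits &= index.get(term, set())
        return hits

    def benchmark(docs, queries):
        """Time the indexing and search phases and report basic statistics."""
        t0 = time.perf_counter()
        index = build_index(docs)
        index_s = time.perf_counter() - t0
        t0 = time.perf_counter()
        results = [search(index, q) for q in queries]
        search_s = time.perf_counter() - t0
        return {"index_s": index_s, "search_s": search_s,
                "hits_per_query": [len(r) for r in results]}
    ```

    A real harness of this sort would feed the same corpus and query log to each engine under test and compare the extracted timings.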

  5. Gatemon Benchmarking and Two-Qubit Operation

    NASA Astrophysics Data System (ADS)

    Casparis, Lucas; Larsen, Thorvald; Olsen, Michael; Petersson, Karl; Kuemmeth, Ferdinand; Krogstrup, Peter; Nygard, Jesper; Marcus, Charles

    Recent experiments have demonstrated superconducting transmon qubits with semiconductor nanowire Josephson junctions. These hybrid gatemon qubits utilize the field-effect tunability unique to semiconductors to allow complete qubit control using gate voltages, potentially a technological advantage over conventional flux-controlled transmons. Here, we present experiments with a two-qubit gatemon circuit. We characterize qubit coherence and stability and use randomized benchmarking to demonstrate single-qubit gate errors of ~0.5% for all gates, including voltage-controlled Z rotations. We show coherent capacitive coupling between two gatemons and coherent SWAP operations. Finally, we perform a two-qubit controlled-phase gate with an estimated fidelity of ~91%, demonstrating the potential of gatemon qubits for building scalable quantum processors. We acknowledge financial support from Microsoft Project Q and the Danish National Research Foundation.

  6. Professionalization of Teaching in America: Two Case Studies Using Educational Research Experiences to Explore the Perceptions of Preservice Teachers/Researchers

    ERIC Educational Resources Information Center

    Gentry, James E.; Baker, Credence; Lamb, Holly; Pate, Roberta

    2016-01-01

    In 2013-2015, two faculty-led educational research studies were conducted, aided by five undergraduate preservice teachers/researchers (PSTR). Faculty-researchers designed a qualitative phenomenological-inquiry based methodology to examine the PSTR perceptions regarding their respective research experiences with faculty. Triangulation of the data…

  7. Benchmarking pathology services: implementing a longitudinal study.

    PubMed

    Gordon, M; Holmes, S; McGrath, K; Neil, A

    1999-05-01

    This paper details the benchmarking process and its application to the activities of pathology laboratories participating in a benchmark pilot study [the Royal College of Pathologists of Australasia (RCPA) Benchmarking Project]. The discussion highlights the primary issues confronted in collecting, processing, analysing and comparing benchmark data. The paper outlines the benefits of engaging in a benchmarking exercise and provides a framework which can be applied across a range of public health settings. This information is then applied to a review of the development of the RCPA Benchmarking Project. Consideration is also given to the nature of the preliminary results of the project and the implications of these results for the ongoing conduct of the study.

  8. Pynamic: the Python Dynamic Benchmark

    SciTech Connect

    Lee, G L; Ahn, D H; de Supinksi, B R; Gyllenhaal, J C; Miller, P J

    2007-07-10

    Python is widely used in scientific computing to facilitate application development and to support features such as computational steering. Making full use of some of Python's popular features, which improve programmer productivity, leads to applications that access extremely high numbers of dynamically linked libraries (DLLs). As a result, some important Python-based applications severely stress a system's dynamic linking and loading capabilities and also cause significant difficulties for most development environment tools, such as debuggers. Furthermore, using the Python paradigm for large scale MPI-based applications can create significant file I/O and further stress tools and operating systems. In this paper, we present Pynamic, the first benchmark program to support configurable emulation of a wide range of the DLL usage of Python-based applications for large scale systems. Pynamic has already accurately reproduced system software and tool issues encountered by important large Python-based scientific applications on our supercomputers. Pynamic provided insight for our system software and tool vendors, and our application developers, into the impact of several design decisions. As we describe the Pynamic benchmark, we will highlight some of the issues discovered in our large scale system software and tools using Pynamic.

  9. BENCHMARK DOSE TECHNICAL GUIDANCE DOCUMENT ...

    EPA Pesticide Factsheets

    The U.S. EPA conducts risk assessments for an array of health effects that may result from exposure to environmental agents and that require an analysis of the relationship between exposure and health-related outcomes. The dose-response assessment is essentially a two-step process, the first being the definition of a point of departure (POD), and the second extrapolation from the POD to low, environmentally relevant exposure levels. The benchmark dose (BMD) approach provides a more quantitative alternative to the first step in the dose-response assessment than the current NOAEL/LOAEL process for noncancer health effects, and is similar to that for determining the POD proposed for cancer endpoints. As the Agency moves toward harmonization of approaches for human health risk assessment, the dichotomy between cancer and noncancer health effects is being replaced by consideration of mode of action and whether the effects of concern are likely to be linear or nonlinear at low doses. Thus, the purpose of this project is to provide guidance for the Agency and the outside community on the application of the BMD approach in determining the POD for all types of health effects data, whether a linear or nonlinear low-dose extrapolation is used. A guidance document is being developed under the auspices of EPA's Risk Assessment Forum.
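
    To illustrate the BMD idea, here is a minimal sketch using the simple one-hit dichotomous dose-response model P(d) = p0 + (1 - p0)(1 - exp(-slope*d)); under that model the extra risk is 1 - exp(-slope*d), so the dose producing a chosen benchmark response (BMR) has a closed form. The model choice and parameter values are illustrative assumptions, not EPA guidance:

    ```python
    import math

    def extra_risk(p_d, p_0):
        """Extra risk at dose d: (P(d) - P(0)) / (1 - P(0))."""
        return (p_d - p_0) / (1.0 - p_0)

    def one_hit(dose, p0, slope):
        """One-hit dichotomous dose-response model."""
        return p0 + (1.0 - p0) * (1.0 - math.exp(-slope * dose))

    def bmd_one_hit(slope, bmr=0.10):
        """Under the one-hit model the extra risk is 1 - exp(-slope*d),
        so the benchmark dose is d = -ln(1 - BMR) / slope."""
        return -math.log(1.0 - bmr) / slope

    # Dose at which extra risk reaches a 10% BMR (hypothetical slope):
    bmd = bmd_one_hit(slope=0.05, bmr=0.10)
    print(round(bmd, 3))
    ```

    The resulting dose (or, in practice, its lower confidence limit) would then serve as the POD for the extrapolation step.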

  10. NAS Parallel Benchmark Results 11-96. 1.0

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Bailey, David; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The NAS Parallel Benchmarks have been developed at NASA Ames Research Center to study the performance of parallel supercomputers. The eight benchmark problems are specified in a "pencil and paper" fashion. In other words, the complete details of the problem to be solved are given in a technical document, and except for a few restrictions, benchmarkers are free to select the language constructs and implementation techniques best suited for a particular system. These results represent the best results that have been reported to us by the vendors for the specific systems listed. In this report, we present new NPB (Version 1.0) performance results for the following systems: DEC Alpha Server 8400 5/440, Fujitsu VPP Series (VX, VPP300, and VPP700), HP/Convex Exemplar SPP2000, IBM RS/6000 SP P2SC node (120 MHz), NEC SX-4/32, SGI/CRAY T3E, SGI Origin200, and SGI Origin2000. We also report High Performance Fortran (HPF) based NPB results for IBM SP2 Wide Nodes, HP/Convex Exemplar SPP2000, and SGI/CRAY T3D. These results have been submitted by Applied Parallel Research (APR) and Portland Group Inc. (PGI). We also present sustained performance per dollar for Class B LU, SP and BT benchmarks.

  11. The social control of behavior control: behavior modification, Individual Rights, and research ethics in America, 1971-1979.

    PubMed

    Rutherford, Alexandra

    2006-01-01

    In 1971, the U.S. Senate Subcommittee on Constitutional Rights began a three-year study to investigate the federal funding of all research involving behavior modification. During this period, operant programs of behavior change, particularly those implemented in closed institutions, were subjected to specific scrutiny. In this article, I outline a number of scientific and social factors that led to this investigation and discuss the study itself. I show how behavioral scientists, both individually and through their professional organizations, responded to this public scrutiny by (1) self-consciously altering their terminology and techniques; (2) considering the need to more effectively police their professional turf; and (3) confronting issues of ethics and values in their work. Finally, I link this episode to the formation of the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, whose recommendations resulted in changes to the ethical regulation of federally funded human subjects research that persist to the present day.

  12. Benchmarking for Excellence and the Nursing Process

    NASA Technical Reports Server (NTRS)

    Sleboda, Claire

    1999-01-01

    Nursing is a service profession. The services provided are essential to life and welfare. Therefore, setting the benchmark for high quality care is fundamental. Exploring the definition of a benchmark value will help to determine a best practice approach. A benchmark is the descriptive statement of a desired level of performance against which quality can be judged. It must be sufficiently well understood by managers and personnel in order that it may serve as a standard against which to measure value.

  13. Computational Chemistry Comparison and Benchmark Database

    National Institute of Standards and Technology Data Gateway

    SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access)   The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.

  14. Building America Top Innovations 2012: Unvented, Conditioned Crawlspaces

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes Building America research that influenced code requirements by demonstrating that unvented, conditioned crawlspaces use 15% to 18% less energy for heating and cooling while reducing humidity by more than 20% in humid climates.

  15. Method and system for benchmarking computers

    DOEpatents

    Gustafson, John L.

    1993-09-14

    A testing system and method for benchmarking computer systems. The system includes a store containing a scalable set of tasks to be performed to produce a solution in ever-increasing degrees of resolution as a larger number of the tasks are performed. A timing and control module allots to each computer a fixed benchmarking interval in which to perform the stored tasks. Means are provided for determining, after completion of the benchmarking interval, the degree of progress through the scalable set of tasks and for producing a benchmarking rating relating to the degree of progress for each computer.
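
    The patented scheme, a scalable task set run for a fixed benchmarking interval with a rating based on the degree of progress, can be sketched as follows. The midpoint-rule pi approximation stands in for the scalable task set and is an illustrative assumption; only the fixed-interval/progress-rating structure comes from the abstract:

    ```python
    import time

    def scalable_task(depth):
        """Midpoint-rule approximation of pi with 2**depth subintervals;
        higher depth yields a finer-resolution solution."""
        n = 2 ** depth
        h = 1.0 / n
        return h * sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(n))

    def benchmark(interval_s=0.1):
        """Allot a fixed benchmarking interval; the rating is the finest
        task resolution (depth) completed before the interval expires."""
        deadline = time.perf_counter() + interval_s
        depth = 0
        result = scalable_task(depth)  # always complete at least one task
        while time.perf_counter() < deadline:
            depth += 1
            result = scalable_task(depth)
        return depth, result

    rating, pi_estimate = benchmark(0.1)
    print("rating:", rating, "pi ~", round(pi_estimate, 6))
    ```

    A faster machine completes more (and finer) tasks in the same interval, so the rating scales with machine speed rather than requiring every machine to finish a fixed workload.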

  16. Summaries of Research and Development Activities in Agricultural Education, 1981-1982, in the United States of America.

    ERIC Educational Resources Information Center

    Kotrlik, Joe W., Comp.

    This compilation, the seventh in an annual series, includes abstracts of 155 studies in agricultural education completed during the period July 1, 1981, to June 30, 1982. Twenty-five of the completed studies represent staff research, 84 represent master's studies or theses, and 46 are doctoral dissertations. Also included is a listing of the 175…

  17. Child Maltreatment and Its Relationship to Drug Use in Latin America and the Caribbean: An Overview and Multinational Research Partnership

    ERIC Educational Resources Information Center

    Longman-Mills, Samantha; Gonzalez, Yolanda W.; Melendez, Marlon O.; Garcia, Monica R.; Gomez, Juan D.; Juarez, Cristina G.; Martinez, Eduardo A.; Penalba, Sobeyda J.; Pizzanelli, Miguel E.; Solorzano, Lucia I.; Wright, Gloria; Cumsille, Francisco; Sapag, Jaime; Wekerle, Christine; Hamilton, Hayley; Erickson, Patricia; Mann, Robert

    2011-01-01

    Child maltreatment and substance abuse are both international public health priorities. Research shows that child maltreatment increases the risk for substance use and problems. Thus, recognition of this relationship may have important implications for substance demand reduction strategies, including efforts to prevent and treat substance use and…

  18. A review of estuarine fish research in South America: what has been achieved and what is the future for sustainability and conservation?

    PubMed

    Blaber, S J M; Barletta, M

    2016-07-01

    Estuarine fish research in South America began in the early 20th century, but it is only within the last 40 years that detailed studies have been undertaken. This review first summarizes research results from South American estuaries by geographic area, starting with the temperate south-east, then the temperate-sub-tropical transition zone in Brazil, then the semi-arid and tropical estuaries of north and north-east Brazil including the Amazon complex, then the north and Caribbean coasts, and finally down the Pacific coast of the continent. They include almost all types of estuarine systems, from large open systems (e.g. the temperate Rio de La Plata and tropical Amazon) to extensive coastal lakes (e.g. the temperate Patos Lagoon and tropical Ciénaga Grande de Santa Marta). They encompass a broad range of climatic and vegetation types, from saltmarsh systems in the south-east and fjords in the south-west to both arid and humid tropical systems, dominated by mangroves in the north. Their tidal regimes range from microtidal (e.g. Mar Chiquita, Argentina) through mesotidal (e.g. Goiana, Brazil) to macrotidal in the Amazon complex, where they can exceed 7 m. The review uses where possible the recent standardization of estuarine fish categories and guilds, but the ways that fishes use tropical South American systems may necessitate further refinements of the categories and guilds, particularly in relation to freshwater fishes, notably the Siluriformes, which dominate many north and north-east South American systems. The extent to which South American studies contribute to discussions and paradigms of connectivity and estuarine dependence is summarized, but work on these topics has only just begun. Work on the anthropogenic issue of pollution, particularly in relation to heavy metals and fishes and fisheries in estuaries, is more advanced, but the possible effects of climate change have barely been addressed. Studies around conservation and management are briefly reviewed and

  19. Solar Technology Validation Project - RES Americas: Cooperative Research and Development Final Report, CRADA Number CRD-09-367-11

    SciTech Connect

    Wilcox, S.

    2013-08-01

    Under this Agreement, NREL will work with Participant to improve concentrating solar power system performance characterizations. This work includes, but is not limited to, research and development of methods for acquiring renewable resource characterization information using site-specific measurements of solar radiation and meteorological conditions; collecting system performance data; and developing tools for improving the design, installation, operation, and maintenance of solar energy conversion systems. This work will be conducted at NREL and Participant facilities.

  20. COG validation: SINBAD Benchmark Problems

    SciTech Connect

    Lent, E M; Sale, K E; Buck, R M; Descalle, M

    2004-02-23

    We validated COG, a 3D Monte Carlo radiation transport code, against experimental data and MCNP4C simulations from the Shielding Integral Benchmark Archive Database (SINBAD) compiled by RSICC. We modeled three experiments: the Osaka nickel and aluminum sphere experiments conducted at the OKTAVIAN facility, and the liquid oxygen experiment conducted at the FNS facility. COG results are in good agreement with experimental data and generally within a few percent of MCNP results. There are several possible sources of discrepancy between MCNP and COG results: (1) the cross-section database versions are different (MCNP uses ENDFB VI 1.1 while COG uses ENDFB VIR7), (2) the code implementations are different, and (3) the models may differ slightly. We also limited the use of variance reduction methods when running the COG version of the problems.

  1. Reconceptualizing Benchmarks for Residency Training

    PubMed Central

    2017-01-01

    Postgraduate medical education (PGME) is currently transitioning to a competency-based framework. This model clarifies the desired outcome of residency training: competence. However, since the popularization of Ericsson's work on the effect of time and deliberate practice on performance level, his findings have been applied in some areas of residency training. Though this may be grounded in a noble effort to maximize patient well-being, it imposes unrealistic expectations on trainees. This work aims to demonstrate the fundamental flaws of this application, and therefore the lack of validity in using Ericsson's work to develop training benchmarks at the postgraduate level, as well as to expose the potential harms in doing so.

  2. Benchmarking Multipacting Simulations in VORPAL

    SciTech Connect

    C. Nieter, C. Roark, P. Stoltz, K. Tian

    2009-05-01

    We will present the results of benchmarking simulations run to test the ability of VORPAL to model multipacting processes in Superconducting Radio Frequency structures. VORPAL is an electromagnetic (FDTD) particle-in-cell simulation code originally developed for applications in plasma and beam physics. The addition of conformal boundaries and algorithms for secondary electron emission allow VORPAL to be applied to multipacting processes. We start with simulations of multipacting between parallel plates where there are well understood theoretical predictions for the frequency bands where multipacting is expected to occur. We reproduce the predicted multipacting bands and demonstrate departures from the theoretical predictions when a more sophisticated model of secondary emission is used. Simulations of existing cavity structures developed at Jefferson National Laboratories will also be presented where we compare results from VORPAL to experimental data.

  3. Benchmarking ICRF simulations for ITER

    SciTech Connect

    R. V. Budny, L. Berry, R. Bilato, P. Bonoli, M. Brambilla, R.J. Dumont, A. Fukuyama, R. Harvey, E.F. Jaeger, E. Lerche, C.K. Phillips, V. Vdovin, J. Wright, and members of the ITPA-IOS

    2010-09-28

    Benchmarking of full-wave solvers for ICRF simulations is performed using plasma profiles and equilibria obtained from integrated self-consistent modeling predictions of four ITER plasmas. One is for a high-performance baseline (5.3 T, 15 MA) DT H-mode plasma. The others are for half-field, half-current plasmas of interest for the pre-activation phase, with the bulk plasma ion species being either hydrogen or He4. The predicted profiles are used by seven groups to predict the ICRF electromagnetic fields and heating profiles. Approximate agreement is achieved for the predicted heating power partitions for the DT and He4 cases. Profiles of the heating powers and electromagnetic fields are compared.

  4. Benchmarking Asteroid-Deflection Experiment

    NASA Astrophysics Data System (ADS)

    Remington, Tane; Bruck Syal, Megan; Owen, John Michael; Miller, Paul L.

    2016-10-01

    An asteroid impacting Earth could have devastating consequences. In preparation to deflect or disrupt one before it reaches Earth, it is imperative to have modeling capabilities that adequately simulate the deflection actions. Code validation is key to ensuring full confidence in simulation results used in an asteroid-mitigation plan. We are benchmarking well-known impact experiments using Spheral, an adaptive smoothed-particle hydrodynamics code, to validate our modeling of asteroid deflection. We describe our simulation results, compare them with experimental data, and discuss what we have learned from our work. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-695540

  5. "POLAR-PALOOZA" and "International POLAR-PALOOZA": Taking Researchers on the Road to Engage Public Audiences across America, and Around the World

    NASA Astrophysics Data System (ADS)

    Haines-Stiles, G.; Akuginow, E.

    2010-12-01

    POLAR-PALOOZA and its companion project, "International POLAR-PALOOZA" shared the same central premise: that polar researchers, speaking for themselves, could be powerful communicators about the science and mission of the 4th International Polar Year, and could successfully engage a wide variety of public audiences across America and around the world. Supported for the US tour by NSF and NASA, and internationally by NSF alone, the project enlisted more than forty American researchers, and 14 polar scientists from Brazil, China and Australia, to participate in events at science centers and natural history museums, universities, public libraries and schools, and also for targeted outreach to special audiences such as young female researchers in Oklahoma, or the Downtown Rotary in San Diego. Evaluations by two different ISE groups found similar results domestically and internationally. When supported by HD video clips and presenting informally in teams of 3, 4, 5 and sometimes even 6 researchers as part of a fast-paced "show," the scientists themselves were almost always rated as among the most important aspects of the program. Significant understandings about polar science and global climate change resulted, along with a positive impression of the research undertaken during IPY. This presentation at Fall AGU 2010 will present results from the Summative Evaluation of both projects, show representative video clips of the public presentations, share photographs of some of the most dramatically varied venues and candid behind-the-scenes action, and share "Lessons Learned" that can be broadly applied to the dissemination of Earth and space science research. These include: collaboration with partner institutions is never easy. (Duh.) Authentic props (such as ice cores, when not trashed by TSA) make a powerful impression on audiences, and give reality to remote places and complex science. And, most importantly, that since 85% of Americans have never met a scientist, that

  6. Assessing and Synthesizing the Last Decade of Research on the Major Pools and Fluxes of the Carbon Cycle in the US and North America: An Interagency Governmental Perspective

    NASA Astrophysics Data System (ADS)

    Cavallaro, N.; Shrestha, G.; Stover, D. B.; Zhu, Z.; Ombres, E. H.; Deangelo, B.

    2015-12-01

    The 2nd State of the Carbon Cycle Report (SOCCR-2) is focused on US and North American carbon stocks and fluxes in managed and unmanaged systems, including relevant carbon management science perspectives and tools for supporting and informing decisions. SOCCR-2 is inspired by the US Carbon Cycle Science Plan (2011) which emphasizes global scale research on long-lived, carbon-based greenhouse gases, carbon dioxide and methane, and the major pools and fluxes of the global carbon cycle. Accordingly, the questions framing the Plan inform this report's topical roadmap, with a focus on US and North America in the global context: 1) How have natural processes and human actions affected the global carbon cycle on land, in the atmosphere, in the oceans and in the ecosystem interfaces (e.g. coastal, wetlands, urban-rural)? 2) How have socio-economic trends affected the levels of the primary carbon-containing gases, carbon dioxide and methane, in the atmosphere? 3) How have species, ecosystems, natural resources and human systems been impacted by increasing greenhouse gas concentrations, the associated changes in climate, and by carbon management decisions and practices? To address these aspects, SOCCR-2 will encompass the following broad assessment framework: 1) Carbon Cycle at Scales (Global Perspective, North American Perspective, US Perspective, Regional Perspective); 2) Role of carbon in systems (Soils; Water, Oceans, Vegetation; Terrestrial-aquatic Interfaces); 3) Interactions/Disturbance/Impacts from/on the carbon cycle; 4) Carbon Management Science Perspective and Decision Support (measurements, observations and monitoring for research and policy relevant decision-support etc.). In this presentation, the Carbon Cycle Interagency Working Group and the U.S. Global Change Research Program's U.S. Carbon Cycle Science Program Office will highlight the scientific context, strategy, structure, team and production process of the report, which is part of the USGCRP's Sustained

  7. Eco-bio-social research on community-based approaches for Chagas disease vector control in Latin America

    PubMed Central

    Gürtler, Ricardo E.; Yadon, Zaida E.

    2015-01-01

    This article provides an overview of three research projects which designed and implemented innovative interventions for Chagas disease vector control in Bolivia, Guatemala and Mexico. The research initiative was based on sound principles of community-based ecosystem management (ecohealth), integrated vector management, and interdisciplinary analysis. The initial situational analysis achieved a better understanding of ecological, biological and social determinants of domestic infestation. The key factors identified included: housing quality; type of peridomestic habitats; presence and abundance of domestic dogs, chickens and synanthropic rodents; proximity to public lights; location in the periphery of the village. In Bolivia, plastering of mud walls with appropriate local materials and regular cleaning of beds and of clothes next to the walls, substantially decreased domestic infestation and abundance of the insect vector Triatoma infestans. The Guatemalan project revealed close links between house infestation by rodents and Triatoma dimidiata, and vector infection with Trypanosoma cruzi. A novel community-operated rodent control program significantly reduced rodent infestation and bug infection. In Mexico, large-scale implementation of window screens translated into promising reductions in domestic infestation. A multi-pronged approach including community mobilisation and empowerment, intersectoral cooperation and adhesion to integrated vector management principles may be the key to sustainable vector and disease control in the affected regions. PMID:25604759

  8. Eco-bio-social research on community-based approaches for Chagas disease vector control in Latin America.

    PubMed

    Gürtler, Ricardo E; Yadon, Zaida E

    2015-02-01

    This article provides an overview of three research projects which designed and implemented innovative interventions for Chagas disease vector control in Bolivia, Guatemala and Mexico. The research initiative was based on sound principles of community-based ecosystem management (ecohealth), integrated vector management, and interdisciplinary analysis. The initial situational analysis achieved a better understanding of ecological, biological and social determinants of domestic infestation. The key factors identified included: housing quality; type of peridomestic habitats; presence and abundance of domestic dogs, chickens and synanthropic rodents; proximity to public lights; location in the periphery of the village. In Bolivia, plastering of mud walls with appropriate local materials and regular cleaning of beds and of clothes next to the walls, substantially decreased domestic infestation and abundance of the insect vector Triatoma infestans. The Guatemalan project revealed close links between house infestation by rodents and Triatoma dimidiata, and vector infection with Trypanosoma cruzi. A novel community-operated rodent control program significantly reduced rodent infestation and bug infection. In Mexico, large-scale implementation of window screens translated into promising reductions in domestic infestation. A multi-pronged approach including community mobilisation and empowerment, intersectoral cooperation and adhesion to integrated vector management principles may be the key to sustainable vector and disease control in the affected regions.

  9. A cross-sectional multicenter study of osteogenesis imperfecta in North America - results from the linked clinical research centers.

    PubMed

    Patel, R M; Nagamani, S C S; Cuthbertson, D; Campeau, P M; Krischer, J P; Shapiro, J R; Steiner, R D; Smith, P A; Bober, M B; Byers, P H; Pepin, M; Durigova, M; Glorieux, F H; Rauch, F; Lee, B H; Hart, T; Sutton, V R

    2015-02-01

    Osteogenesis imperfecta (OI) is the most common skeletal dysplasia that predisposes to recurrent fractures and bone deformities. In spite of significant advances in understanding the genetic basis of OI, there have been no large-scale natural history studies. To better understand the natural history and improve the care of patients, a network of Linked Clinical Research Centers (LCRC) was established. Subjects with OI were enrolled in a longitudinal study, and in this report, we present cross-sectional data on the largest cohort of OI subjects (n = 544). OI type III subjects had higher prevalence of dentinogenesis imperfecta, severe scoliosis, and long bone deformities as compared to those with OI types I and IV. Whereas the mean lumbar spine area bone mineral density (LS aBMD) was low across all OI subtypes, those with more severe forms had lower bone mass. Molecular testing may help predict the subtype in type I collagen-related OI. Analysis of such well-collected and unbiased data in OI can not only help answer questions that are relevant to patient care but also foster hypothesis-driven research, especially in the context of 'phenotypic expansion' driven by next-generation sequencing.

  10. Developments in Impact Assessment in North America

    EPA Science Inventory

    Beginning with a background of recent global developments in this area, this presentation will focus on how global research has impacted North America and how North America is providing additional developments to address the issues of the global economy. Recent developments inc...

  11. Sieve of Eratosthenes benchmarks for the Z8 FORTH microcontroller

    SciTech Connect

    Edwards, R.

    1989-02-01

    This report presents benchmarks for the Z8 FORTH microcontroller system that ORNL uses extensively in proving concepts and developing prototype test equipment for the Smart House Project. The results are based on the sieve of Eratosthenes algorithm, a calculation used extensively to rate computer systems and programming languages. Three benchmark refinements are presented, each showing how the execution speed of a FORTH program can be improved by use of a particular optimization technique. The last version of the FORTH benchmark shows that optimization is worth the effort: it executes 20 times faster than the Gilbreaths' widely published FORTH benchmark program. The National Association of Home Builders Smart House Project is a cooperative research and development effort being undertaken by American home builders and a number of major corporations serving the home building industry. The major goal of the project is to help the participating organizations incorporate advanced technology in communications, energy distribution, and appliance control products for American homes. This information is provided to help project participants use the Z8 FORTH prototyping microcontroller in developing Smart House concepts and equipment. The discussion is technical in nature and assumes some experience with microcontroller devices and the techniques used to develop software for them. 7 refs., 5 tabs.
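
    The sieve of Eratosthenes at the heart of this benchmark is simple enough to sketch. Below is a minimal Python rendition with a repeat-timing loop; the array size 8190 echoes the widely published BYTE/Gilbreath version, but this is not the report's FORTH code, and the report's FORTH-specific optimizations are not applied here.

```python
import time

def sieve(limit):
    """Sieve of Eratosthenes: count the primes up to and including `limit`."""
    if limit < 2:
        return 0
    flags = bytearray([1]) * (limit + 1)
    flags[0] = flags[1] = 0
    for i in range(2, int(limit ** 0.5) + 1):
        if flags[i]:
            # Clear every multiple of i starting at i*i.
            flags[i * i :: i] = bytearray(len(flags[i * i :: i]))
    return sum(flags)

# Time repeated iterations, as microcontroller benchmarks typically do.
start = time.perf_counter()
for _ in range(10):
    count = sieve(8190)
elapsed = time.perf_counter() - start
```

    On a modern desktop interpreter this completes in milliseconds; on the Z8-class hardware of the report, the same algorithm was a meaningful stress test.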

  12. Generating Shifting Workloads to Benchmark Adaptability in Relational Database Systems

    NASA Astrophysics Data System (ADS)

    Rabl, Tilmann; Lang, Andreas; Hackl, Thomas; Sick, Bernhard; Kosch, Harald

    A large body of research concerns the adaptability of database systems. Many commercial systems already contain autonomic processes that adapt configurations as well as data structures and data organization. Yet there is virtually no fair way to measure the quality of such optimizations. While standard benchmarks have been developed that simulate real-world database applications very precisely, none of them considers variations in workloads produced by human factors. Today's benchmarks test the performance of database systems by measuring peak performance on homogeneous request streams. Nevertheless, in systems with user interaction, access patterns are constantly shifting. We present a benchmark that simulates a web information system with interaction by large user groups. It is based on the analysis of a real online eLearning management system with 15,000 users. The benchmark considers the temporal dependency of user interaction. Its main focus is to measure the adaptability of a database management system under shifting workloads. We will give details on our design approach, which uses sophisticated pattern analysis and data mining techniques.
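
    The core idea, a request stream whose class mix drifts over time rather than a homogeneous peak load, can be sketched as follows. This is an illustrative generator under assumed names (QUERY_CLASSES, a sinusoidal drift), not the authors' eLearning-derived workload model.

```python
import math
import random

# Hypothetical request classes for a web information system.
QUERY_CLASSES = ["browse", "search", "submit", "admin"]

def class_weights(t, period=200.0):
    """Each class's weight oscillates with a phase offset, so the dominant
    request type drifts over time instead of staying fixed."""
    return [1.0 + math.sin(2 * math.pi * t / period + k)
            for k in range(len(QUERY_CLASSES))]

def generate_workload(n_requests, seed=0):
    """Yield (timestep, query_class) pairs with a temporally shifting mix."""
    rng = random.Random(seed)
    for t in range(n_requests):
        yield t, rng.choices(QUERY_CLASSES, weights=class_weights(t))[0]
```

    Replaying such a stream against a database and tracking response times as the mix shifts would expose how quickly the system's autonomic tuning adapts.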

  13. Benchmarking Diagnostic Algorithms on an Electrical Power System Testbed

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia, David; Wright, Stephanie

    2009-01-01

    Diagnostic algorithms (DAs) are key to enabling automated health management. These algorithms are designed to detect and isolate anomalies of either a component or the whole system based on observations received from sensors. In recent years a wide range of algorithms, both model-based and data-driven, have been developed to increase autonomy and improve system reliability and affordability. However, the lack of support to perform systematic benchmarking of these algorithms continues to create barriers for effective development and deployment of diagnostic technologies. In this paper, we present our efforts to benchmark a set of DAs on a common platform using a framework that was developed to evaluate and compare various performance metrics for diagnostic technologies. The diagnosed system is an electrical power system, namely the Advanced Diagnostics and Prognostics Testbed (ADAPT) developed and located at the NASA Ames Research Center. The paper presents the fundamentals of the benchmarking framework, the ADAPT system, description of faults and data sets, the metrics used for evaluation, and an in-depth analysis of benchmarking results obtained from testing ten diagnostic algorithms on the ADAPT electrical power system testbed.
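
    As an illustration of the kind of metrics such a framework computes, the sketch below scores hypothetical fault-injection runs by detection rate and mean detection latency. The run format and function names are assumptions for illustration, not the ADAPT framework's actual schema or metric set.

```python
# Runs are (fault_injected_at, detected_at_or_None) pairs, times in seconds;
# a None detection time means the diagnostic algorithm missed the fault.

def score_runs(runs):
    """Compute detection rate and mean detection latency over fault-injection runs."""
    detections = [(inj, det) for inj, det in runs
                  if det is not None and det >= inj]
    detection_rate = len(detections) / len(runs)
    latencies = [det - inj for inj, det in detections]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return detection_rate, mean_latency

rate, latency = score_runs([(10.0, 12.5), (20.0, None), (5.0, 5.8)])
```

    Comparing ten algorithms on a common platform, as the paper does, amounts to running each over the same injected-fault data sets and tabulating metrics of this kind side by side.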

  14. National healthcare capital project benchmarking--an owner's perspective.

    PubMed

    Kahn, Noah

    2009-01-01

    Few sectors of the economy have been left unscathed in these economic times. Healthcare construction has been less affected than the residential and nonresidential construction sectors, but driven by re-evaluation of healthcare system capital plans, projects are now being put on hold or canceled. The industry is searching for ways to improve the value proposition for project delivery and process controls. In other industries, benchmarking component costs has led to significant, sustainable reductions in costs and cost variations. Kaiser Permanente and the Construction Industry Institute (CII), a research component of the University of Texas at Austin and an industry leader in benchmarking, have joined with several other organizations to work on a national benchmarking and metrics program to gauge the performance of healthcare facility projects. This initiative will capture cost, schedule, delivery method, change, functional, operational, and best practice metrics. This program is the only one of its kind. The CII Web-based interactive reporting system enables a company to view its own information and mine industry data. Benchmarking is a tool for continuous improvement that not only grades outcomes but can also inform all aspects of the healthcare design and construction process, and ultimately help moderate the increasing cost of delivering healthcare.

  15. Benchmarking Learning and Teaching: Developing a Method

    ERIC Educational Resources Information Center

    Henderson-Smart, Cheryl; Winning, Tracey; Gerzina, Tania; King, Shalinie; Hyde, Sarah

    2006-01-01

    Purpose: To develop a method for benchmarking teaching and learning in response to an institutional need to validate a new program in Dentistry at the University of Sydney, Australia. Design/methodology/approach: After a collaborative partner, University of Adelaide, was identified, the areas of teaching and learning to be benchmarked, PBL…

  16. Beyond Benchmarking: Value-Adding Metrics

    ERIC Educational Resources Information Center

    Fitz-enz, Jac

    2007-01-01

    HR metrics has grown up a bit over the past two decades, moving away from simple benchmarking practices and toward a more inclusive approach to measuring institutional performance and progress. In this article, the acknowledged "father" of human capital performance benchmarking provides an overview of several aspects of today's HR metrics…

  17. Benchmarking can add up for healthcare accounting.

    PubMed

    Czarnecki, M T

    1994-09-01

    In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.

  18. Advances in Remote Sensing Approaches for Hazard Mitigation and Natural Resource Protection in Pacific Latin America: A Workshop for Advanced Graduate Students, Post- Doctoral Researchers, and Junior Faculty

    NASA Astrophysics Data System (ADS)

    Gierke, J. S.; Rose, W. I.; Waite, G. P.; Palma, J. L.; Gross, E. L.

    2008-12-01

    program in natural hazards (E-Haz). Advancements in research have been made, for example, in using thermal remote sensing methods for studying vent and eruptive processes, and in fusing RADARSAT with ASTER imagery to delineate lineaments in volcanic terrains for siting water wells. While these and other advancements are developed in conjunction with our foreign counterparts, the impacts of this work can be broadened through more comprehensive dissemination activities. Towards this end, we are in the planning phase of a Pan American workshop on applications of remote sensing techniques for natural hazards and water resources management. The workshop will be at least two weeks, sometime in July/August 2009, and involve 30-40 participants, with balanced participation from the U.S. and Latin America. In addition to fundamental aspects of remote sensing and digital image processing, the workshop topics will be presented in the context of new developments for studying volcanic processes and hazards and for characterizing groundwater systems.

  19. LEONA: Transient Luminous Event and Thunderstorm High Energy Emission Collaborative Network in Latin America

    NASA Astrophysics Data System (ADS)

    Sao Sabbas, F. T.

    2012-12-01

    this region to avoid damage due to the South Atlantic Magnetic Anomaly (SAMA). Thus this project is not only a potential benchmark in TLE research, creating a collaborative network in Latin America and nucleating this research locally; it is also strategic, since LEONA's camera network will be able to provide extremely valuable information to fill the gap left by most satellite measurements.

  20. America COMPETES at 5 years: An Analysis of Research-Intensive Universities' RCR Training Plans.

    PubMed

    Phillips, Trisha; Nestor, Franchesca; Beach, Gillian; Heitman, Elizabeth

    2017-03-15

    This project evaluates the impact of the National Science Foundation's (NSF) policy to promote education in the responsible conduct of research (RCR). To determine whether this policy resulted in meaningful RCR educational experiences, our study examined the instructional plans developed by individual universities in response to the mandate. Using a sample of 108 U.S. institutions classified as Carnegie "very high research activity", we analyzed all publicly available NSF RCR training plans in light of the consensus best practices in RCR education that were known at the time the policy was implemented. We found that fewer than half of universities developed plans that incorporated at least some of the best practices. More specifically, only 31% of universities had content and requirements that differed by career stage; only 1% had content and requirements that differed by discipline; and only 18% required some face-to-face engagement from all classes of trainees. Indeed, some schools simply provided hand-outs to their undergraduate students. Most universities (82%) had plans that could be satisfied with online programs such as the Collaborative Institutional Training Initiative's RCR modules. The NSF policy requires universities to develop RCR training plans, but provides no guidelines or requirements for the format, scope, content, duration, or frequency of the training, and does not hold universities accountable for their training plans. Our study shows that this vaguely worded policy, and lack of accountability, has not produced meaningful educational experiences for most of the undergraduate students, graduate students, and post-doctoral trainees funded by the NSF.

  1. A performance benchmark test for geodynamo simulations

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Heien, E. M.

    2013-12-01

    In the last ten years, a number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field. As new models and numerical methods continue to be developed, it is important to update and extend benchmarks for testing these models. The first dynamo benchmark of Christensen et al. (2001) was applied to models based on spherical harmonic expansion methods. However, only a few groups have reported results of the dynamo benchmark using local methods (Harder and Hansen, 2005; Matsui and Okuda, 2005; Chan et al., 2007) because of the difficulty of treating magnetic boundary conditions with local methods. On the other hand, spherical harmonic expansion methods perform poorly on massively parallel computers because global data communications are required for the spherical harmonic expansions to evaluate nonlinear terms. We perform benchmark tests to assess various numerical methods for the next generation of geodynamo simulations. The purpose of this benchmark test is to assess numerical geodynamo models on a massively parallel computational platform. To compare as many numerical methods as possible, we consider the model with the insulated magnetic boundary of Christensen et al. (2001) and with a pseudo-vacuum magnetic boundary, because pseudo-vacuum boundaries are easier to implement with local methods than insulated magnetic boundaries. In the present study, we consider two kinds of benchmarks, a so-called accuracy benchmark and a performance benchmark. In the accuracy benchmark, we compare the dynamo models using the modest Ekman and Rayleigh numbers proposed by Christensen et al. (2001). We investigate the spatial resolution required for each dynamo code to obtain less than 1% difference from the suggested solution of the benchmark test using the two magnetic boundary conditions. In the performance benchmark, we investigate computational performance under the same computational environment.
We perform these

  2. A Seafloor Benchmark for 3-dimensional Geodesy

    NASA Astrophysics Data System (ADS)

    Chadwell, C. D.; Webb, S. C.; Nooner, S. L.

    2014-12-01

    We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. 
More long-lived seafloor geodetic measurements are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone.

  3. Building America Systems Engineering Approach

    SciTech Connect

    2011-12-15

    The Building America Research Teams use a systems engineering approach to achieve higher quality and energy savings in homes. Using these techniques, the energy consumption of new houses can be reduced by 40% or more with little or no impact on the cost of ownership.

  4. New America and Community Colleges

    ERIC Educational Resources Information Center

    Miller, Ben; Fishman, Rachel; McCarthy, Mary Alice

    2015-01-01

    New America is a nonprofit, nonpartisan public policy institute that invests in new thinkers and ideas to address the next generation of challenges facing the United States. Because community college students tend to be underserved by our current higher education structure, much of our research and subsequent policy analysis and recommendations…

  5. Racially Mixed People in America.

    ERIC Educational Resources Information Center

    Root, Maria P. P., Ed.

    This book offers a comprehensive look at the social and psychological adjustment of multiracial people, models for identity development, contemporary immigration and marriage patterns, and methodological issues involved in conducting research with mixed-race people, all in the context of America's multiracial past and present. The following 26…

  6. Huntington's Disease Society of America

    MedlinePlus


  7. Arts Education in America: What the Declines Mean for Arts Participation. Based on the 2008 Survey of Public Participation in the Arts. Research Report #52

    ERIC Educational Resources Information Center

    Rabkin, Nick; Hedberg, E. C.

    2011-01-01

    The Surveys of Public Participation in the Arts (SPPAs), conducted for the National Endowment for the Arts, have shown a steady decline in the rates of adult attendance at most "benchmark" arts events--specifically, classical music and jazz concerts, musical and non-musical plays, opera, and ballet performances--as well as declines in other forms…

  8. Building America Best Practices Series: Builders Challenge Guide to 40% Whole-House Energy Savings in the Marine Climate (Volume 11)

    SciTech Connect

    Pacific Northwest National Laboratory

    2010-09-01

    With the measures described in this guide, builders in the marine climate can build homes that have whole-house energy savings of 40% over the Building America benchmark with no added overall costs for consumers.

  9. Benchmarking--Measuring and Comparing for Continuous Improvement.

    ERIC Educational Resources Information Center

    Henczel, Sue

    2002-01-01

    Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…

  10. Fermilab and Latin America

    NASA Astrophysics Data System (ADS)

    Lederman, Leon M.

    2006-09-01

    As Director of Fermilab, starting in 1979, I began a series of meetings with scientists in Latin America. The motivation was to stir collaboration in the field of high energy particle physics, the central focus of Fermilab. In the next 13 years, these Pan American Symposia stirred much discussion of the use of modern physics, created several groups to do collaborative research at Fermilab, and often centralized facilities and, today, still provides the possibility for much more productive North-South collaboration in research and education. In 1992, I handed these activities over to the AAAS, as President. This would, I hoped, broaden areas of collaboration. Such collaboration is unfortunately very sensitive to political events. In a rational world, it would be the rewards, cultural and economic, of collaboration that would modulate political relations. We are not there yet.

  11. Performance Evaluation and Benchmarking of Intelligent Systems

    SciTech Connect

    del Pobil, Angel; Madhavan, Raj; Bonsignorio, Fabio

    2009-10-01

    Performance Evaluation and Benchmarking of Intelligent Systems presents research dedicated to the subject of performance evaluation and benchmarking of intelligent systems by drawing from the experiences and insights of leading experts gained both through theoretical development and practical implementation of intelligent systems in a variety of diverse application domains. This contributed volume offers a detailed and coherent picture of state-of-the-art, recent developments, and further research areas in intelligent systems. The chapters cover a broad range of applications, such as assistive robotics, planetary surveying, urban search and rescue, and line tracking for automotive assembly. Subsystems or components described in this book include human-robot interaction, multi-robot coordination, communications, perception, and mapping. Chapters are also devoted to simulation support and open source software for cognitive platforms, providing examples of the type of enabling underlying technologies that can help intelligent systems to propagate and increase in capabilities. Performance Evaluation and Benchmarking of Intelligent Systems serves as a professional reference for researchers and practitioners in the field. This book is also applicable to advanced courses for graduate level students and robotics professionals in a wide range of engineering and related disciplines including computer science, automotive, healthcare, manufacturing, and service robotics.

  12. A House Divided? The Psychology of Red and Blue America

    ERIC Educational Resources Information Center

    Seyle, D. Conor; Newman, Matthew L.

    2006-01-01

    Recently it has become commonplace in America for commentators and the public to use the terms "red" and "blue" to refer to perceived cultural differences in America and American politics. Although a political divide may exist in America today, these particular terms are inaccurate and reductive. This article presents research from social…

  13. ICSBEP Benchmarks For Nuclear Data Applications

    NASA Astrophysics Data System (ADS)

    Briggs, J. Blair

    2005-05-01

    The International Criticality Safety Benchmark Evaluation Project (ICSBEP) was initiated in 1992 by the United States Department of Energy. The ICSBEP became an official activity of the Organization for Economic Cooperation and Development (OECD) — Nuclear Energy Agency (NEA) in 1995. Representatives from the United States, United Kingdom, France, Japan, the Russian Federation, Hungary, Republic of Korea, Slovenia, Serbia and Montenegro (formerly Yugoslavia), Kazakhstan, Spain, Israel, Brazil, Poland, and the Czech Republic are now participating. South Africa, India, China, and Germany are considering participation. The purpose of the ICSBEP is to identify, evaluate, verify, and formally document a comprehensive and internationally peer-reviewed set of criticality safety benchmark data. The work of the ICSBEP is published as an OECD handbook entitled "International Handbook of Evaluated Criticality Safety Benchmark Experiments." The 2004 Edition of the Handbook contains benchmark specifications for 3331 critical or subcritical configurations that are intended for use in validation efforts and for testing basic nuclear data. New to the 2004 Edition of the Handbook is a draft criticality alarm / shielding type benchmark that should be finalized in 2005 along with two other similar benchmarks. The Handbook is being used extensively for nuclear data testing and is expected to be a valuable resource for code and data validation and improvement efforts for decades to come. Specific benchmarks that are useful for testing structural materials such as iron, chromium, nickel, and manganese; beryllium; lead; thorium; and 238U are highlighted.

  14. Effective File I/O Bandwidth Benchmark

    SciTech Connect

    Rabenseifner, R.; Koniges, A.E.

    2000-02-15

    The effective I/O bandwidth benchmark (b_eff_io) covers two goals: (1) to achieve a characteristic average number for the I/O bandwidth achievable with parallel MPI-I/O applications, and (2) to get detailed information about several access patterns and buffer lengths. The benchmark examines "first write", "rewrite" and "read" access, strided (individual and shared pointers) and segmented collective patterns on one file per application, and non-collective access to one file per process. The number of parallel accessing processes is also varied, and well-formed I/O is compared with non-well-formed. On systems meeting the rule that the total memory can be written to disk in 10 minutes, the benchmark should not need more than 15 minutes for a first pass of all patterns. The benchmark is designed analogously to the effective bandwidth benchmark for message passing (b_eff) that characterizes the message-passing capabilities of a system in a few minutes. First results of the b_eff_io benchmark are given for IBM SP and Cray T3E systems and compared with existing benchmarks based on parallel POSIX I/O.
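
    As a rough serial analogue of the benchmark's three access types, the sketch below times "first write", "rewrite", and "read" passes over a single file and reports MiB/s for each. This is a toy single-process illustration, not the MPI-I/O benchmark itself; the strided, segmented, and collective patterns that b_eff_io actually exercises are omitted.

```python
import os
import tempfile
import time

def io_bandwidth(n_bytes=1 << 20, block=64 * 1024):
    """Time three access phases on one file and return MiB/s per phase."""
    buf = b"\0" * block
    path = os.path.join(tempfile.mkdtemp(), "bench.dat")
    results = {}
    for phase in ("first write", "rewrite", "read"):
        mode = {"first write": "wb", "rewrite": "r+b", "read": "rb"}[phase]
        start = time.perf_counter()
        with open(path, mode) as f:
            for _ in range(n_bytes // block):
                if phase == "read":
                    f.read(block)
                else:
                    f.write(buf)
            if phase != "read":
                f.flush()
                os.fsync(f.fileno())  # force data to disk before stopping the clock
        elapsed = time.perf_counter() - start
        results[phase] = (n_bytes / (1 << 20)) / elapsed
    return results
```

    Distinguishing "first write" from "rewrite" matters because file systems often pay allocation costs only on the first pass, which is one reason the real benchmark separates the two.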

  15. Benchmarking Measures of Network Influence

    NASA Astrophysics Data System (ADS)

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-09-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures.
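
    The TKO idea can be illustrated with a deterministic susceptible-infected spread over a time-ordered edge list, a simplification of the stochastic SIR/SIS dynamics the paper uses: each node's score is the drop in final outbreak size when that node is knocked out.

```python
def spread(temporal_edges, seed, removed=None):
    """Deterministic SI propagation over time-ordered (time, u, v) edges:
    when an edge occurs and either endpoint is infected, both become infected."""
    infected = set() if seed == removed else {seed}
    for _, u, v in sorted(temporal_edges):
        if removed in (u, v):
            continue
        if u in infected or v in infected:
            infected.update((u, v))
    return len(infected)

def tko_scores(temporal_edges, seed):
    """Temporal knockout: reduction in final outbreak size per removed node."""
    nodes = {n for _, u, v in temporal_edges for n in (u, v)}
    base = spread(temporal_edges, seed)
    return {n: base - spread(temporal_edges, seed, removed=n)
            for n in nodes if n != seed}
```

    On the chain of timed contacts a-b (t=1), b-c (t=2), c-d (t=3) seeded at a, node b scores highest because removing it blocks all downstream transmission, which is exactly the temporal-ordering effect that flattened static graphs miss.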

  16. Benchmarking pKa prediction

    PubMed Central

    Davies, Matthew N; Toseland, Christopher P; Moss, David S; Flower, Darren R

    2006-01-01

    Background pKa values are a measure of the protonation of ionizable groups in proteins. Ionizable groups are involved in intra-protein, protein-solvent and protein-ligand interactions as well as solubility, protein folding and catalytic activity. The pKa shift of a group from its intrinsic value is determined by the perturbation of the residue by the environment and can be calculated from three-dimensional structural data. Results Here we use a large dataset of experimentally-determined pKas to analyse the performance of different prediction techniques. Our work provides a benchmark of available software implementations: MCCE, MEAD, PROPKA and UHBD. Combinatorial and regression analysis is also used in an attempt to find a consensus approach towards pKa prediction. The tendency of individual programs to over- or underpredict the pKa value is related to the underlying methodology of the individual programs. Conclusion Overall, PROPKA is more accurate than the other three programs. Key to developing accurate predictive software will be a complete sampling of conformations accessible to protein structures. PMID:16749919
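
    A benchmark of this kind ultimately reduces to comparing each program's predictions against experimentally determined pKas with an error statistic. The sketch below ranks two hypothetical programs by root-mean-square error; the values are invented for illustration and are not taken from the study's dataset.

```python
import math

def rmse(predicted, experimental):
    """Root-mean-square error between predicted and experimental pKa values."""
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(predicted, experimental))
                     / len(experimental))

# Invented example values (not the study's data).
experimental = [4.0, 6.5, 10.2, 3.4]
predictions = {
    "program_a": [4.3, 6.0, 10.9, 3.1],
    "program_b": [5.0, 7.5, 9.0, 4.4],
}

scores = {name: rmse(pred, experimental) for name, pred in predictions.items()}
best = min(scores, key=scores.get)
```

    The study's combinatorial and regression analyses build on exactly this kind of per-program error table, searching for a weighted consensus that beats any single predictor.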

  17. Benchmark problems in computational aeroacoustics

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1994-01-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  18. Benchmark problems in computational aeroacoustics

    NASA Astrophysics Data System (ADS)

    Porter-Locklear, Freda

    1994-12-01

    A recent directive at NASA Langley is aimed at numerically predicting principal noise sources. During my summer stay, I worked with a high-order ENO code, developed by Dr. Harold Atkins, for solving the unsteady compressible Navier-Stokes equations as it applies to computational aeroacoustics (CAA). A CAA workshop, composed of six categories of benchmark problems, has been organized to test various numerical properties of code. My task was to determine the robustness of Atkins' code for these test problems. In one category, we tested the nonlinear wave propagation of the code for the one-dimensional Euler equations, with initial pressure, density, and velocity conditions. Using freestream boundary conditions, our results were plausible. In another category, we solved the linearized two-dimensional Euler equations to test the effectiveness of radiation boundary conditions. Here we utilized MAPLE to compute eigenvalues and eigenvectors of the Jacobian given the variable and flux vectors. We experienced a minor problem with inflow and outflow boundary conditions. Next, we solved the quasi-one-dimensional unsteady flow equations with an incoming acoustic wave of amplitude 10^-6. The small-amplitude sound wave was incident on a convergent-divergent nozzle. After finding a steady-state solution and then marching forward, our solution indicated that after 30 periods the acoustic wave had dissipated (a period is the time required for the sound wave to traverse the nozzle from one end to the other).

  19. Benchmarking Measures of Network Influence

    PubMed Central

    Bramson, Aaron; Vandermarliere, Benjamin

    2016-01-01

    Identifying key agents for the transmission of diseases (ideas, technology, etc.) across social networks has predominantly relied on measures of centrality on a static base network or a temporally flattened graph of agent interactions. Various measures have been proposed as the best trackers of influence, such as degree centrality, betweenness, and k-shell, depending on the structure of the connectivity. We consider SIR and SIS propagation dynamics on a temporally-extruded network of observed interactions and measure the conditional marginal spread as the change in the magnitude of the infection given the removal of each agent at each time: its temporal knockout (TKO) score. We argue that this TKO score is an effective benchmark measure for evaluating the accuracy of other, often more practical, measures of influence. We find that none of the network measures applied to the induced flat graphs are accurate predictors of network propagation influence on the systems studied; however, temporal networks and the TKO measure provide the requisite targets for the search for effective predictive measures. PMID:27670635

  20. Benchmarking for Bayesian Reinforcement Learning

    PubMed Central

    Ernst, Damien; Couëtoux, Adrien

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the rewards collected while interacting with their environment, making use of prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open-source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. To enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirements of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed. PMID:27304891

  1. Benchmarking for Bayesian Reinforcement Learning.

    PubMed

    Castronovo, Michael; Ernst, Damien; Couëtoux, Adrien; Fonteneau, Raphael

    2016-01-01

    In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise the rewards collected while interacting with their environment, making use of prior knowledge accessed beforehand. Many BRL algorithms have already been proposed, but the benchmarks used to compare them are only relevant for specific cases. The paper addresses this problem and provides a new BRL comparison methodology along with the corresponding open-source library. In this methodology, a comparison criterion is defined that measures the performance of algorithms on large sets of Markov Decision Processes (MDPs) drawn from some probability distributions. To enable the comparison of non-anytime algorithms, our methodology also includes a detailed analysis of the computation time requirements of each algorithm. Our library is released with all source code and documentation: it includes three test problems, each with two different prior distributions, and seven state-of-the-art RL algorithms. Finally, our library is illustrated by comparing all the available algorithms, and the results are discussed.

  2. Developing integrated benchmarks for DOE performance measurement

    SciTech Connect

    Barancik, J.I.; Kramer, C.F.; Thode, Jr. H.C.

    1992-09-30

    The objectives of this task were to describe and evaluate selected existing sources of information on occupational safety and health, with emphasis on hazard and exposure assessment, abatement, training, reporting, and control, identifying factors for exposure and outcome, in preparation for developing DOE performance benchmarks. Existing resources and methodologies were assessed for their potential use as practical performance benchmarks. Strengths and limitations of current data resources were identified. Guidelines were outlined for developing new or improved performance factors, which then could become the basis for selecting performance benchmarks. Data bases for non-DOE comparison populations were identified so that DOE performance could be assessed relative to non-DOE occupational and industrial groups. Systems approaches were described which can be used to link hazards and exposure, event occurrence, and adverse outcome factors, as needed to generate valid, reliable, and predictive performance benchmarks. Data bases were identified which contain information relevant to one or more performance assessment categories. A list of 72 potential performance benchmarks was prepared to illustrate the kinds of information that can be produced through a benchmark development program. Current information resources which may be used to develop potential performance benchmarks are limited. There is a need to develop an occupational safety and health information and data system in DOE which is capable of incorporating demonstrated and documented performance benchmarks prior to, or concurrent with, the development of hardware and software. A key to the success of this systems approach is rigorous development and demonstration of performance benchmark equivalents to users of such data before system hardware and software commitments are institutionalized.

  3. Clinically meaningful performance benchmarks in MS

    PubMed Central

    Motl, Robert W.; Scagnelli, John; Pula, John H.; Sosnoff, Jacob J.; Cadavid, Diego

    2013-01-01

    Objective: Identify and validate clinically meaningful Timed 25-Foot Walk (T25FW) performance benchmarks in individuals living with multiple sclerosis (MS). Methods: Cross-sectional study of 159 MS patients first identified candidate T25FW benchmarks. To characterize the clinical meaningfulness of T25FW benchmarks, we ascertained their relationships to real-life anchors, functional independence, and physiologic measurements of gait and disease progression. Candidate T25FW benchmarks were then prospectively validated in 95 subjects using 13 measures of ambulation and cognition, patient-reported outcomes, and optical coherence tomography. Results: T25FW of 6 to 7.99 seconds was associated with a change in occupation due to MS, occupational disability, walking with a cane, and needing “some help” with instrumental activities of daily living; T25FW ≥8 seconds was associated with collecting Supplemental Security Income and government health care, walking with a walker, and inability to do instrumental activities of daily living. During prospective benchmark validation, we trichotomized data by T25FW benchmarks (<6 seconds, 6–7.99 seconds, and ≥8 seconds) and found group main effects on 12 of 13 objective and subjective measures (p < 0.05). Conclusions: Using a cross-sectional design, we identified 2 clinically meaningful T25FW benchmarks of ≥6 seconds (6–7.99) and ≥8 seconds. Longitudinal and larger studies are needed to confirm the clinical utility and relevance of these proposed T25FW benchmarks and to parse out whether there are additional benchmarks in the lower (<6 seconds) and higher (>10 seconds) ranges of performance. PMID:24174581
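
    The trichotomization by the proposed T25FW benchmarks is straightforward to encode; a minimal sketch (the function name and the band labels are illustrative, not from the study):

```python
def t25fw_category(seconds):
    """Assign a Timed 25-Foot Walk result to one of the study's proposed
    clinically meaningful benchmark bands: <6 s, 6-7.99 s, >=8 s."""
    if seconds < 6:
        return "<6 s"
    if seconds < 8:
        return "6-7.99 s"
    return ">=8 s"
```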

  4. Population normative data for the 10/66 Dementia Research Group cognitive test battery from Latin America, India and China: a cross-sectional survey

    PubMed Central

    Sosa, Ana Luisa; Albanese, Emiliano; Prince, Martin; Acosta, Daisy; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; de Rodriguez, Juan Llibre; Salas, Aquiles; Yang, Fang; Gaona, Ciro; Joteeshwaran, AT; Rodriguez, Guillermina; de la Torre, Gabriela Rojas; Williams, Joseph D; Stewart, Robert

    2009-01-01

    Background 1) To report site-specific normative values by age, sex and educational level for four components of the 10/66 Dementia Research Group cognitive test battery; 2) to estimate the main and interactive effects of age, sex, and educational level by site; and 3) to investigate the effect of site by region and by rural or urban location. Methods Population-based cross-sectional one phase catchment area surveys were conducted in Cuba, Dominican Republic, Venezuela, Peru, Mexico, China and India. The protocol included the administration of the Community Screening Instrument for Dementia (CSI 'D', generating the COGSCORE measure of global function), and the Consortium to Establish a Registry for Alzheimer's Disease (CERAD) verbal fluency (VF), word list memory (WLM, immediate recall) and recall (WLR, delayed recall) tests. Only those free of dementia were included in the analysis. Results Older people and those with less education performed worse on all four tests. The effect of sex was much smaller and less consistent. There was a considerable effect of site after accounting for compositional differences in age, education and sex. Much of this was accounted for by the effect of region, with Chinese participants performing better, and Indian participants worse, than those from Latin America. The effect of region was more prominent for VF and WLM than for COGSCORE and WLR. Conclusion Cognitive assessment is a basic element for dementia diagnosis. Age- and education-specific norms are required for this purpose, while the effect of gender can probably be ignored. The basis of cultural effects is poorly understood, but our findings serve to emphasise that normative data may not be safely generalised from one population to another with quite different characteristics. The minimal effects of region on COGSCORE and WLR are reassuring with respect to the cross-cultural validity of the 10/66 dementia diagnosis, which uses only these elements of the 10/66 battery. PMID

  5. Status of the Prevention of Multidrug-Resistant Organisms in International Settings: A Survey of the Society for Healthcare Epidemiology of America Research Network.

    PubMed

    Safdar, Nasia; Sengupta, Sharmila; Musuuza, Jackson S; Juthani-Mehta, Manisha; Drees, Marci; Abbo, Lilian M; Milstone, Aaron M; Furuno, Jon P; Varman, Meera; Anderson, Deverick J; Morgan, Daniel J; Miller, Loren G; Snyder, Graham M

    2017-01-01

    OBJECTIVE To examine self-reported practices and policies to reduce infection and transmission of multidrug-resistant organisms (MDRO) in healthcare settings outside the United States. DESIGN Cross-sectional survey. PARTICIPANTS International members of the Society for Healthcare Epidemiology of America (SHEA) Research Network. METHODS Electronic survey of infection control and prevention practices, capabilities, and barriers outside the United States and Canada. Participants were stratified according to their country's economic development status as defined by the World Bank as low-income, lower-middle-income, upper-middle-income, and high-income. RESULTS A total of 76 respondents (33%) of 229 SHEA members outside the United States and Canada completed the survey questionnaire, representing 30 countries. Forty (53%) were high-, 33 (43%) were middle-, and 1 (1%) was a low-income country. Country data were missing for 2 respondents (3%). Of the 76 respondents, 64 (84%) reported having a formal or informal antibiotic stewardship program at their institution. High-income countries were more likely than middle-income countries to have existing MDRO policies (39/64 [61%] vs 25/64 [39%], P=.003) and to place patients with MDRO in contact precautions (40/72 [56%] vs 31/72 [44%], P=.05). Major barriers to preventing MDRO transmission included constrained resources (infrastructure, supplies, and trained staff) and challenges in changing provider behavior. CONCLUSIONS In this survey, a substantial proportion of institutions reported encountering barriers to implementing key MDRO prevention strategies. Interventions to address capacity building internationally are urgently needed. Data on the infection prevention practices of low-income countries are needed. Infect Control Hosp Epidemiol. 2016:1-8.

  6. Translating Ecology, Physiology, Biochemistry, and Population Genetics Research to Meet the Challenge of Tick and Tick-Borne Diseases in North America.

    PubMed

    Esteve-Gassent, Maria D; Castro-Arellano, Ivan; Feria-Arroyo, Teresa P; Patino, Ramiro; Li, Andrew Y; Medina, Raul F; de León, Adalberto A Pérez; Rodríguez-Vivas, Roger Iván

    2016-05-01

    Emerging and re-emerging tick-borne diseases threaten public health and the wellbeing of domestic animals and wildlife globally. The adoption of an evolutionary ecology framework aimed at diminishing the impact of tick-borne diseases needs to be part of strategies to protect human and animal populations. We present a review of current knowledge on the adaptation of ticks to their environment, and the impact that global change could have on their geographic distribution in North America. Environmental pressures will affect tick population genetics by selecting genotypes able to withstand new and changing environments and by altering the connectivity and isolation of several tick populations. Research in these areas is particularly lacking in the southern United States and most of Mexico, with knowledge gaps on the ecology of these diseases, including a void in the identity of reservoir hosts for several tick-borne pathogens. Additionally, the way in which anthropogenic changes to landscapes may influence tick-borne disease ecology remains to be fully understood. Enhanced knowledge in these areas is needed in order to implement effective and sustainable integrated tick management strategies. We propose to refocus ecology studies with emphasis on metacommunity-based approaches to enable a holistic perspective addressing whole pathogen and host assemblages. Network analyses could be used to develop mechanistic models involving multihost-pathogen communities. An increase in our understanding of the ecology of tick-borne diseases across their geographic distribution will aid in the design of effective area-wide tick control strategies aimed at diminishing the burden of pathogens transmitted by ticks.

  7. Latin America, Ibero-America, Hispanic America, Spanish America, Portuguese America: A Proposal for Terminological Standardization

    ERIC Educational Resources Information Center

    Gold, David L.

    1977-01-01

    Latin America is the most widely used of the terms, but a more precise designation, at least in scholarly usage, would be useful. Distinctions are made on the basis of language in favor of the terms in the title. A bibliographical note for further reference is appended. (AMH)

  8. Updates to the integrated protein-protein interaction benchmarks: Docking benchmark version 5 and affinity benchmark version 2

    PubMed Central

    Vreven, Thom; Moal, Iain H.; Vangone, Anna; Pierce, Brian G.; Kastritis, Panagiotis L.; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A.; Fernandez-Recio, Juan; Bonvin, Alexandre M.J.J.; Weng, Zhiping

    2015-01-01

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally-measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top ten docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases, and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall, and r=0.72 for the rigid complexes. PMID:26231283

  9. Updates to the Integrated Protein-Protein Interaction Benchmarks: Docking Benchmark Version 5 and Affinity Benchmark Version 2.

    PubMed

    Vreven, Thom; Moal, Iain H; Vangone, Anna; Pierce, Brian G; Kastritis, Panagiotis L; Torchala, Mieczyslaw; Chaleil, Raphael; Jiménez-García, Brian; Bates, Paul A; Fernandez-Recio, Juan; Bonvin, Alexandre M J J; Weng, Zhiping

    2015-09-25

    We present an updated and integrated version of our widely used protein-protein docking and binding affinity benchmarks. The benchmarks consist of non-redundant, high-quality structures of protein-protein complexes along with the unbound structures of their components. Fifty-five new complexes were added to the docking benchmark, 35 of which have experimentally measured binding affinities. These updated docking and affinity benchmarks now contain 230 and 179 entries, respectively. In particular, the number of antibody-antigen complexes has increased significantly, by 67% and 74% in the docking and affinity benchmarks, respectively. We tested previously developed docking and affinity prediction algorithms on the new cases. Considering only the top 10 docking predictions per benchmark case, a prediction accuracy of 38% is achieved on all 55 cases and up to 50% for the 32 rigid-body cases only. Predicted affinity scores are found to correlate with experimental binding energies up to r=0.52 overall and r=0.72 for the rigid complexes.
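
    The two headline metrics reported above, top-N docking success rate and the correlation of predicted scores with experimental binding energies, can be sketched as below. The input conventions (lists of 1-based ranks of acceptable predictions per case) are assumptions for illustration, not the benchmark's actual data format.

```python
def topn_success_rate(cases, n=10):
    """Fraction of benchmark cases with at least one acceptable model among
    the top-n ranked docking predictions. Each case is the list of (1-based)
    ranks at which acceptable predictions appear; an empty list means no
    acceptable model was produced for that case."""
    hits = sum(1 for ranks in cases if any(rank <= n for rank in ranks))
    return hits / len(cases)

def pearson_r(xs, ys):
    """Pearson correlation between predicted affinity scores and
    experimentally measured binding energies."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```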

  10. The skyshine benchmark experiment revisited.

    PubMed

    Terry, Ian R

    2005-01-01

    With the coming renaissance of nuclear power, heralded by new nuclear power plant construction in Finland, the issue of qualifying modern calculation tools becomes prominent. Among the calculations required may be the determination of radiation levels outside the plant owing to skyshine. For example, knowledge of the degree of accuracy in the calculation of gamma skyshine through the turbine hall roof of a BWR plant is important. Modern survey programs which can calculate skyshine dose rates tend to be qualified only by verification against the results of Monte Carlo calculations. However, in the past, exacting experimental work has been performed in the field for gamma skyshine, notably the benchmark work in 1981 by Shultis and co-workers, which considered not just the open source case but also the effects of placing a concrete roof above the source enclosure. The latter case is a better reflection of reality, as safety considerations nearly always require the source to be shielded in some way, usually by substantial walls but only by a thinner roof. One of the tools developed since that time, which can both calculate skyshine radiation and accurately model the geometrical set-up of an experiment, is the code RANKERN, which is used by Framatome ANP and other organisations for general shielding design work. The following description concerns the use of this code to re-address the experimental results from 1981. This then provides a realistic gauge to validate, but also to set limits on, the program for future gamma skyshine applications within the applicable licensing procedures for all users of the code.

  11. Single pin BWR benchmark problem for coupled Monte Carlo - Thermal hydraulics analysis

    SciTech Connect

    Ivanov, A.; Sanchez, V.; Hoogenboom, J. E.

    2012-07-01

    As part of the European NURISP research project, a single pin BWR benchmark problem was defined. The aim of this initiative is to test the coupling strategies between Monte Carlo and subchannel codes developed by different project participants. In this paper the results obtained by the Delft Univ. of Technology and Karlsruhe Inst. of Technology will be presented. The benchmark problem was simulated with the following coupled codes: TRIPOLI-SUBCHANFLOW, MCNP-FLICA, MCNP-SUBCHANFLOW, and KENO-SUBCHANFLOW. (authors)

  12. XWeB: The XML Warehouse Benchmark

    NASA Astrophysics Data System (ADS)

    Mahboubi, Hadj; Darmont, Jérôme

    With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.

  13. NAS Grid Benchmarks. 1.0

    NASA Technical Reports Server (NTRS)

    VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.

  14. Social benchmarking to improve river ecosystems.

    PubMed

    Cary, John; Pisarski, Anne

    2011-01-01

    To complement physical measures or indices of river health a social benchmarking instrument has been developed to measure community dispositions and behaviour regarding river health. This instrument seeks to achieve three outcomes. First, to provide a benchmark of the social condition of communities' attitudes, values, understanding and behaviours in relation to river health; second, to provide information for developing management and educational priorities; and third, to provide an assessment of the long-term effectiveness of community education and engagement activities in achieving changes in attitudes, understanding and behaviours in relation to river health. In this paper the development of the social benchmarking instrument is described and results are presented from the first state-wide benchmark study in Victoria, Australia, in which the social dimensions of river health, community behaviours related to rivers, and community understanding of human impacts on rivers were assessed.

  15. Benchmarking ENDF/B-VII.0

    NASA Astrophysics Data System (ADS)

    van der Marck, Steven C.

    2006-12-01

    The new major release VII.0 of the ENDF/B nuclear data library has been tested extensively using benchmark calculations. These were based upon MCNP-4C3 continuous-energy Monte Carlo neutronics simulations, together with nuclear data processed using the code NJOY. Three types of benchmarks were used, viz., criticality safety benchmarks, (fusion) shielding benchmarks, and reference systems for which the effective delayed neutron fraction is reported. For criticality safety, more than 700 benchmarks from the International Handbook of Criticality Safety Benchmark Experiments were used. Benchmarks from all categories were used, ranging from low-enriched uranium, compound fuel, thermal spectrum ones (LEU-COMP-THERM), to mixed uranium-plutonium, metallic fuel, fast spectrum ones (MIX-MET-FAST). For fusion shielding many benchmarks were based on IAEA specifications for the Oktavian experiments (for Al, Co, Cr, Cu, LiF, Mn, Mo, Si, Ti, W, Zr), Fusion Neutronics Source in Japan (for Be, C, N, O, Fe, Pb), and Pulsed Sphere experiments at Lawrence Livermore National Laboratory (for 6Li, 7Li, Be, C, N, O, Mg, Al, Ti, Fe, Pb, D2O, H2O, concrete, polyethylene and teflon). For testing delayed neutron data more than thirty measurements in widely varying systems were used. Among these were measurements in the Tank Critical Assembly (TCA in Japan) and IPEN/MB-01 (Brazil), both with a thermal spectrum, and two cores in Masurca (France) and three cores in the Fast Critical Assembly (FCA, Japan), all with fast spectra. In criticality safety, many benchmarks were chosen from the category with a thermal spectrum, low-enriched uranium, compound fuel (LEU-COMP-THERM), because this is typical of most current-day reactors, and because these benchmarks were previously underpredicted by as much as 0.5% by most nuclear data libraries (such as ENDF/B-VI.8, JEFF-3.0). The calculated results presented here show that this underprediction is no longer there for ENDF/B-VII.0. The average over 257
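
    A common summary statistic in this kind of library testing is the mean calculated-over-experimental (C/E) ratio of k-eff across benchmark cases; a minimal sketch (the function name is illustrative):

```python
def average_ce(results):
    """Mean calculated-over-experimental (C/E) ratio of k-eff across a set
    of criticality benchmark cases. results is a list of
    (calculated_keff, experimental_keff) pairs."""
    return sum(calc / expt for calc, expt in results) / len(results)
```

    An average C/E of, say, 0.995 over a benchmark category corresponds to the 0.5% underprediction mentioned above.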

  16. Aquatic Life Benchmarks for Pesticide Registration

    EPA Pesticide Factsheets

    Each Aquatic Life Benchmark is based on the most sensitive, scientifically acceptable toxicity endpoint available to EPA for a given taxon (for example, freshwater fish), selected from all scientifically acceptable toxicity data available to EPA.
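
    The selection rule described here, the most sensitive acceptable endpoint per taxon, amounts to taking a minimum over acceptable data. A hedged sketch follows; the function name and input format are assumptions for illustration, not EPA's procedure in full:

```python
def most_sensitive_benchmarks(endpoints):
    """For each taxon, select the most sensitive (lowest-concentration)
    endpoint among the scientifically acceptable ones. endpoints is a
    list of (taxon, concentration, acceptable) tuples; concentrations
    are assumed to share one unit (e.g. ug/L)."""
    benchmarks = {}
    for taxon, concentration, acceptable in endpoints:
        if not acceptable:
            continue  # unacceptable studies never set the benchmark
        if taxon not in benchmarks or concentration < benchmarks[taxon]:
            benchmarks[taxon] = concentration
    return benchmarks
```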

  17. Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D. (Editor)

    2000-01-01

    The proceedings of the Third Computational Aeroacoustics (CAA) Workshop on Benchmark Problems cosponsored by the Ohio Aerospace Institute and the NASA Glenn Research Center are the subject of this report. Fan noise was the chosen theme for this workshop with representative problems encompassing four of the six benchmark problem categories. The other two categories were related to jet noise and cavity noise. For the first time in this series of workshops, the computational results for the cavity noise problem were compared to experimental data. All the other problems had exact solutions, which are included in this report. The Workshop included a panel discussion by representatives of industry. The participants gave their views on the status of applying computational aeroacoustics to solve practical industry related problems and what issues need to be addressed to make CAA a robust design tool.

  18. Systematic Benchmarking of Diagnostic Technologies for an Electrical Power System

    NASA Technical Reports Server (NTRS)

    Kurtoglu, Tolga; Jensen, David; Poll, Scott

    2009-01-01

    Automated health management is a critical functionality for complex aerospace systems. A wide variety of diagnostic algorithms have been developed to address this technical challenge. Unfortunately, the lack of support to perform large-scale V&V (verification and validation) of diagnostic technologies continues to create barriers to effective development and deployment of such algorithms for aerospace vehicles. In this paper, we describe a formal framework developed for benchmarking of diagnostic technologies. The diagnosed system is the Advanced Diagnostics and Prognostics Testbed (ADAPT), a real-world electrical power system (EPS), developed and maintained at the NASA Ames Research Center. The benchmarking approach provides a systematic, empirical basis to the testing of diagnostic software and is used to provide performance assessment for different diagnostic algorithms.

  19. Synthetic benchmarks for machine olfaction: Classification, segmentation and sensor damage

    PubMed Central

    Ziyatdinov, Andrey; Perera, Alexandre

    2015-01-01

    The design of signal and data processing algorithms requires a validation stage and data relevant to a validation procedure. While sharing public data sets and making use of them is a recent and still ongoing practice in the community, the synthetic benchmarks presented here are an option for researchers who need data for testing and comparing algorithms under development. The collection of synthetic benchmark data sets was generated for classification, segmentation and sensor damage scenarios, each defined at 5 difficulty levels. The published data are related to the data simulation tool, which was used to create a virtual array of 1020 sensors with a default set of parameters [1]. PMID:26217732

  20. A benchmark for fault tolerant flight control evaluation

    NASA Astrophysics Data System (ADS)

    Smaili, H.; Breeman, J.; Lombaerts, T.; Stroosma, O.

    2013-12-01

    A large transport aircraft simulation benchmark (REconfigurable COntrol for Vehicle Emergency Return - RECOVER) has been developed within the GARTEUR (Group for Aeronautical Research and Technology in Europe) Flight Mechanics Action Group 16 (FM-AG(16)) on Fault Tolerant Control (2004-2008) for the integrated evaluation of fault detection and identification (FDI) and reconfigurable flight control strategies. The benchmark includes a suitable set of assessment criteria and failure cases, based on reconstructed accident scenarios, to assess the potential of new adaptive control strategies to improve aircraft survivability. The application of reconstruction and modeling techniques, based on accident flight data, has resulted in high-fidelity nonlinear aircraft and fault models to evaluate new Fault Tolerant Flight Control (FTFC) concepts and their real-time performance to accommodate in-flight failures.

  1. Benchmarking strategies for measuring the quality of healthcare: problems and prospects.

    PubMed

    Lovaglio, Pietro Giorgio

    2012-01-01

    Over the last few years, increasing attention has been directed toward the problems inherent to measuring the quality of healthcare and implementing benchmarking strategies. Besides offering accreditation and certification processes, recent approaches measure the performance of healthcare institutions in order to evaluate their effectiveness, defined as the capacity to provide treatment that modifies and improves the patient's state of health. This paper, dealing with hospital effectiveness, focuses on research methods for effectiveness analyses within a strategy comparing different healthcare institutions. The paper, after having introduced readers to the principal debates on benchmarking strategies, which depend on the perspective and type of indicators used, focuses on the methodological problems related to performing consistent benchmarking analyses. In particular, statistical methods suitable for controlling case-mix, analyzing aggregate data, rare events, and continuous outcomes measured with error are examined. Specific challenges of benchmarking strategies, such as the risk of risk adjustment (case-mix fallacy, underreporting, risk of comparing noncomparable hospitals), selection bias, and possible strategies for the development of consistent benchmarking analyses, are discussed. Finally, to demonstrate the feasibility of the illustrated benchmarking strategies, an application focused on determining regional benchmarks for patient satisfaction (using the 2009 Lombardy Region Patient Satisfaction Questionnaire) is proposed.

  2. Benchmarking real-time HEVC streaming

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos

    2012-06-01

    reduction in terms of PSNR which results from a reduction in available bandwidth. To the best of our knowledge, this is the first time that such a fully functional streaming system for HEVC, together with the benchmark evaluation results, has been reported. This study will open up more timely research opportunities in this cutting edge area.

  3. Benchmark for license plate character segmentation

    NASA Astrophysics Data System (ADS)

    Gonçalves, Gabriel Resende; da Silva, Sirlene Pio Gomes; Menotti, David; Shwartz, William Robson

    2016-09-01

    Automatic license plate recognition (ALPR) has been the focus of much research in recent years. In general, ALPR is divided into the following problems: detection of on-track vehicles, license plate detection, segmentation of license plate characters, and optical character recognition (OCR). Even though commercial solutions are available for controlled acquisition conditions, e.g., the entrance of a parking lot, ALPR is still an open problem when dealing with data acquired from uncontrolled environments, such as roads and highways, when relying only on imaging sensors. Due to the multiple orientations and scales of the license plates captured by the camera, a very challenging task in ALPR is the license plate character segmentation (LPCS) step, because its effectiveness is required to be (near) optimal to achieve a high recognition rate by the OCR. To tackle the LPCS problem, this work proposes a benchmark composed of a dataset designed to focus specifically on the character segmentation step of ALPR within an evaluation protocol. Furthermore, we propose the Jaccard-centroid coefficient, an evaluation measure more suitable than the Jaccard coefficient regarding the location of the bounding box within the ground-truth annotation. The dataset is composed of 2,000 Brazilian license plates consisting of 14,000 alphanumeric symbols and their corresponding bounding box annotations. We also present a straightforward approach to perform LPCS efficiently. Finally, we provide an experimental evaluation for the dataset based on five LPCS approaches and demonstrate the importance of character segmentation for achieving an accurate OCR.
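
    The plain Jaccard coefficient that the proposed measure builds on can be sketched for axis-aligned boxes as follows. This is a hedged illustration: the paper's Jaccard-centroid variant, which additionally accounts for the location of the bounding box within the ground-truth annotation, is not reproduced here.

```python
def jaccard(box_a, box_b):
    """Plain Jaccard (intersection-over-union) coefficient between two
    axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0
```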

  4. The Consortium of Advanced Residential Buildings (CARB) - A Building America Energy Efficient Housing Partnership

    SciTech Connect

    Robb Aldrich; Lois Arena; Dianne Griffiths; Srikanth Puttagunta; David Springer

    2010-12-31

    This final report summarizes the work conducted by the Consortium of Advanced Residential Buildings (CARB) (http://www.carb-swa.com/), one of the 'Building America Energy Efficient Housing Partnership' Industry Teams, for the period January 1, 2008 to December 31, 2010. The Building America Program (BAP) is part of the Department of Energy (DOE), Energy Efficiency and Renewable Energy, Building Technologies Program (BTP). The long-term goal of the BAP is to develop cost-effective, production-ready systems in five major climate zones that will result in zero energy homes (ZEH) that produce as much energy as they use on an annual basis by 2020. CARB is led by Steven Winter Associates, Inc. with Davis Energy Group, Inc. (DEG), MaGrann Associates, and Johnson Research, LLC as team members. In partnership with our numerous builders and industry partners, work was performed in three primary areas - advanced systems research, prototype home development, and technical support for communities of high performance homes. Our advanced systems research work focused on developing a better understanding of the installed performance of advanced technology systems when integrated in a whole-house scenario. Technology systems researched included: - High-R Wall Assemblies - Non-Ducted Air-Source Heat Pumps - Low-Load HVAC Systems - Solar Thermal Water Heating - Ventilation Systems - Cold-Climate Ground and Air Source Heat Pumps - Hot/Dry Climate Air-to-Water Heat Pump - Condensing Boilers - Evaporative Condensers - Water Heating CARB continued to support several prototype home projects in the design and specification phase. These projects are located in all five program climate regions and most are targeting greater than 50% source energy savings over the Building America Benchmark home. 
CARB provided technical support and developed builder project case studies to be included in near-term Joule Milestone reports for the following community scale projects: - SBER Overlook at Clipper

  5. The MCNP6 Analytic Criticality Benchmark Suite

    SciTech Connect

    Brown, Forrest B.

    2016-06-16

    Analytical benchmarks provide an invaluable tool for verifying computer codes used to simulate neutron transport. Several collections of analytical benchmark problems [1-4] are used routinely in the verification of production Monte Carlo codes such as MCNP® [5,6]. Verification of a computer code is a necessary prerequisite to the more complex validation process. The verification process confirms that a code performs its intended functions correctly. The validation process involves determining the absolute accuracy of code results vs. nature. In typical validations, results are computed for a set of benchmark experiments using a particular methodology (code, cross-section data with uncertainties, and modeling) and compared to the measured results from the set of benchmark experiments. The validation process determines bias, bias uncertainty, and possibly additional margins. Verification is generally performed by the code developers, while validation is generally performed by code users for a particular application space. The VERIFICATION_KEFF suite of criticality problems [1,2] was originally a set of 75 criticality problems found in the literature for which exact analytical solutions are available. Even though the spatial and energy detail is necessarily limited in analytical benchmarks, typically to a few regions or energy groups, the exact solutions obtained can be used to verify that the basic algorithms, mathematics, and methods used in complex production codes perform correctly. The present work has focused on revisiting this benchmark suite. A thorough review of the problems resulted in discarding some of them as not suitable for MCNP benchmarking. For the remaining problems, many of them were reformulated to permit execution in either multigroup mode or in the normal continuous-energy mode for MCNP. Execution of the benchmarks in continuous-energy mode provides a significant advance to MCNP verification methods.
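The verification step described above reduces to checking computed results against exact analytic answers within a statistical tolerance. A hypothetical sketch (the problem names, k-eff values, and uncertainties below are invented for illustration, not MCNP output):

```python
def verify(results, tolerance=3.0):
    """Flag cases where |computed - analytic| exceeds `tolerance` std. deviations.

    `results` maps a problem name to a tuple of (computed k-eff,
    Monte Carlo standard deviation, exact analytic k-eff).
    """
    failures = []
    for name, (computed, sigma, exact) in results.items():
        if abs(computed - exact) > tolerance * sigma:
            failures.append(name)
    return failures

suite = {
    "bare-sphere-1g": (0.99985, 0.00020, 1.00000),  # within 3 sigma: passes
    "slab-2g":        (1.00310, 0.00050, 1.00000),  # outside 3 sigma: fails
}
print(verify(suite))
```

A real verification suite would of course also control the statistical quality of each run (histories, batching) before applying such a comparison.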

  6. Simple Benchmark Specifications for Space Radiation Protection

    NASA Technical Reports Server (NTRS)

    Singleterry, Robert C. Jr.; Aghara, Sukesh K.

    2013-01-01

    This report defines space radiation benchmark specifications. The specification starts with simple, monoenergetic, mono-directional particles on slabs and progresses to human models in spacecraft. This report specifies the models and sources, and describes what the team performing the benchmark needs to produce in a report. Also included are brief descriptions of how OLTARIS, the NASA Langley website for space radiation analysis, performs its analysis.

  7. Implementation of NAS Parallel Benchmarks in Java

    NASA Technical Reports Server (NTRS)

    Frumkin, Michael; Schultz, Matthew; Jin, Hao-Qiang; Yan, Jerry

    2000-01-01

    A number of features make Java an attractive but debatable choice for High Performance Computing (HPC). In order to gauge the applicability of Java to Computational Fluid Dynamics (CFD), we have implemented the NAS Parallel Benchmarks in Java. The performance and scalability of the benchmarks point out the areas where improvement in Java compiler technology and in Java thread implementation would move Java closer to Fortran in the competition for CFD applications.

  8. Benchmarking for Cost Improvement. Final report

    SciTech Connect

    Not Available

    1993-09-01

    The US Department of Energy's (DOE) Office of Environmental Restoration and Waste Management (EM) conducted the Benchmarking for Cost Improvement initiative with three objectives: Pilot test benchmarking as an EM cost improvement tool; identify areas for cost improvement and recommend actions to address these areas; provide a framework for future cost improvement. The benchmarking initiative featured the use of four principal methods (program classification, nationwide cost improvement survey, paired cost comparison and component benchmarking). Interested parties contributed during both the design and execution phases. The benchmarking initiative was conducted on an accelerated basis. Of necessity, it considered only a limited set of data that may not be fully representative of the diverse and complex conditions found at the many DOE installations. The initiative generated preliminary data about cost differences and it found a high degree of convergence on several issues. Based on this convergence, the report recommends cost improvement strategies and actions. This report describes the steps taken as part of the benchmarking initiative and discusses the findings and recommended actions for achieving cost improvement. The results and summary recommendations, reported below, are organized by the study objectives.

  9. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is possible neither to characterize the machine nor to predict the run time of other benchmarks that have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
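The prediction scheme combines per-operation machine costs (from the machine analyzer) with per-operation execution frequencies (from the program analyzer). A minimal sketch, where the operation names, timings, and counts are invented for illustration rather than taken from the paper:

```python
def predict_runtime(machine_costs, program_counts):
    """Predicted run time = sum over operations of (execution count x per-op cost).

    `machine_costs`: seconds per source-language operation (machine analyzer).
    `program_counts`: dynamic execution counts (program analyzer).
    """
    return sum(program_counts[op] * machine_costs[op] for op in program_counts)

machine = {"fadd": 2e-7, "fmul": 3e-7, "branch": 1e-7}   # hypothetical machine
program = {"fadd": 5_000_000, "fmul": 4_000_000, "branch": 2_000_000}
print(predict_runtime(machine, program))  # predicted seconds
```

The appeal of the factorization is that each machine is measured once and each program analyzed once, after which any machine/program pairing can be predicted without running it.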

  10. The Teach for America RockCorps, Year 1: Turning Authentic Research Experiences in Geophysics for STEM Teachers into Modeling Instruction™ in High School Classrooms

    NASA Astrophysics Data System (ADS)

    Garrison, D. R., Jr.; Neubauer, H.; Barber, T. J.; Griffith, W. A.

    2015-12-01

    National reform efforts such as the Next Generation Science Standards, Modeling Instruction™, and Project Lead the Way (PLTW) seek to more closely align K-12 students' STEM learning experiences with the practices of scientific and engineering inquiry. These reform efforts aim to lead students toward deeper understandings constructed through authentic scientific and engineering inquiry in classrooms, particularly via model building and testing, more closely mirroring the professional practice of scientists and engineers, whereas traditional instructional approaches have typically been lecture-driven. In this vein, we describe the approach taken in the first year of the Teach for America (TFA) RockCorps, a five-year, NSF-sponsored project designed to provide authentic research experiences for secondary teachers and foster the development of Geophysics-themed teaching materials through cooperative lesson plan development and purchase of scientific equipment. Initially, two teachers were selected from the local Dallas-Fort Worth Region of TFA to participate in original research studying the failure of rocks under impulsive loads using a Split-Hopkinson-Pressure Bar (SHPB). For the teachers, this work provides a context from which to derive Geophysics-themed lesson plans for their courses, Physics/Pre-AP and Principles of Engineering (POE), offered at two large public high schools in Dallas ISD. The Physics course will incorporate principles of seismic wave propagation to allow students to develop a model of wave behavior, including velocity, refraction, and resonance, and apply the model to predict propagation properties of a variety of waves through multiple media. For the PLTW POE course, tension and compression testing of a variety of rock samples will be incorporated into materials properties and testing units. Also, a project will give a group of seniors in the PLTW Engineering Design and Development course at this certified NAF Academy of Engineering the

  11. OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE - A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING

    SciTech Connect

    Arnis Judzis

    2003-01-01

    This document details the progress to date on the ''OPTIMIZATION OF MUD HAMMER DRILLING PERFORMANCE -- A PROGRAM TO BENCHMARK THE VIABILITY OF ADVANCED MUD HAMMER DRILLING'' contract for the quarter starting October 2002 through December 2002. Even though we are awaiting the optimization portion of the testing program, accomplishments included the following: (1) Smith International participated in the DOE Mud Hammer program through full-scale benchmark testing during the week of 4 November 2002. (2) TerraTek acknowledges Smith International, BP America, PDVSA, and ConocoPhillips for cost-sharing the Smith benchmarking tests, allowing extension of the contract to add to the benchmark testing program. (3) Following the benchmark testing of the Smith International hammer, representatives from DOE/NETL, TerraTek, Smith International and PDVSA met at TerraTek in Salt Lake City to review observations, performance and views on the optimization step for 2003. (4) The December 2002 issue of the Journal of Petroleum Technology (Society of Petroleum Engineers) highlighted the DOE fluid hammer testing program and reviewed last year's paper on the benchmark performance of the SDS Digger and Novatek hammers. (5) TerraTek's Sid Green presented a technical review for DOE/NETL personnel in Morgantown on ''Impact Rock Breakage'' and its importance for improving fluid hammer performance. Much discussion has taken place on the issues surrounding mud hammer performance at depth conditions.

  12. Benchmark problems in which equality plays the major role

    SciTech Connect

    Lusk, E.; Wos, L.

    1992-05-01

    We have recently heard rumors that researchers are again studying paramodulation [Wos87] in the context of strategy for its control. In part to facilitate such research, and in part to provide test problems for evaluating other approaches to equality-oriented reasoning, we offer in this article a set of benchmark problems in which equality plays the dominant role. The test problems are taken from group theory, Robbins algebra, combinatory logic, and other areas. For each problem, we include appropriate clauses and comment as to its status with regard to provability by an unaided automated reasoning program.

  13. Benchmark problems in which equality plays the major role

    SciTech Connect

    Lusk, E.; Wos, L.

    1992-01-01

    We have recently heard rumors that researchers are again studying paramodulation (Wos87) in the context of strategy for its control. In part to facilitate such research, and in part to provide test problems for evaluating other approaches to equality-oriented reasoning, we offer in this article a set of benchmark problems in which equality plays the dominant role. The test problems are taken from group theory, Robbins algebra, combinatory logic, and other areas. For each problem, we include appropriate clauses and comment as to its status with regard to provability by an unaided automated reasoning program.

  14. Expectations for the methodology and translation of animal research: a survey of the general public, medical students and animal researchers in North America.

    PubMed

    Joffe, Ari R; Bara, Meredith; Anton, Natalie; Nobis, Nathan

    2016-09-01

    To determine what are considered acceptable standards for animal research (AR) methodology and translation rate to humans, a validated survey was sent to: a) a sample of the general public, via Sampling Survey International (SSI; Canada), Amazon Mechanical Turk (AMT; USA), a Canadian city festival (CF) and a Canadian children's hospital (CH); b) a sample of medical students (two first-year classes); and c) a sample of scientists (corresponding authors and academic paediatricians). There were 1379 responses from the general public sample (SSI, n = 557; AMT, n = 590; CF, n = 195; CH, n = 102), 205/330 (62%) medical student responses, and 23/323 (7%, too few to report) scientist responses. Asked about methodological quality, most of the general public and medical student respondents expect that: AR is of high quality (e.g. anaesthesia and analgesia are monitored, even overnight, and 'humane' euthanasia, optimal statistical design, comprehensive literature review, randomisation and blinding, are performed), and costs and difficulty are not acceptable justifications for lower quality (e.g. costs of expert consultation, or more laboratory staff). Asked about their expectations of translation to humans (of toxicity, carcinogenicity, teratogenicity and treatment findings), most expect translation more than 60% of the time. If translation occurred less than 20% of the time, a minority disagreed that this would "significantly reduce your support for AR". Medical students were more supportive of AR, even if translation occurred less than 20% of the time. Expectations for AR are much higher than empirical data show to have been achieved.

  15. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 4 2010-10-01 2010-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  16. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  17. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 42 Public Health 4 2014-10-01 2014-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  18. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  19. 42 CFR 440.330 - Benchmark health benefits coverage.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is...

  20. Nanomagnet Logic: Architectures, design, and benchmarking

    NASA Astrophysics Data System (ADS)

    Kurtz, Steven J.

    Nanomagnet Logic (NML) is an emerging technology being studied as a possible replacement or supplementary device for Complementary Metal-Oxide-Semiconductor (CMOS) Field-Effect Transistors (FET) by the year 2020. NML devices offer numerous potential advantages including: low energy operation, steady state non-volatility, radiation hardness and a clear path to fabrication and integration with CMOS. However, maintaining both low-energy operation and non-volatility while scaling from the device to the architectural level is non-trivial as (i) nearest neighbor interactions within NML circuits complicate the modeling of ensemble nanomagnet behavior and (ii) the energy-intensive clock structures required for re-evaluation and NML's relatively high latency challenge its ability to offer system-level performance wins against other emerging nanotechnologies. Thus, further research efforts are required to model more complex circuits while also identifying circuit design techniques that balance low-energy operation with steady state non-volatility. In addition, further work is needed to design and model low-power on-chip clocks while simultaneously identifying application spaces where NML systems (including clock overhead) offer sufficient energy savings to merit their inclusion in future processors. This dissertation presents research advancing the understanding and modeling of NML at all levels including devices, circuits, and line clock structures while also benchmarking NML against both scaled CMOS and tunneling FETs (TFET) devices. This is accomplished through the development of design tools and methodologies for (i) quantifying both energy and stability in NML circuits and (ii) evaluating line-clocked NML system performance. The application of these newly developed tools improves the understanding of ideal design criteria (i.e., magnet size, clock wire geometry, etc.) for NML architectures. Finally, the system-level performance evaluation tool offers the ability to

  1. Storage-Intensive Supercomputing Benchmark Study

    SciTech Connect

    Cohen, J; Dossa, D; Gokhale, M; Hysom, D; May, J; Pearce, R; Yoo, A

    2007-10-30

    Critical data science applications requiring frequent access to storage perform poorly on today's computing architectures. This project addresses efficient computation of data-intensive problems in national security and basic science by exploring, advancing, and applying a new form of computing called storage-intensive supercomputing (SISC). Our goal is to enable applications that simply cannot run on current systems, and, for a broad range of data-intensive problems, to deliver an order of magnitude improvement in price/performance over today's data-intensive architectures. This technical report documents much of the work done under LDRD 07-ERD-063 Storage Intensive Supercomputing during the period 05/07-09/07. The following chapters describe: (1) a new file I/O monitoring tool iotrace developed to capture the dynamic I/O profiles of Linux processes; (2) an out-of-core graph benchmark for level-set expansion of scale-free graphs; (3) an entity extraction benchmark consisting of a pipeline of eight components; and (4) an image resampling benchmark drawn from the SWarp program in the LSST data processing pipeline. The performance of the graph and entity extraction benchmarks was measured in three different scenarios: data sets residing on the NFS file server and accessed over the network; data sets stored on local disk; and data sets stored on the Fusion I/O parallel NAND Flash array. The image resampling benchmark compared the performance of a software-only implementation with a GPU-accelerated one. In addition to the work reported here, an additional text processing application was developed that used an FPGA to accelerate n-gram profiling for language classification. The n-gram application will be presented at SC07 at the High Performance Reconfigurable Computing Technologies and Applications Workshop. The graph and entity extraction benchmarks were run on a Supermicro server housing the 40GB Fusion-io parallel NAND Flash disk array. 
The Fusion system specs are as follows: SuperMicro X7

  2. Trends and Innovations in Higher Education Reform: Worldwide, Latin America and in the Caribbean. Research & Occasional Paper Series: CSHE.12.10

    ERIC Educational Resources Information Center

    Segrera, Francisco Lopez

    2010-01-01

    Universities in Latin America and in the Caribbean (LAC), and throughout the world, are facing one of the most challenging eras in their history. Globalization presents many important opportunities for higher education, but also poses serious problems and raises questions about how best to serve the common good. The traditional values of…

  3. Research Review: A Review of the "President's Committee on the Arts and the Humanities, Reinvesting in Arts Education: Winning America's Future through Creative Schools"

    ERIC Educational Resources Information Center

    Serig, Dan

    2011-01-01

    This article presents the author's review of the "President's Committee on the Arts and the Humanities, Reinvesting in Arts Education: Winning America's Future Through Creative Schools." In May 2011, the President's Committee on the Arts and the Humanities (PCAH) released a report calling for a reinvestment by communities and schools in arts…

  4. The conjugate gradient NAS parallel benchmark on the IBM SP1

    SciTech Connect

    Trefethen, A.E.; Zhang, T.

    1994-12-31

    The NAS Parallel Benchmarks are a suite of eight benchmark problems developed at the NASA Ames Research Center. They are specified in such a way that the benchmarkers are free to choose the language and method of implementation to suit the system in which they are interested. In this presentation the authors will discuss the Conjugate Gradient benchmark and its implementation on the IBM SP1. The SP1 is a parallel system composed of RS/6000 nodes connected by a high-performance switch. They will compare the results of the SP1 implementation with those reported for other machines. At this time, such a comparison shows the SP1 to be very competitive.

  5. Performance benchmarking of liver CT image segmentation and volume estimation

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Zhou, Jiayin; Tian, Qi; Liu, Jimmy J.; Qi, Yingyi; Leow, Wee Kheng; Han, Thazin; Wang, Shih-chang

    2008-03-01

    In recent years more and more computer aided diagnosis (CAD) systems are being used routinely in hospitals. Image-based knowledge discovery plays important roles in many CAD applications, which have great potential to be integrated into the next-generation picture archiving and communication systems (PACS). Robust medical image segmentation tools are essential for such discovery in many CAD applications. In this paper we present a platform with necessary tools for performance benchmarking of algorithms for liver segmentation and volume estimation used for liver transplantation planning. It includes an abdominal computed tomography (CT) image database (DB), annotation tools, a ground truth DB, and performance measure protocols. The proposed architecture is generic and can be used for other organs and imaging modalities. In the current study, approximately 70 sets of abdominal CT images with normal livers have been collected and a user-friendly annotation tool has been developed to generate ground truth data for a variety of organs, including 2D contours of liver, two kidneys, spleen, aorta and spinal canal. Abdominal organ segmentation algorithms using 2D atlases and 3D probabilistic atlases can be evaluated on the platform. Preliminary benchmark results from the liver segmentation algorithms which make use of statistical knowledge extracted from the abdominal CT image DB are also reported. We aim to increase the collection to about 300 CT scan sets in the near future and plan to make the resulting DBs available to the medical imaging research community for performance benchmarking of liver segmentation algorithms.
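The abstract does not spell out its performance measure protocols; as an illustrative sketch of two metrics commonly used when benchmarking segmentation and volume estimation, the Dice coefficient and the relative volume error can be computed from binary masks (the toy masks below are invented, not from the study's DB):

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    inter = np.logical_and(seg, gt).sum()
    return 2.0 * inter / (seg.sum() + gt.sum())

def volume_error(seg, gt, voxel_volume_ml=1.0):
    """Relative volume-estimation error, the quantity that matters most
    for transplantation planning."""
    v_seg = seg.sum() * voxel_volume_ml
    v_gt = gt.sum() * voxel_volume_ml
    return abs(v_seg - v_gt) / v_gt

# Toy 2-D "slices": ground truth is a 6x6 square; the algorithm misses one row.
gt = np.zeros((10, 10), dtype=bool)
gt[2:8, 2:8] = True
seg = np.zeros((10, 10), dtype=bool)
seg[3:8, 2:8] = True
print(round(dice(seg, gt), 3), round(volume_error(seg, gt), 3))
```

For real CT data the same computation runs over 3-D volumes, with `voxel_volume_ml` taken from the scan's voxel spacing.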

  6. Adding Fault Tolerance to NPB Benchmarks Using ULFM

    SciTech Connect

    Parchman, Zachary W; Vallee, Geoffroy R; Naughton III, Thomas J; Engelmann, Christian; Bernholdt, David E; Scott, Stephen L

    2016-01-01

    In the world of high-performance computing, fault tolerance and application resilience are becoming some of the primary concerns because of increasing hardware failures and memory corruptions. While the research community has been investigating various options, from system-level solutions to application-level solutions, standards such as the Message Passing Interface (MPI) are also starting to include such capabilities. The current proposal for MPI fault tolerance is centered around the User-Level Failure Mitigation (ULFM) concept, which provides means for fault detection and recovery of the MPI layer. This approach does not address application-level recovery, which is currently left to application developers. In this work, we present a modification of some of the benchmarks of the NAS parallel benchmark (NPB) suite to include support for the ULFM capabilities as well as application-level strategies and mechanisms for application-level failure recovery. As such, we present: (i) an application-level library to checkpoint and restore data, (ii) extensions of NPB benchmarks for fault tolerance based on different strategies, (iii) a fault injection tool, and (iv) some preliminary results that show the impact of such fault-tolerance strategies on the application execution.
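The paper's checkpoint library targets MPI/ULFM specifically; the underlying application-level checkpoint-and-restore idea can be sketched in a single-process form (the file name, state layout, and loop are hypothetical, not the paper's library API):

```python
import os
import pickle
import tempfile

def checkpoint(state, path):
    """Atomically persist application state so a restarted process can resume."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "wb") as f:
        pickle.dump(state, f)
    os.replace(tmp, path)  # atomic rename: no torn checkpoints on crash

def restore(path, default):
    """Return the last saved state, or `default` on a fresh start."""
    try:
        with open(path, "rb") as f:
            return pickle.load(f)
    except FileNotFoundError:
        return default

path = os.path.join(tempfile.gettempdir(), "npb_demo.ckpt")
state = restore(path, {"iteration": 0, "partial_sum": 0.0})
for i in range(state["iteration"], 5):       # resume from the saved iteration
    state = {"iteration": i + 1, "partial_sum": state["partial_sum"] + i}
    checkpoint(state, path)
print(state["partial_sum"])
os.remove(path)                              # clean up the demo checkpoint
```

In an MPI setting each rank would checkpoint its partition of the data, and ULFM's failure notification would trigger the restore path on replacement ranks.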

  7. The Concepts "Benchmarks and Benchmarking" Used in Education Planning: Teacher Education as Example

    ERIC Educational Resources Information Center

    Steyn, H. J.

    2015-01-01

    Planning in education is a structured activity that includes several phases and steps that take into account several kinds of information (Steyn, Steyn, De Waal & Wolhuter, 2002: 146). One of the sets of information that are usually considered is the (so-called) "benchmarks" and "benchmarking" regarding the focus of a…

  8. GeoCorps America

    NASA Astrophysics Data System (ADS)

    Dawson, M.

    2011-12-01

    GeoCorps America, a program of the Geological Society of America's (GSA) Education and Outreach Department, provides short-term geoscience jobs in America's most amazing public lands. These jobs are hosted on federal lands managed by GeoCorps' three partner agencies: the National Park Service (NPS), the U.S. Forest Service (USFS), and the Bureau of Land Management (BLM). Agency staff submit to GSA position descriptions that help meet their geoscience needs. GSA advertises the positions online, recruits applicants from its 24,000+ members, and coordinates the placement of the candidates selected by agency staff. The typical GeoCorps position lasts for three months, pays a stipend of $2,750, and provides either free housing or a housing allowance. Some GeoCorps positions are classified as "Guest Scientist" positions, which generally last longer, involve larger payments, and require a higher level of expertise. Most GeoCorps positions occur during the spring/summer, but an increasing number of positions are being offered during the fall/winter. GeoCorps positions are open to geoscientists of all levels, from undergraduates through retired professionals. GeoCorps projects involve field and laboratory-based geoscience research, but some projects focus on developing educational programs and materials for staff, volunteers, and the public. The subject areas covered by GeoCorps projects include geology, hydrology, paleontology, mapping/GIS, soils, geo-hazards, cave/karst science, and more. GeoCorps positions have taken place at over 125 different locations nationwide, including Grand Canyon National Park, Sierra National Forest, and Craters of the Moon National Monument. In 2011, GeoCorps began offering GeoCorps Diversity Internships and GeoCorps American Indian Internships. The introduction of these programs doubled the level of diversity among GeoCorps participants. This increase in diversity is helping GSA and its partner agencies in meeting its mutual goal of

  9. Toxicological benchmarks for wildlife: 1994 Revision

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W. II

    1994-09-01

    The process by which ecological risks of environmental contaminants are evaluated is two-tiered. The first tier is a screening assessment where concentrations of contaminants in the environment are compared to toxicological benchmarks which represent concentrations of chemicals in environmental media (water, sediment, soil, food, etc.) that are presumed to be nonhazardous to the surrounding biota. The second tier is a baseline ecological risk assessment where toxicological benchmarks are one of several lines of evidence used to support or refute the presence of ecological effects. The report presents toxicological benchmarks for assessment of effects of 76 chemicals on 8 representative mammalian wildlife species and 31 chemicals on 9 avian wildlife species. The chemicals are some of those that occur at United States Department of Energy waste sites; the wildlife species were chosen because they are widely distributed and provide a representative range of body sizes and diets. Further descriptions of the chosen wildlife species and chemicals are provided in the report. The benchmarks presented in this report represent values believed to be nonhazardous for the listed wildlife species. These benchmarks only consider contaminant exposure through oral ingestion of contaminated media; exposure through inhalation or direct dermal exposure are not considered in this report.
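The first-tier screening comparison described above can be sketched as follows (the media names, concentrations, and benchmark values are invented for illustration, not taken from the report's tables):

```python
def screen(concentrations, benchmarks):
    """Tier-1 screen: return media whose measured concentration exceeds
    the toxicological benchmark; media without a benchmark are not flagged."""
    return sorted(m for m, c in concentrations.items()
                  if c > benchmarks.get(m, float("inf")))

site = {"water_mg_per_L": 0.8, "soil_mg_per_kg": 12.0, "food_mg_per_kg": 0.2}
bench = {"water_mg_per_L": 0.5, "soil_mg_per_kg": 50.0, "food_mg_per_kg": 1.0}
print(screen(site, bench))  # media carried forward to the baseline assessment
```

Media that pass the screen are presumed nonhazardous for the listed species; flagged media proceed to the second-tier baseline ecological risk assessment.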

  10. Building America House Simulation Protocols

    SciTech Connect

    Hendron, Robert; Engebrecht, Cheryn

    2010-09-01

    The House Simulation Protocol document was developed to track and manage progress toward Building America's multi-year, average whole-building energy reduction research goals for new construction and existing homes, using a consistent analytical reference point. This report summarizes the guidelines for developing and reporting these analytical results in a consistent and meaningful manner for all home energy uses using standard operating conditions.

  11. Collected notes from the Benchmarks and Metrics Workshop

    NASA Technical Reports Server (NTRS)

    Drummond, Mark E.; Kaelbling, Leslie P.; Rosenschein, Stanley J.

    1991-01-01

    In recent years there has been a proliferation of proposals in the artificial intelligence (AI) literature for integrated agent architectures. Each architecture offers an approach to the general problem of constructing an integrated agent. Unfortunately, the ways in which one architecture might be considered better than another are not always clear. There has been a growing realization that many of the positive and negative aspects of an architecture become apparent only when experimental evaluation is performed and that to progress as a discipline, we must develop rigorous experimental methods. In addition to the intrinsic intellectual interest of experimentation, rigorous performance evaluation of systems is also a crucial practical concern to our research sponsors. DARPA, NASA, and AFOSR (among others) are actively searching for better ways of experimentally evaluating alternative approaches to building intelligent agents. One tool for experimental evaluation involves testing systems on benchmark tasks in order to assess their relative performance. As part of a joint DARPA and NASA funded project, NASA-Ames and Teleos Research are carrying out a research effort to establish a set of benchmark tasks and evaluation metrics by which the performance of agent architectures may be determined. As part of this project, we held a workshop on Benchmarks and Metrics at the NASA Ames Research Center on June 25, 1990. The objective of the workshop was to foster early discussion on this important topic. We did not achieve a consensus, nor did we expect to. Collected here is some of the information that was exchanged at the workshop. Given here is an outline of the workshop, a list of the participants, notes taken on the white-board during open discussions, position papers/notes from some participants, and copies of slides used in the presentations.

  12. Design and Application of a Community Land Benchmarking System for Earth System Models

    NASA Astrophysics Data System (ADS)

    Mu, M.; Hoffman, F. M.; Lawrence, D. M.; Riley, W. J.; Keppel-Aleks, G.; Koven, C. D.; Kluzek, E. B.; Mao, J.; Randerson, J. T.

    2015-12-01

    Benchmarking has been widely used to assess the ability of climate models to capture the spatial and temporal variability of observations during the historical era. For the carbon cycle and terrestrial ecosystems, the design and development of an open-source community platform has been an important goal as part of the International Land Model Benchmarking (ILAMB) project. Here we developed a new benchmarking software system that enables the user to specify the models, benchmarks, and scoring metrics, so that results can be tailored to specific model intercomparison projects. Evaluation data sets included soil and aboveground carbon stocks, fluxes of energy, carbon and water, burned area, leaf area, and climate forcing and response variables. We used this system to evaluate simulations from the 5th Phase of the Coupled Model Intercomparison Project (CMIP5) with prognostic atmospheric carbon dioxide levels over the period from 1850 to 2005 (i.e., esmHistorical simulations archived on the Earth System Grid Federation). We found that the multi-model ensemble had a high bias in incoming solar radiation across Asia, likely as a consequence of incomplete representation of aerosol effects in this region, and in South America, primarily as a consequence of a low bias in mean annual precipitation. The reduced precipitation in South America had a larger influence on gross primary production than the high bias in incoming light, and as a consequence gross primary production had a low bias relative to the observations. Although model to model variations were large, the multi-model mean had a positive bias in atmospheric carbon dioxide that has been attributed in past work to weak ocean uptake of fossil emissions. In mid latitudes of the northern hemisphere, most models overestimate latent heat fluxes in the early part of the growing season, and underestimate these fluxes in mid-summer and early fall, whereas sensible heat fluxes show the opposite trend.
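    The observation-based scoring described here can be illustrated with a minimal sketch. The function below maps a model's mean bias against an observational dataset to a dimensionless score in (0, 1]; the exponential scoring form is common in land-model benchmarking, but this exact metric is an illustrative assumption, not the ILAMB formula.

```python
import math

def bias_score(model, obs):
    """Score a model field against observations: 1.0 means no mean bias;
    the score decays exponentially as the bias grows relative to the
    typical magnitude of the observations."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    scale = sum(abs(o) for o in obs) / n  # typical observed magnitude
    return math.exp(-abs(bias) / scale)

# An unbiased model scores 1.0; a systematically high-biased one scores lower.
print(bias_score([2.0, 2.0], [2.0, 2.0]))  # 1.0
print(bias_score([3.0, 3.0], [2.0, 2.0]))  # < 1.0
```

    A multi-model intercomparison would compute such scores per variable and per model, then compare them across the ensemble.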

  13. Standardized benchmarking in the quest for orthologs.

    PubMed

    Altenhoff, Adrian M; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P; Schreiber, Fabian; da Silva, Alan Sousa; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Jensen, Lars Juhl; Martin, Maria J; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E; Thomas, Paul D; Sonnhammer, Erik; Dessimoz, Christophe

    2016-05-01

    Achieving high accuracy in orthology inference is essential for many comparative, evolutionary and functional genomic analyses, yet the true evolutionary history of genes is generally unknown and orthologs are used for very different applications across phyla, requiring different precision-recall trade-offs. As a result, it is difficult to assess the performance of orthology inference methods. Here, we present a community effort to establish standards and an automated web-based service to facilitate orthology benchmarking. Using this service, we characterize 15 well-established inference methods and resources on a battery of 20 different benchmarks. Standardized benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimum requirement for new tools and resources, and guides the development of more accurate orthology inference methods.
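    The precision-recall trade-off mentioned above can be made concrete with a small sketch: predicted ortholog pairs are compared against a trusted reference set, and precision and recall follow from the overlap. The gene names below are hypothetical.

```python
def precision_recall(predicted_pairs, reference_pairs):
    """Precision and recall of predicted ortholog pairs against a trusted
    reference set. Pairs are stored order-independently as frozensets."""
    predicted = {frozenset(p) for p in predicted_pairs}
    reference = {frozenset(p) for p in reference_pairs}
    true_positives = len(predicted & reference)
    return true_positives / len(predicted), true_positives / len(reference)

pred = [("geneA1", "geneB1"), ("geneA2", "geneB3")]
ref = [("geneA1", "geneB1"), ("geneA2", "geneB2")]
print(precision_recall(pred, ref))  # (0.5, 0.5)
```

    A conservative inference method shifts this balance toward high precision at the cost of recall, which is exactly the trade-off different applications weigh differently.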

  14. OpenSHMEM Implementation of HPCG Benchmark

    SciTech Connect

    Powers, Sarah S; Imam, Neena

    2016-01-01

    We describe the effort to implement the HPCG benchmark using OpenSHMEM and MPI one-sided communication. Unlike the High Performance LINPACK (HPL) benchmark that places emphasis on large dense matrix computations, the HPCG benchmark is dominated by sparse operations such as sparse matrix-vector product, sparse matrix triangular solve, and long vector operations. The MPI one-sided implementation is developed using the one-sided OpenSHMEM implementation. Preliminary results comparing the original MPI, OpenSHMEM, and MPI one-sided implementations on an SGI cluster, Cray XK7 and Cray XC30 are presented. The results suggest the MPI, OpenSHMEM, and MPI one-sided implementations all obtain similar overall performance, but the MPI one-sided implementation seems to slightly increase the run time for multigrid preconditioning in HPCG on the Cray XK7 and Cray XC30.
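    The sparse matrix-vector product that dominates HPCG can be sketched compactly. This Python version uses the compressed sparse row (CSR) layout to illustrate the memory-bound access pattern (indirect loads through the column index); it is a didactic sketch, not the benchmark's actual C++ implementation.

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a sparse matrix A stored in CSR form:
    values[k] is a nonzero entry, col_idx[k] its column, and
    row_ptr[r]:row_ptr[r+1] the slice of nonzeros in row r."""
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]  # indirect, irregular load of x
        y.append(s)
    return y

# The 2x2 matrix [[2, 0], [1, 3]] applied to [1, 1]:
print(csr_matvec([2.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 1.0]))  # [2.0, 4.0]
```

    The irregular gather through `col_idx` is what makes this kernel bandwidth-bound rather than compute-bound, in contrast to HPL's dense matrix operations.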

  15. NAS Parallel Benchmarks, Multi-Zone Versions

    NASA Technical Reports Server (NTRS)

    vanderWijngaart, Rob F.; Haopiang, Jin

    2003-01-01

    We describe an extension of the NAS Parallel Benchmarks (NPB) suite that involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy, which is common among structured-mesh production flow solver codes in use at NASA Ames and elsewhere, provides relatively easily exploitable coarse-grain parallelism between meshes. Since the individual application benchmarks also allow fine-grain parallelism themselves, this NPB extension, named NPB Multi-Zone (NPB-MZ), is a good candidate for testing hybrid and multi-level parallelization tools and strategies.
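    The per-time-step boundary exchange between loosely coupled zones can be sketched in a few lines. This toy version uses 1-D zones with one ghost cell at each end; the data layout and function name are illustrative assumptions, not taken from the NPB-MZ source.

```python
def exchange_boundaries(zones):
    """Copy edge values between neighboring 1-D zones after each independent
    update. Each zone is a list whose first and last entries are ghost cells."""
    for left, right in zip(zones, zones[1:]):
        left[-1] = right[1]   # left zone's right ghost <- right zone's first interior cell
        right[0] = left[-2]   # right zone's left ghost <- left zone's last interior cell
    return zones

# Two zones of two interior cells each, ghosts initialized to 0.0:
zones = [[0.0, 1.0, 2.0, 0.0], [0.0, 3.0, 4.0, 0.0]]
print(exchange_boundaries(zones))  # [[0.0, 1.0, 2.0, 3.0], [2.0, 3.0, 4.0, 0.0]]
```

    Because each zone's interior update is independent between exchanges, zones can be solved in parallel (coarse grain) while each solve is itself parallelized (fine grain), which is the hybrid structure NPB-MZ is designed to exercise.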

  16. Coral benchmarks in the center of biodiversity.

    PubMed

    Licuanan, W Y; Robles, R; Dygico, M; Songco, A; van Woesik, R

    2017-01-30

    There is an urgent need to quantify coral reef benchmarks that assess changes and recovery rates through time and serve as goals for management. Yet, few studies have identified benchmarks for hard coral cover and diversity in the center of marine diversity. In this study, we estimated coral cover and generic diversity benchmarks on the Tubbataha reefs, the largest and best-enforced no-take marine protected area in the Philippines. The shallow (2-6 m) reef slopes of Tubbataha were monitored annually, from 2012 to 2015, using hierarchical sampling. Mean coral cover was 34% (σ ± 1.7) and generic diversity was 18 (σ ± 0.9) per 75 m by 25 m station. The southeastern leeward slopes supported on average 56% coral cover, whereas the northeastern windward slopes supported 30%, and the western slopes supported 18% coral cover. Generic diversity was more spatially homogeneous than coral cover.

  17. Benchmarking criticality safety calculations with subcritical experiments

    SciTech Connect

    Mihalczo, J.T.

    1984-06-01

    Calculation of the neutron multiplication factor at delayed criticality may be necessary for benchmarking calculations but it may not be sufficient. The use of subcritical experiments to benchmark criticality safety calculations could result in substantial savings in fuel material costs for experiments. In some cases subcritical configurations could be used to benchmark calculations where sufficient fuel to achieve delayed criticality is not available. By performing a variety of measurements with subcritical configurations, much detailed information can be obtained which can be compared directly with calculations. This paper discusses several measurements that can be performed with subcritical assemblies and presents examples that include comparisons between calculation and experiment where possible. Where not, examples from critical experiments have been used but the measurement methods could also be used for subcritical experiments.

  18. Benchmark field study of deep neutron penetration

    SciTech Connect

    Morgan, J.F.; Sale, K. ); Gold, R.; Roberts, J.H.; Preston, C.C. )

    1991-06-10

    A unique benchmark neutron field has been established at the Lawrence Livermore National Laboratory (LLNL) to study deep penetration neutron transport. At LLNL, a tandem accelerator is used to generate a monoenergetic neutron source that permits investigation of deep neutron penetration under conditions that are virtually ideal to model, namely the transport of monoenergetic neutrons through a single material in a simple geometry. General features of the Lawrence Tandem (LATAN) benchmark field are described with emphasis on neutron source characteristics and room return background. The single material chosen for the first benchmark, LATAN-1, is a steel representative of Light Water Reactor (LWR) Pressure Vessels (PV). Also included is a brief description of the Little Boy replica, a critical reactor assembly designed to mimic the radiation doses from the atomic bomb dropped on Hiroshima, and its use in neutron spectrometry. 18 refs.

  19. Outlier Benchmark Systems With Gaia Primaries

    NASA Astrophysics Data System (ADS)

    Marocco, Federico; Pinfield, David J.; Montes, David; Zapatero Osorio, Maria Rosa; Smart, Richard L.; Cook, Neil J.; Caballero, José A.; Jones, Hugh, R. A.; Lucas, Phil W.

    2016-07-01

    Benchmark systems are critical for sub-stellar physics. While the known population of benchmarks has increased significantly in recent years, large portions of the age-metallicity parameter space remain unexplored. Gaia will expand enormously the pool of well characterized primary stars, and our simulations show that we could potentially have access to more than 6000 benchmark systems out to 300 pc, allowing us to whittle down these systems into a large sample with outlier properties that will reveal the nature of ultra-cool dwarfs in rare parameter space. In this contribution we present the preliminary results from our effort to identify and characterize ultra-cool companions to Gaia-imaged stars with unusual values of metallicity. Since these systems are intrinsically rare, we expand the volume probed by targeting faint, low-proper-motion systems.

  20. Ensuring America's Future by Increasing Latino College Completion: Latino College Completion in 50 States. Executive Summary

    ERIC Educational Resources Information Center

    Santiago, Deborah; Soliz, Megan

    2012-01-01

    In 2009, Excelencia in Education launched the Ensuring America's Future initiative to inform, organize, and engage leaders in a tactical plan to increase Latino college completion. This initiative included the release of a benchmarking guide for projections of degree attainment disaggregated by race/ethnicity that offered multiple metrics to track…

  1. Toxicological benchmarks for wildlife: 1996 Revision

    SciTech Connect

    Sample, B.E.; Opresko, D.M.; Suter, G.W., II

    1996-06-01

    The purpose of this report is to present toxicological benchmarks for assessment of effects of certain chemicals on mammalian and avian wildlife species. Publication of this document meets a milestone for the Environmental Restoration (ER) Risk Assessment Program. This document provides the ER Program with toxicological benchmarks that may be used as comparative tools in screening assessments as well as lines of evidence to support or refute the presence of ecological effects in ecological risk assessments. The chemicals considered in this report are some that occur at US DOE waste sites, and the wildlife species evaluated herein were chosen because they represent a range of body sizes and diets.

  2. Benchmarking processes for managing large international space programs

    NASA Technical Reports Server (NTRS)

    Mandell, Humboldt C., Jr.; Duke, Michael B.

    1993-01-01

    The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low cost spacecraft and planetary surface systems. Several companies ranging from large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp. were studied. It is concluded that to lower the prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low cost space programs has revealed a number of prescriptive rules for low cost managements, including major changes in the relationships between the public and private sectors.

  3. School Culture Benchmarks: Bridges and Barriers to Successful Bullying Prevention Program Implementation

    ERIC Educational Resources Information Center

    Coyle, H. Elizabeth

    2008-01-01

    A substantial body of research indicates that positive school culture benchmarks are integrally tied to the success of school reform and change in general. Additionally, an emerging body of research suggests a similar role for school culture in effective implementation of school violence prevention and intervention efforts. However, little…

  4. Helping America's Youth

    ERIC Educational Resources Information Center

    Bush, Laura

    2005-01-01

    As First Lady of the United States, Laura Bush is leading the Helping America's Youth initiative of the federal government. She articulates the goal of enlisting public and volunteer resources to foster healthy growth by early intervention and mentoring of youngsters at risk. Helping America's Youth will benefit children and teenagers by…

  5. Africans in America.

    ERIC Educational Resources Information Center

    Hart, Ayanna; Spangler, Earl

    This book introduces African-American history and culture to children. The first Africans in America came from many different regions and cultures, but became united in this country by being black, African, and slaves. Once in America, Africans began a long struggle for freedom which still continues. Slavery, the Civil War, emancipation, and the…

  6. Iraq: Politics, Elections, and Benchmarks

    DTIC Science & Technology

    2009-11-18

    infighting, particularly over the status of the province of Kirkuk (Tamim) and over the voting mechanism delayed the National Assembly's passage of the... Congressional Research Service 2 Kirkuk (Tamim province) will join the Kurdish region (Article 140); designation of Islam as "a main source" of... controlling its own oil resources, disputes over security control over areas inhabited by Kurds, and the Kurds' claim that the province of Tamim

  7. Building America Top Innovations 2012: Vapor Retarder Classification

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes research in vapor retarders. Thanks to research by Building America teams, since 2006 the IRC has permitted Class III vapor retarders, such as latex paint, in all climate zones under certain conditions.

  8. Overview of TPC Benchmark E: The Next Generation of OLTP Benchmarks

    NASA Astrophysics Data System (ADS)

    Hogan, Trish

    Set to replace the aging TPC-C, the TPC Benchmark E is the next generation OLTP benchmark, which more accurately models client database usage. TPC-E addresses the shortcomings of TPC-C. It has a much more complex workload, requires the use of RAID-protected storage, generates much less I/O, and is much cheaper and easier to set up, run, and audit. After a period of overlap, it is expected that TPC-E will become the de facto OLTP benchmark.

  9. Benchmarking 2011: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Grantmakers for Education, 2011

    2011-01-01

    The analysis in "Benchmarking 2011" is based on data from an unduplicated sample of 184 education grantmaking organizations--approximately two-thirds of Grantmakers for Education's (GFE's) network of grantmakers--who responded to an online survey consisting of fixed-choice and open-ended questions. Because a different subset of funders elects to…

  10. What Is the Impact of Subject Benchmarking?

    ERIC Educational Resources Information Center

    Pidcock, Steve

    2006-01-01

    The introduction of subject benchmarking led to fears of increased external intervention in the activities of universities and a more restrictive view of institutional autonomy, accompanied by an undermining of the academic profession, particularly through the perceived threat of the introduction of a national curriculum for higher education. For…

  11. MHEC Survey Establishes Midwest Property Insurance Benchmarks.

    ERIC Educational Resources Information Center

    Midwestern Higher Education Commission Risk Management Institute Research Bulletin, 1994

    1994-01-01

    This publication presents the results of a survey of over 200 midwestern colleges and universities on their property insurance programs and establishes benchmarks to help these institutions evaluate their insurance programs. Findings included the following: (1) 51 percent of respondents currently purchase their property insurance as part of a…

  12. Benchmark Generation and Simulation at Extreme Scale

    SciTech Connect

    Lagadapati, Mahesh; Mueller, Frank; Engelmann, Christian

    2016-01-01

    The path to extreme scale high-performance computing (HPC) poses several challenges related to power, performance, resilience, productivity, programmability, data movement, and data management. Investigating the performance of parallel applications at scale on future architectures and the performance impact of different architectural choices is an important component of HPC hardware/software co-design. Simulations using models of future HPC systems and communication traces from applications running on existing HPC systems can offer an insight into the performance of future architectures. This work targets technology developed for scalable application tracing of communication events. It focuses on extreme-scale simulation of HPC applications and their communication behavior via lightweight parallel discrete event simulation for performance estimation and evaluation. Instead of simply replaying a trace within a simulator, this work promotes the generation of a benchmark from traces. This benchmark is subsequently exposed to simulation using models to reflect the performance characteristics of future-generation HPC systems. This technique provides a number of benefits, such as eliminating the data intensive trace replay and enabling simulations at different scales. The presented work features novel software co-design aspects, combining the ScalaTrace tool to generate scalable trace files, the ScalaBenchGen tool to generate the benchmark, and the xSim tool to assess the benchmark characteristics within a simulator.

  13. Standardised Benchmarking in the Quest for Orthologs

    PubMed Central

    Altenhoff, Adrian M.; Boeckmann, Brigitte; Capella-Gutierrez, Salvador; Dalquen, Daniel A.; DeLuca, Todd; Forslund, Kristoffer; Huerta-Cepas, Jaime; Linard, Benjamin; Pereira, Cécile; Pryszcz, Leszek P.; Schreiber, Fabian; Sousa da Silva, Alan; Szklarczyk, Damian; Train, Clément-Marie; Bork, Peer; Lecompte, Odile; von Mering, Christian; Xenarios, Ioannis; Sjölander, Kimmen; Juhl Jensen, Lars; Martin, Maria J.; Muffato, Matthieu; Gabaldón, Toni; Lewis, Suzanna E.; Thomas, Paul D.; Sonnhammer, Erik; Dessimoz, Christophe

    2016-01-01

    The identification of evolutionarily related genes across different species—orthologs in particular—forms the backbone of many comparative, evolutionary, and functional genomic analyses. Achieving high accuracy in orthology inference is thus essential. Yet the true evolutionary history of genes, required to ascertain orthology, is generally unknown. Furthermore, orthologs are used for very different applications across different phyla, with different requirements in terms of the precision-recall trade-off. As a result, assessing the performance of orthology inference methods remains difficult for both users and method developers. Here, we present a community effort to establish standards in orthology benchmarking and facilitate orthology benchmarking through an automated web-based service (http://orthology.benchmarkservice.org). Using this new service, we characterise the performance of 15 well-established orthology inference methods and resources on a battery of 20 different benchmarks. Standardised benchmarking provides a way for users to identify the most effective methods for the problem at hand, sets a minimal requirement for new tools and resources, and guides the development of more accurate orthology inference methods. PMID:27043882

  14. Benchmarking Year Five Students' Reading Abilities

    ERIC Educational Resources Information Center

    Lim, Chang Kuan; Eng, Lin Siew; Mohamed, Abdul Rashid

    2014-01-01

    Reading and understanding a written text is one of the most important skills in English learning. This study attempts to benchmark Year Five students' reading abilities of fifteen rural schools in a district in Malaysia. The objectives of this study are to develop a set of standardised written reading comprehension and a set of indicators to inform…

  15. A MULTIMODEL APPROACH FOR CALCULATING BENCHMARK DOSE

    EPA Science Inventory


    A Multimodel Approach for Calculating Benchmark Dose
    Ramon I. Garcia and R. Woodrow Setzer

    In the assessment of dose response, a number of plausible dose-response models may give fits that are consistent with the data. If no dose response formulation had been speci...
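    One common way to combine several plausible dose-response fits is Akaike-weight model averaging of the benchmark dose (BMD). The sketch below is an illustrative version of that general idea, not the specific procedure of this EPA work; the fitted BMDs and AIC values are hypothetical inputs.

```python
import math

def model_averaged_bmd(bmds, aics):
    """Average benchmark doses across candidate models, weighting each model
    by its Akaike weight exp(-(AIC - AIC_min)/2), normalized to sum to 1."""
    a_min = min(aics)
    weights = [math.exp(-(a - a_min) / 2.0) for a in aics]
    total = sum(weights)
    return sum(w * b for w, b in zip(weights, bmds)) / total

# Two equally supported models simply average their BMDs; a model with a much
# worse AIC contributes a negligible weight.
print(model_averaged_bmd([10.0, 20.0], [100.0, 100.0]))  # 15.0
```

    The appeal of averaging is that the reported BMD no longer hinges on a single, possibly arbitrary, choice among models that fit the data comparably well.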

  16. Cleanroom Energy Efficiency: Metrics and Benchmarks

    SciTech Connect

    International SEMATECH Manufacturing Initiative; Mathew, Paul A.; Tschudi, William; Sartor, Dale; Beasley, James

    2010-07-07

    Cleanrooms are among the most energy-intensive types of facilities. This is primarily due to the cleanliness requirements that result in high airflow rates and system static pressures, as well as process requirements that result in high cooling loads. Various studies have shown that there is a wide range of cleanroom energy efficiencies and that facility managers may not be aware of how energy efficient their cleanroom facility can be relative to other cleanroom facilities with the same cleanliness requirements. Metrics and benchmarks are an effective way to compare one facility to another and to track the performance of a given facility over time. This article presents the key metrics and benchmarks that facility managers can use to assess, track, and manage their cleanroom energy efficiency or to set energy efficiency targets for new construction. These include system-level metrics such as air change rates, air handling W/cfm, and filter pressure drops. Operational data are presented from over 20 different cleanrooms that were benchmarked with these metrics and that are part of the cleanroom benchmark dataset maintained by Lawrence Berkeley National Laboratory (LBNL). Overall production efficiency metrics for cleanrooms in 28 semiconductor manufacturing facilities in the United States and recorded in the Fabs21 database are also presented.
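    Two of the system-level metrics named above, air change rate and air-handling W/cfm, are simple ratios. A minimal sketch (the example room size and fan power are illustrative, not from the LBNL dataset):

```python
def air_changes_per_hour(supply_airflow_cfm, room_volume_ft3):
    """Air change rate: room volumes of air supplied per hour."""
    return supply_airflow_cfm * 60.0 / room_volume_ft3

def watts_per_cfm(fan_power_watts, supply_airflow_cfm):
    """Air-handling efficiency: fan power normalized by airflow delivered."""
    return fan_power_watts / supply_airflow_cfm

# A 6,000 ft^3 cleanroom bay supplied at 1,000 cfm sees 10 air changes per hour.
print(air_changes_per_hour(1000.0, 6000.0))  # 10.0
print(watts_per_cfm(500.0, 1000.0))          # 0.5
```

    Normalized metrics like these are what make one facility comparable to another despite differences in size and cleanliness class.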

  17. Benchmarking in Universities: League Tables Revisited

    ERIC Educational Resources Information Center

    Turner, David

    2005-01-01

    This paper examines the practice of benchmarking universities using a "league table" approach. Taking the example of the "Sunday Times University League Table", the author reanalyses the descriptive data on UK universities. Using a linear programming technique, data envelope analysis (DEA), the author uses the re-analysis to…

  18. Algorithm and Architecture Independent Benchmarking with SEAK

    SciTech Connect

    Tallent, Nathan R.; Manzano Franco, Joseph B.; Gawande, Nitin A.; Kang, Seung-Hwa; Kerbyson, Darren J.; Hoisie, Adolfy; Cross, Joseph

    2016-05-23

    Many applications of high performance embedded computing are limited by performance or power bottlenecks. We have designed the Suite for Embedded Applications & Kernels (SEAK), a new benchmark suite, (a) to capture these bottlenecks in a way that encourages creative solutions; and (b) to facilitate rigorous, objective, end-user evaluation for their solutions. To avoid biasing solutions toward existing algorithms, SEAK benchmarks use a mission-centric (abstracted from a particular algorithm) and goal-oriented (functional) specification. To encourage solutions that are any combination of software or hardware, we use an end-user black-box evaluation that can capture tradeoffs between performance, power, accuracy, size, and weight. The tradeoffs are especially informative for procurement decisions. We call our benchmarks future proof because each mission-centric interface and evaluation remains useful despite shifting algorithmic preferences. It is challenging to create both concise and precise goal-oriented specifications for mission-centric problems. This paper describes the SEAK benchmark suite and presents an evaluation of sample solutions that highlights power and performance tradeoffs.

  19. Benchmarking 2010: Trends in Education Philanthropy

    ERIC Educational Resources Information Center

    Bearman, Jessica

    2010-01-01

    "Benchmarking 2010" offers insights into the current priorities, practices and concerns of education grantmakers. The report is divided into five sections: (1) Mapping the Education Grantmaking Landscape; (2) 2010 Funding Priorities; (3) Strategies for Leveraging Greater Impact; (4) Identifying Significant Trends in Education Funding; and (5)…

  20. Seven Benchmarks for Information Technology Investment.

    ERIC Educational Resources Information Center

    Smallen, David; Leach, Karen

    2002-01-01

    Offers benchmarks to help campuses evaluate their efforts in supplying information technology (IT) services. The first three help understand the IT budget, the next three provide insight into staffing levels and emphases, and the seventh relates to the pervasiveness of institutional infrastructure. (EV)

  1. Benchmarking Peer Production Mechanisms, Processes & Practices

    ERIC Educational Resources Information Center

    Fischer, Thomas; Kretschmer, Thomas

    2008-01-01

    This deliverable identifies key approaches for quality management in peer production by benchmarking peer production practices and processes in other areas. (Contains 29 footnotes, 13 figures and 2 tables.)[This report has been authored with contributions of: Kaisa Honkonen-Ratinen, Matti Auvinen, David Riley, Jose Pinzon, Thomas Fischer, Thomas…

  2. America = Las Americas. Canada, United States, Mexico.

    ERIC Educational Resources Information Center

    Toro, Leonor; And Others

    Written for teachers to use with migrant children in elementary grades and to highlight the many Americas, three magazines provide historical and cultural background information on Canada, the United States, and Mexico and feature biographies of Black and Hispanic leaders. Each edition has a table of contents indicating the language--Spanish…

  3. Finding a benchmark for monitoring hospital cleanliness.

    PubMed

    Mulvey, D; Redding, P; Robertson, C; Woodall, C; Kingsmore, P; Bedwell, D; Dancer, S J

    2011-01-01

    This study evaluated three methods for monitoring hospital cleanliness. The aim was to find a benchmark that could indicate risk to patients from a contaminated environment. We performed visual monitoring, ATP bioluminescence and microbiological screening of five clinical surfaces before and after detergent-based cleaning on two wards over a four-week period. Five additional sites that were not featured in the routine domestic specification were also sampled. Measurements from all three methods were integrated and compared in order to choose appropriate levels for routine monitoring. We found that visual assessment did not reflect ATP values nor environmental contamination with microbial flora including Staphylococcus aureus and meticillin-resistant S. aureus (MRSA). There was a relationship between microbial growth categories and the proportion of ATP values exceeding a chosen benchmark but neither reliably predicted the presence of S. aureus or MRSA. ATP values were occasionally diverse. Detergent-based cleaning reduced levels of organic soil by 32% (95% confidence interval: 16-44%; P<0.001) but did not necessarily eliminate indicator staphylococci, some of which survived the cleaning process. An ATP benchmark value of 100 relative light units offered the closest correlation with microbial growth levels <2.5 cfu/cm(2) (receiver operating characteristic (ROC) curve; sensitivity: 57%; specificity: 57%). In conclusion, microbiological and ATP monitoring confirmed environmental contamination, persistence of hospital pathogens and measured the effect on the environment from current cleaning practices. This study has provided provisional benchmarks to assist with future assessment of hospital cleanliness. Further work is required to refine practical sampling strategy and choice of benchmarks.

  4. VENUS-2 Experimental Benchmark Analysis

    SciTech Connect

    Pavlovichev, A.M.

    2001-09-28

    The VENUS critical facility is a zero power reactor located at SCK-CEN, Mol, Belgium, which for the VENUS-2 experiment utilized a mixed-oxide core with near-weapons-grade plutonium. In addition to the VENUS-2 Core, additional computational variants based on each type of fuel cycle VENUS-2 core (3.3 wt.% UO2, 4.0 wt.% UO2, and 2.0/2.7 wt.% MOX) were also calculated. The VENUS-2 critical configuration and cell variants have been calculated with MCU-REA, which is a continuous energy Monte Carlo code system developed at Russian Research Center ''Kurchatov Institute'' and is used extensively in the Fissile Materials Disposition Program. The calculations resulted in a k-eff of 0.99652 ± 0.00025 and relative pin powers within 2% for UO2 pins and 3% for MOX pins of the experimental values.

  5. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  6. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  7. 42 CFR 425.602 - Establishing the benchmark.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... computing an ACO's fixed historical benchmark that is adjusted for historical growth and beneficiary... program. (2) Makes separate expenditure calculations for each of the following populations of... making up the historical benchmark, determines national growth rates and trends expenditures for...

  8. Using Benchmarking To Influence Tuition and Fee Decisions.

    ERIC Educational Resources Information Center

    Hubbell, Loren W. Loomis; Massa, Robert J.; Lapovsky, Lucie

    2002-01-01

    Discusses the use of benchmarking in managing enrollment. Using a case study, illustrates how benchmarking can help administrators develop strategies for planning and implementing admissions and pricing practices. (EV)

  9. Mexico and Central America.

    PubMed

    Bronfman, M

    1998-01-01

    This article reviews the literature on migration and HIV/AIDS in Mexico and Central America, including Belize, Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, and Panama. Most migrants travel to the US through Mexico. US-Mexico trade agreements created opportunities for increased risk of HIV transmission. The research literature focuses on Mexico. Most countries, with the exception of Belize and Costa Rica, are sending countries. Human rights of migrants are violated in transit and at destination. Migration policies determine migration processes. The Mexican-born population in the US is about 3% of the US population and 8% of Mexico's population. About 22% arrived during 1992-97, and about 500,000 are naturalized US citizens. An additional 11 million have a Mexican ethnic background. Mexican migrants are usually economically active men who had jobs before leaving and were urban people who settled in California, Texas, Illinois, and Arizona. Most Mexican migrants enter illegally. Many return to Mexico. The main paths of HIV transmission are homosexual contact, heterosexual contact, and IV drug injection. Latino migrants frequently use prostitutes and adopt new sexual practices, including anal penetration among men, greater diversity of sexual partners, and use of injectable drugs.

  10. Benchmark Simulation Model No 2: finalisation of plant layout and default control strategy.

    PubMed

    Nopens, I; Benedetti, L; Jeppsson, U; Pons, M-N; Alex, J; Copp, J B; Gernaey, K V; Rosen, C; Steyer, J-P; Vanrolleghem, P A

    2010-01-01

    The COST/IWA Benchmark Simulation Model No 1 (BSM1) has been available for almost a decade. Its primary purpose has been to create a platform for control strategy benchmarking of activated sludge processes. The fact that the research work related to the benchmark simulation models has resulted in more than 300 publications worldwide demonstrates the interest in and need for such tools within the research community. Recent efforts within the IWA Task Group on "Benchmarking of control strategies for WWTPs" have focused on an extension of the benchmark simulation model. This extension aims at facilitating control strategy development and performance evaluation at a plant-wide level and, consequently, includes both the pretreatment of wastewater and the processes describing sludge treatment. The motivation for the extension is the increasing interest and need to operate and control wastewater treatment systems not only at an individual process level but also on a plant-wide basis. To facilitate the changes, the evaluation period has been extended to one year. A prolonged evaluation period allows for long-term control strategies to be assessed and enables the use of control handles that cannot be evaluated in a realistic fashion in the one-week BSM1 evaluation period. In this paper, the finalised plant layout is summarised and, as was done for BSM1, a default control strategy is proposed. A demonstration of how BSM2 can be used to evaluate control strategies is also given.

  11. Unstructured Adaptive (UA) NAS Parallel Benchmark. Version 1.0

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; VanderWijngaart, Rob; Biswas, Rupak; Mavriplis, Catherine

    2004-01-01

    We present a complete specification of a new benchmark for measuring the performance of modern computer systems when solving scientific problems featuring irregular, dynamic memory accesses. It complements the existing NAS Parallel Benchmark suite. The benchmark involves the solution of a stylized heat transfer problem in a cubic domain, discretized on an adaptively refined, unstructured mesh.

  12. Taking Stock of Corporate Benchmarking Practices: Panacea or Pandora's Box?

    ERIC Educational Resources Information Center

    Fleisher, Craig S.; Burton, Sara

    1995-01-01

    Discusses why corporate communications/public relations (cc/pr) should be benchmarked (an approach used by cc/pr managers to demonstrate the value of their activities to skeptical organizational executives). Discusses myths about cc/pr benchmarking; types, targets, and focus of cc/pr benchmarking; a process model; and critical decisions about…

  13. 42 CFR 422.258 - Calculation of benchmarks.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Calculation of benchmarks. 422.258 Section 422.258... and Plan Approval § 422.258 Calculation of benchmarks. (a) The term “MA area-specific non-drug monthly... the plan bids. (c) Calculation of MA regional non-drug benchmark amount. CMS calculates the...

  14. Discovering and Implementing Best Practices to Strengthen SEAs: Collaborative Benchmarking

    ERIC Educational Resources Information Center

    Building State Capacity and Productivity Center, 2013

    2013-01-01

    This paper is written for state educational agency (SEA) leaders who are considering the benefits of collaborative benchmarking, and it addresses the following questions: (1) What does benchmarking of best practices entail?; (2) How does "collaborative benchmarking" enhance the process?; (3) How do SEAs control the process so that "their" needs…

  15. Lupus Foundation of America

    MedlinePlus

  16. More Power to America.

    ERIC Educational Resources Information Center

    Miller, Willis H.

    1979-01-01

    Surveys America's current energy situation and considers means of attaining domestic energy self-sufficiency. Information is presented on hazards of decreasing energy production, traditional energy sources, and exotic energy sources. (Author/DB)

  17. Sarcoma Foundation of America

    MedlinePlus

  18. Developing nanotechnology in Latin America

    PubMed Central

    Shapira, Philip

    2008-01-01

    This article investigates the development of nanotechnology in Latin America with a particular focus on Argentina, Brazil, Chile, and Uruguay. Based on data for nanotechnology research publications and patents and suggesting a framework for analyzing the development of R&D networks, we identify three potential strategies of nanotechnology research collaboration. Then, we seek to identify the balance of emphasis upon each of the three strategies by mapping the current research profile of those four countries. In general, we find that they are implementing policies and programs to develop nanotechnologies but differ in their collaboration strategies, institutional involvement, and level of development. On the other hand, we find that they coincide in having a modest industry participation in research and a low level of commercialization of nanotechnologies. PMID:21170134

  19. Developing nanotechnology in Latin America

    NASA Astrophysics Data System (ADS)

    Kay, Luciano; Shapira, Philip

    2009-02-01

    This article investigates the development of nanotechnology in Latin America with a particular focus on Argentina, Brazil, Chile, and Uruguay. Based on data for nanotechnology research publications and patents and suggesting a framework for analyzing the development of R&D networks, we identify three potential strategies of nanotechnology research collaboration. Then, we seek to identify the balance of emphasis upon each of the three strategies by mapping the current research profile of those four countries. In general, we find that they are implementing policies and programs to develop nanotechnologies but differ in their collaboration strategies, institutional involvement, and level of development. On the other hand, we find that they coincide in having a modest industry participation in research and a low level of commercialization of nanotechnologies.

  20. Building America Top Innovations 2012: Model Simulating Real Domestic Hot Water Use

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes Building America research that is improving domestic hot water modeling capabilities to more effectively address one of the largest energy uses in residential buildings.

  1. Building America Top Innovations 2012: Integration of HVAC System Design with Simplified Duct Distribution

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes work by Building America research teams who field tested simplified duct designs in hundreds of homes, confirming the performance of short compact duct runs, with supply registers near interior walls.

  2. Building America Top Innovations 2012: High-Performance Affordable Housing with Habitat for Humanity

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes Building America support of Habitat for Humanity including researchers who wrote Habitat construction guides and teams that have worked with affiliates on numerous field projects.

  3. Building America Top Innovations 2012: High-Performance with Solar Electric Reduced Peak Demand

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes Building America solar home research that has demonstrated the ability to reduce peak demand by 75%. Numerous field studies have monitored power production and system effectiveness.

  4. Building America Top Innovations 2012: Thermal Bypass Air Barriers in the 2009 International Energy Conservation Code

    SciTech Connect

    none,

    2013-01-01

    This Building America Top Innovations profile describes Building America research supporting Thermal Bypass Air Barrier requirements. Since these were adopted in the 2009 IECC, close to one million homes have been mandated to include this vitally important energy efficiency measure.

  5. Gangs in Central America

    DTIC Science & Technology

    2007-08-02

    introduced – H.R. 1645 (Gutierrez), S. 330 (Isakson), and S. 1348 (Reid) – that includes provisions to increase cooperation among U.S., Mexican, and...America, Colombia, and Mexico, U.S. Attorney General Alberto Gonzales stated that “the United States stands with all of our neighbors in our joint fight...deportations on Central America. Legislation in the 110th Congress The 110th Congress has considered immigration legislation – H.R. 1645 (Gutierrez), S

  6. Mosquitoes of Middle America.

    DTIC Science & Technology

    1976-09-30

    survey of mosquitoes in Costa Rica, 1971. Walsh, Robert D., Aedes aegypti Eradication Program, Public Health Service.—Collections in St. Croix...At least a start has been made in nearly every major group of medical importance in Middle America: Aedes, Anopheles, Culex, Deinocerites, Haemagogus...fauna in the area covered by the project.

  7. Building America Industrialized Housing Partnership (BAIHP II)

    SciTech Connect

    Abernethy, Bob; Chandra, Subrato; Baden, Steven; Cummings, Jim; Cummings, Jamie; Beal, David; Chasar, David; Colon, Carlos; Dutton, Wanda; Fairey, Philip; Fonorow, Ken; Gil, Camilo; Gordon, Andrew; Hoak, David; Kerr, Ryan; Peeks, Brady; Kosar, Douglas; Hewes, Tom; Kalaghchy, Safvat; Lubliner, Mike; Martin, Eric; McIlvaine, Janet; Moyer, Neil; Liguori, Sabrina; Parker, Danny; Sherwin, John; Stroer, Dennis; Thomas-Rees, Stephanie; Daniel, Danielle; McIlvaine, Janet

    2010-11-30

    This report summarizes the work conducted by the Building America Industrialized Housing Partnership (BAIHP - www.baihp.org) during the final budget period (BP5) of our contract, January 1, 2010 to November 30, 2010. Highlights from the four previous budget periods are included for context. BAIHP is led by the Florida Solar Energy Center (FSEC) of the University of Central Florida. With over 50 Industry Partners including factory and site builders, work in BP5 was performed in six tasks areas: Building America System Research Management, Documentation and Technical Support; System Performance Evaluations; Prototype House Evaluations; Initial Community Scale Evaluations; Project Closeout, Final Review of BA Communities; and Other Research Activities.

  8. ASBench: benchmarking sets for allosteric discovery.

    PubMed

    Huang, Wenkang; Wang, Guanqiao; Shen, Qiancheng; Liu, Xinyi; Lu, Shaoyong; Geng, Lv; Huang, Zhimin; Zhang, Jian

    2015-08-01

    Allostery allows for the fine-tuning of protein function. Targeting allosteric sites is gaining increasing recognition as a novel strategy in drug design. The key challenge in the discovery of allosteric sites has strongly motivated the development of computational methods and thus high-quality, publicly accessible standard data have become indispensable. Here, we report benchmarking data for experimentally determined allosteric sites through a complex process, including a 'Core set' with 235 unique allosteric sites and a 'Core-Diversity set' with 147 structurally diverse allosteric sites. These benchmarking sets can be exploited to develop efficient computational methods to predict unknown allosteric sites in proteins and reveal unique allosteric ligand-protein interactions to guide allosteric drug design.

  9. Recommendations for Benchmarking Preclinical Studies of Nanomedicines.

    PubMed

    Dawidczyk, Charlene M; Russell, Luisa M; Searson, Peter C

    2015-10-01

    Nanoparticle-based delivery systems provide new opportunities to overcome the limitations associated with traditional small-molecule drug therapy for cancer and to achieve both therapeutic and diagnostic functions in the same platform. Preclinical trials are generally designed to assess therapeutic potential and not to optimize the design of the delivery platform. Consequently, progress in developing design rules for cancer nanomedicines has been slow, hindering progress in the field. Despite the large number of preclinical trials, several factors restrict comparison and benchmarking of different platforms, including variability in experimental design, reporting of results, and the lack of quantitative data. To solve this problem, we review the variables involved in the design of preclinical trials and propose a protocol for benchmarking that we recommend be included in in vivo preclinical studies of drug-delivery platforms for cancer therapy. This strategy will contribute to building the scientific knowledge base that enables development of design rules and accelerates the translation of new technologies.

  10. A new tool for benchmarking cardiovascular fluoroscopes.

    PubMed

    Balter, S; Heupler, F A; Lin, P J; Wondrow, M H

    2001-01-01

    This article reports the status of a new cardiovascular fluoroscopy benchmarking phantom. A joint working group of the Society for Cardiac Angiography and Interventions (SCA&I) and the National Electrical Manufacturers Association (NEMA) developed the phantom. The device was adopted as NEMA standard XR 21-2000, "Characteristics of and Test Procedures for a Phantom to Benchmark Cardiac Fluoroscopic and Photographic Performance," in August 2000. The test ensemble includes imaging field geometry, spatial resolution, low-contrast iodine detectability, working thickness range, visibility of moving targets, and phantom entrance dose. The phantom tests systems under conditions simulating normal clinical use for fluoroscopically guided invasive and interventional procedures. Test procedures rely on trained human observers.

  11. Toxicological benchmarks for wildlife. Environmental Restoration Program

    SciTech Connect

    Opresko, D.M.; Sample, B.E.; Suter, G.W.

    1993-09-01

    This report presents toxicological benchmarks for assessment of effects of 55 chemicals on six representative mammalian wildlife species (short-tailed shrew, white-footed mouse, cottontail rabbit, mink, red fox, and whitetail deer) and eight avian wildlife species (American robin, woodcock, wild turkey, belted kingfisher, great blue heron, barred owl, Cooper's hawk, and red-tailed hawk) (scientific names are presented in Appendix C). These species were chosen because they are widely distributed and provide a representative range of body sizes and diets. The chemicals are some of those that occur at United States Department of Energy (DOE) waste sites. The benchmarks presented in this report are values believed to be nonhazardous for the listed wildlife species.

  12. Parton distribution benchmarking with LHC data

    NASA Astrophysics Data System (ADS)

    Ball, Richard D.; Carrazza, Stefano; Del Debbio, Luigi; Forte, Stefano; Gao, Jun; Hartland, Nathan; Huston, Joey; Nadolsky, Pavel; Rojo, Juan; Stump, Daniel; Thorne, Robert S.; Yuan, C.-P.

    2013-04-01

    We present a detailed comparison of the most recent sets of NNLO PDFs from the ABM, CT, HERAPDF, MSTW and NNPDF collaborations. We compare parton distributions at low and high scales and parton luminosities relevant for LHC phenomenology. We study the PDF dependence of LHC benchmark inclusive cross sections and differential distributions for electroweak boson and jet production in the cases in which the experimental covariance matrix is available. We quantify the agreement between data and theory by computing the χ² for each data set with all the various PDFs. PDF comparisons are performed consistently for common values of the strong coupling. We also present a benchmark comparison of jet production at the LHC, comparing the results from various available codes and scale settings. Finally, we discuss the implications of the updated NNLO PDF sets for the combined PDF+αs uncertainty in the gluon fusion Higgs production cross section.
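
    The data-theory agreement measure used in such comparisons is the covariance-weighted χ² = (d − t)ᵀ C⁻¹ (d − t), where C is the experimental covariance matrix. A small sketch with illustrative numbers (not the actual LHC data or any PDF set's predictions):

```python
# Sketch of the covariance-weighted chi^2 between data and theory.
# All numbers below are invented for illustration.
import numpy as np

data = np.array([1.02, 0.98, 1.05])     # measured cross sections (arbitrary units)
theory = np.array([1.00, 1.00, 1.00])   # predictions for a given PDF set
cov = np.array([[0.010, 0.002, 0.000],  # experimental covariance matrix
                [0.002, 0.012, 0.003],
                [0.000, 0.003, 0.015]])

resid = data - theory
# Solve C x = resid rather than forming C^{-1} explicitly (better conditioned)
chi2 = resid @ np.linalg.solve(cov, resid)
print(f"chi2 = {chi2:.3f}, chi2/N = {chi2 / len(data):.3f}")
```

    A χ²/N near 1 indicates that a PDF set describes the data within the experimental uncertainties; repeating the calculation per data set and per PDF set gives the comparison tables reported in such benchmarking studies.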

  13. Specification for the VERA Depletion Benchmark Suite

    SciTech Connect

    Kim, Kang Seog

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the coupled neutronics and T-H simulation of the pressurized water reactor. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. Validating the depletion capability is a challenge because of insufficient measured data. One alternative is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes that can be used in the validation of the MPACT depletion capability.

  14. Benchmark On Sensitivity Calculation (Phase III)

    SciTech Connect

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James; Mennerdahl, Dennis; Golovko, Yury; Raskach, Kirill; Tsiboulia, Anatoly; Lee, Gil Soo; Woo, Sweng-Woong; Bidaud, Adrien; Patel, Amrit; Bledsoe, Keith C; Rearden, Bradley T; Gulliford, J.

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.
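
    The quantities being benchmarked are relative sensitivity coefficients of the k_eff eigenvalue to cross sections, S = (dk/k)/(dσ/σ). The participating codes compute these with perturbation theory; the toy sketch below instead uses a brute-force central difference on an invented one-group model, purely to illustrate what the coefficient means:

```python
# Central-difference estimate of a k_eff sensitivity coefficient,
# S = (dk/k)/(dsigma/sigma). The "transport solve" here is a toy
# one-group model, not any of the benchmark codes.

def k_eff(sigma_capture):
    # Toy one-group model: k = nu*sigma_f / (sigma_f + sigma_capture)
    nu, sigma_f = 2.5, 0.05
    return nu * sigma_f / (sigma_f + sigma_capture)

def sensitivity(k, sigma, rel_step=0.01):
    k0 = k(sigma)
    k_plus = k(sigma * (1 + rel_step))
    k_minus = k(sigma * (1 - rel_step))
    dk_over_k = (k_plus - k_minus) / k0
    return dk_over_k / (2 * rel_step)   # (dk/k) / (dsigma/sigma)

S = sensitivity(k_eff, 0.02)
print(f"capture sensitivity S = {S:.4f}")
```

    For this toy model the analytic answer is S = −σ_c/(σ_f + σ_c) ≈ −0.286, which the central difference reproduces closely; the benchmark's "implicit" (self-shielding) sensitivities arise because, in a real lattice calculation, perturbing one cross section also changes the self-shielded values of others.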

  15. Assessing and benchmarking multiphoton microscopes for biologists

    PubMed Central

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F.

    2017-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs. PMID:24974026

  16. Assessing and benchmarking multiphoton microscopes for biologists.

    PubMed

    Corbin, Kaitlin; Pinkard, Henry; Peck, Sebastian; Beemiller, Peter; Krummel, Matthew F

    2014-01-01

    Multiphoton microscopy has become a staple tool for tracking cells within tissues and organs due to superior depth of penetration, low excitation volumes, and reduced phototoxicity. Many factors, ranging from laser pulse width to relay optics to detectors and electronics, contribute to the overall ability of these microscopes to excite and detect fluorescence deep within tissues. However, we have found that there are few standard ways already described in the literature to distinguish between microscopes or to benchmark existing microscopes to measure the overall quality and efficiency of these instruments. Here, we discuss some simple parameters and methods that can either be used within a multiphoton facility or by a prospective purchaser to benchmark performance. This can both assist in identifying decay in microscope performance and in choosing features of a scope that are suited to experimental needs.

  17. Experimental Benchmarking of the Magnetized Friction Force

    SciTech Connect

    Fedotov, A. V.; Litvinenko, V. N.; Galnander, B.; Lofnes, T.; Ziemann, V.; Sidorin, A. O.; Smirnov, A. V.

    2006-03-20

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements were performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  18. EXPERIMENTAL BENCHMARKING OF THE MAGNETIZED FRICTION FORCE.

    SciTech Connect

    FEDOTOV, A.V.; GALNANDER, B.; LITVINENKO, V.N.; LOFNES, T.; SIDORIN, A.O.; SMIRNOV, A.V.; ZIEMANN, V.

    2005-09-18

    High-energy electron cooling, presently considered an essential tool for several applications in high-energy and nuclear physics, requires an accurate description of the friction force. A series of measurements were performed at CELSIUS with the goal of providing accurate data needed for the benchmarking of theories and simulations. Some results of an accurate comparison of experimental data with the friction force formulas are presented.

  19. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    calculated. These give a rough measure of the texture of each ROI. A gray-level co-occurrence matrix (GLCM) contains information about the spatial...sum and difference histograms. The descriptors chosen as features for this benchmark are GLCM entropy and GLCM energy, and are defined in terms of...stressmark, the relationships of pairs of pixels within a randomly generated image are measured. These features quantify the texture of the image

  20. Measurement Analysis When Benchmarking Java Card Platforms

    NASA Astrophysics Data System (ADS)

    Paradinas, Pierre; Cordry, Julien; Bouzefrane, Samia

    The advent of the Java Card standard has been a major turning point in smart card technology. With the growing acceptance of this standard, understanding the performance behaviour of these platforms is becoming crucial. To meet this need, we present in this paper, a benchmark framework that enables performance evaluation at the bytecode level. This paper focuses on the validity of our time measurements on smart cards.
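
    Validating time measurements at the bytecode level typically relies on repeating the operation many times and subtracting a measured baseline overhead. A hedged sketch of that measurement principle (the paper's Java Card framework is not shown; the workload and repetition counts here are illustrative only):

```python
# Sketch of baseline-subtracted timing, a common approach for
# fine-grained benchmarks where a single operation is too fast
# to time directly.
import time
import statistics

def measure(op, reps=1000, trials=31):
    """Median per-call wall-clock time of `op` over repeated trials."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        for _ in range(reps):
            op()
        samples.append((time.perf_counter() - t0) / reps)
    return statistics.median(samples)   # median resists timing outliers

def empty():            # baseline: loop and call overhead only
    pass

def workload():         # stand-in for the operation under test
    sum(range(50))

overhead = measure(empty)
total = measure(workload)
print(f"net cost per call: {total - overhead:.3e} s")
```

    Subtracting the empty-loop baseline isolates the cost of the operation itself, which is the kind of validity concern the paper examines for measurements taken on resource-constrained smart cards.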