Gerber, Alexander; Klingelhoefer, Doris; Groneberg, David; Bundschuh, Matthias
2014-09-01
To provide a critical evaluation of the quality and quantity of scientific efforts on antineutrophil cytoplasmic antibody (ANCA)-associated vasculitides (AAV) during the past 20 years. Scientometric benchmark procedures, density-equalizing mapping and large-scale data analysis were used to visualize bi- and multilateral research cooperation and institutional collaborations, and to identify the most successful countries, institutions, authors and journals concerned with AAV. The USA is the most productive supplier and has established its position as the center of international cooperation, with 22.5% of all publications, followed by Germany, the United Kingdom, France and Japan. The most successful international cooperation proved to be that between the USA, Germany and the UK. A distinct global pattern of research productivity and citation activity was revealed, with the USA and Germany holding both the highest h-index and the highest number of total citations, but Denmark, Sweden and the Netherlands leading with regard to citation rate. Some large and productive countries, such as Japan, China and Turkey, show only a few international collaborations. The present study represents the first detailed scientometric analysis and visualization of research quality and quantity on ANCA-associated vasculitides. It was shown that scientometric indicators such as the h-index, citation rate and impact factor, commonly used for the assessment of scientific quality, have to be viewed critically because of distortion by self-citation, co-authorship and language bias. Countries with considerable numbers of patients should enhance their international collaboration for the benefit of international scientific and clinical progress. © 2014 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
The "Science of HRD Research": Reshaping HRD Research through Scientometrics
ERIC Educational Resources Information Center
Wang, Greg G.; Gilley, Jerry W.; Sun, Judy Y.
2012-01-01
We explore opportunities for assessing and advancing Human Resource Development (HRD) research through an integrative literature review of scientometric theories and methods. Known as the "science of science," scientometrics is concerned with the quantitative study of scholarly communications, disciplinary structure and assessment and measurement…
Masic, Izet
2016-01-01
Performing scientific research is a process with several components: identifying the key research question(s), choosing the scientific approach for the study and data collection, analyzing the data, and finally reporting the results. Generally, peer review is a series of procedures for evaluating a creative work or performance by other people working in the same or a related field, with the aim of maintaining and improving the quality of work or performance in that field. The achievement of an individual scientist, and thus indirectly his or her reputation in the scientific community, is assessed through publications, especially journals, via the so-called impact factor index. The impact factor predicts or estimates how many annual citations an article may receive after its publication. The scientific productivity of researchers and scientists and their published articles can be evaluated through the so-called h-index. The quality of published results of scientific work largely depends on the knowledge sources used in their preparation, which means that both the purpose and the relevance of the information used should be considered. Scientometrics as a field of science covers all the aforementioned issues, and scientometric analysis is obligatory for assessing the scientific validity of published articles and other types of publications.
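Both indicators described in this abstract are simple to compute. A minimal sketch (not from the article; the citation counts and example numbers are invented):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Classic two-year journal impact factor: citations received in year Y to
    items published in Y-1 and Y-2, divided by the citable items of Y-1 and Y-2."""
    return citations_to_prior_two_years / citable_items_prior_two_years

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(impact_factor(250, 100))    # -> 2.5
```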
Vocational Guidance and Psychology in Spain: A Scientometric Study
ERIC Educational Resources Information Center
Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel
2013-01-01
Introduction: Studies that investigate research activity are possible by quantifying certain variables pertaining to articles published in specialized journals. Once quantified, numerical data are obtained that summarize characteristics of the research activity. These data are obtained through scientometric indicators. This is an objective and…
The Vocational Guidance Research Database: A Scientometric Approach
ERIC Educational Resources Information Center
Flores-Buils, Raquel; Gil-Beltran, Jose Manuel; Caballer-Miedes, Antonio; Martinez-Martinez, Miguel Angel
2012-01-01
The scientometric study of scientific output through publications in specialized journals cannot be undertaken exclusively with the databases available today. For this reason, the objective of this article is to introduce the "Base de Datos de Investigacion en Orientacion Vocacional" [Vocational Guidance Research Database], based on the…
ERIC Educational Resources Information Center
Ivancheva, Ludmila E.
2001-01-01
Discusses the concept of the hyperbolic or skew distribution as a universal statistical law in information science and socioeconomic studies. Topics include Zipf's law; Stankov's universal law; non-Gaussian distributions; and why most bibliometric and scientometric laws reveal characters of non-Gaussian distribution. (Author/LRW)
The Earth Science Research Network as Seen Through Network Analysis of the AGU
NASA Astrophysics Data System (ADS)
Narock, T.; Hasnain, S.; Stephan, R.
2017-12-01
Scientometrics is the science of science. Scientometric research includes measurements of impact, mapping of scientific fields, and the production of indicators for use in policy and management. We have leveraged network analysis in a scientometric study of the American Geophysical Union (AGU). Data from the AGU's Linked Data Abstract Browser were used to create visualization and analytics tools for exploring the Earth science research network. Our application applies network theory to examine network structure within the various AGU sections, identify key individuals and communities related to Earth science topics, and study multi-disciplinary collaboration across sections. Opportunities to optimize Earth science output, as well as policy and outreach applications, are discussed.
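A hedged sketch of this kind of analysis (the AGU co-authorship data are not reproduced here; a bundled toy graph stands in for a collaboration network): identifying communities and key individuals with networkx might look as follows.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy stand-in for a co-authorship network (nodes: researchers, edges: joint work).
G = nx.les_miserables_graph()

# Communities: clusters of researchers who collaborate mostly among themselves.
communities = greedy_modularity_communities(G)
print(f"{len(communities)} communities; largest has {len(communities[0])} members")

# Key individuals: the most connected nodes in the network.
hubs = sorted(G.degree, key=lambda nd: nd[1], reverse=True)[:5]
print("top hubs:", hubs)
```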
Laying the Foundations for Scientometric Research: A Data Science Approach
ERIC Educational Resources Information Center
Perron, Brian E.; Victor, Bryan G.; Hodge, David R.; Salas-Wright, Christopher P.; Vaughn, Michael G.; Taylor, Robert Joseph
2017-01-01
Objective: Scientometric studies of social work have stagnated due to problems with the organization and structure of the disciplinary literature. This study utilized data science to produce a set of research tools to overcome these methodological challenges. Method: We constructed a comprehensive list of social work journals for a 25-year time…
A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications
NASA Astrophysics Data System (ADS)
Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.
2018-04-01
Data on Indonesian research publications in the domain of biomedicine were collected and text mined for the purpose of a scientometric study. The goal is to build a predictive model that classifies research publications by their potential for downstreaming. The model is based on drug development processes adapted from the literature. We describe the effort to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine, and then investigate the problems associated with building the corpus and validating the model. Based on this experience, a framework is proposed for managing text-mining-based scientometric studies. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. The validity of the model is mainly supported by iterative, close interaction with domain experts, from identifying the issues and building a conceptual model through to labelling, validation and interpretation of the results.
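The abstract does not name the learning algorithm; a minimal sketch of such a publication classifier, under the assumption of TF-IDF features and a linear model over expert-labelled abstracts (the toy corpus and labels below are invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for expert-labelled biomedicine abstracts;
# 1 = judged to have downstreaming potential, 0 = not.
docs = [
    "preclinical efficacy of a candidate antimalarial compound",
    "phase I safety trial of a recombinant vaccine adjuvant",
    "prevalence survey of smoking among urban adolescents",
    "qualitative study of patient satisfaction in rural clinics",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["animal model efficacy of a new drug candidate"]))
```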
ERIC Educational Resources Information Center
Liu, Shuyan; Oakland, Thomas
2016-01-01
The objective of this current study is to identify the growth and development of scholarly literature that specifically references the term "school psychology" in the Science Citation Index from 1907 through 2014. Documents from Web of Science were accessed and analyzed through the use of scientometric analyses, including HistCite and…
A scientometrics and social network analysis of Malaysian research in physics
NASA Astrophysics Data System (ADS)
Tan, H. X.; Ujum, E. A.; Ratnavelu, K.
2014-03-01
This conference proceeding presents an empirical assessment of the domestic publication output and the structure of scientific collaboration of Malaysian authors in the field of physics. Journal articles with Malaysian addresses in the subject area "Physics" and its sub-disciplines were retrieved from the Thomson Reuters Web of Knowledge database for the years 1980 to 2011. A scientometric and social network analysis of the Malaysian physics field was conducted to examine publication growth and the distribution of domestic collaborative publications; the giant component; and the degree, closeness, and betweenness centralisation scores of the domestic co-authorship networks. Using these methods, we are able to gain insights into the evolution of collaboration and the scientometric dimensions of Malaysian physics research over time.
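A sketch of the listed network measures on a toy graph (the Malaysian co-authorship data are not public here); the centralization computation follows Freeman's standard definition, which may differ in detail from the paper's:

```python
import networkx as nx

# Toy co-authorship graph; nodes are authors, edges are joint papers.
G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "F")])

# Giant component: the largest connected group of co-authors.
giant = G.subgraph(max(nx.connected_components(G), key=len))

deg = nx.degree_centrality(giant)
clo = nx.closeness_centrality(giant)
bet = nx.betweenness_centrality(giant)

# Freeman degree centralization: 1 for a perfect star, 0 for a regular graph
# (requires more than two nodes in the component).
n = giant.number_of_nodes()
c_max = max(deg.values())
centralization = sum(c_max - c for c in deg.values()) / (n - 2)

print("giant component:", sorted(giant.nodes()))
print("most-between author:", max(bet, key=bet.get))
print(f"degree centralization: {centralization:.2f}")
```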
Scientometric analysis and mapping of scientific articles on Behcet's disease.
Shahram, Farhad; Jamshidi, Ahmad-Reza; Hirbod-Mobarakeh, Armin; Habibi, Gholamreza; Mardani, Amir; Ghaemi, Marjan
2013-04-01
Behçet's disease (BD) is a systemic vasculitis with oral and genital aphthous ulceration, uveitis, skin manifestations, arthritis and neurological involvement. Many investigators have published articles on BD in the two decades since the introduction of diagnostic criteria by the International Study Group for Behçet's Disease in 1990; however, no scientometric analysis of this growing body of literature has been available. A scientometric analysis method was used to obtain a view of scientific articles about BD published between 1990 and 2010, by retrieving data from ISI Web of Science. Specific features such as publication year, language of the article, geographical distribution, main journals in this field, institutional affiliation and citation characteristics were retrieved and analyzed. International collaboration was analyzed using the Intcoll and Pajek software packages. There was a growing trend in the number of BD articles from 1990 to 2010, and the number of citations to the BD literature increased around 5.5-fold in this period. The countries found to have the highest output were Turkey, Japan, the USA and England; the first two universities were from Turkey. Most of the top 10 journals publishing BD articles were in the field of rheumatology, consistent with the subject areas of the articles. There was a correlation between the citations per paper and the impact factor of the publishing journal. This is the first scientometric analysis of BD, showing the scientometric characteristics of ISI publications on BD. © 2013 The Authors International Journal of Rheumatic Diseases © 2013 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.
ERIC Educational Resources Information Center
Milojevic, Staša
2013-01-01
Introduction: Disciplinarity and other forms of differentiation in science have long been studied in the fields of science and technology studies, information science and scientometrics. However, it is not obvious whether these fields are building on each other's findings. Methods: An analysis is made of 609 articles on disciplinarity…
ERIC Educational Resources Information Center
Maurer, Hermann; Khan, Muhammad Salman
2010-01-01
Purpose: The purpose of this paper is to provide a scientometric and content analysis of the studies in the field of e-learning that were published in five Social Science Citation Index (SSCI) journals ("Journal of Computer Assisted Learning, Computers & Education, British Journal of Educational Technology, Innovations in Education and Teaching…
Benchmarking: your performance measurement and improvement tool.
Senn, G F
2000-01-01
Many respected professional healthcare organizations and societies today are seeking to establish data-driven performance measurement strategies such as benchmarking. Clinicians are, however, resistant to "benchmarking" that is based on financial data alone, concerned that it may be adverse to patients' best interests. Benchmarking of clinical procedures that uses physicians' codes, such as Current Procedural Terminology (CPT) codes, has greater credibility with practitioners. Better Performers, organizations that can perform procedures successfully at lower cost and in less time, become the "benchmark" against which other organizations can measure themselves. The Better Performers' strategies can be adopted by other facilities to save time or money while maintaining quality patient care.
Dumitrascu, Dan L
2018-01-01
There is competition between scientific journals to achieve leadership in their scientific field. Several Romanian biomedical journals are published in English and a smaller number in Romanian, and a periodical analysis of their visibility and ranking according to scientometric measures is needed. We searched all biomedical journals indexed in international databases (Web of Science, PubMed, Scopus, Embase, Google Scholar) and analyzed their evaluation factors. Several Romanian biomedical journals are indexed in international databases, but their scientometric indexes are not high. The best journal was acquired by an international publisher and is no longer listed for Romania. The Romanian biomedical journals indexed in international databases deserve periodical analysis, and there is a need to improve their ranking.
Krampen, Günter
Scientometrically examines trends in, and the recent situation of, research on and teaching of the history of psychology in the German-speaking countries, and compares the findings with the situation in other countries (mainly the United States) by means of the psychology databases PSYNDEX and PsycINFO. Declines in publications on the history of psychology since the 1990s are described scientometrically for both research communities. Some impulses are suggested for the future of research on and the teaching of the history of psychology. These include (1) the necessity and significance of an intensified use of quantitative, unobtrusive scientometric methods in historiography in times of digital "big data"; (2) the necessity of, and possibilities for, integrating qualitative and quantitative methodologies in historical research and teaching; (3) the reasonableness of interdisciplinary cooperation among specialist historians, scientometricians, and psychologists; and (4) the meaningfulness and necessity of exploring, investigating, and teaching more intensively the past and the problem history of psychology, as well as the understanding of the subject matter of psychology in its historical development in cultural contexts. The outlook on the future of such a more up-to-date research on and teaching of the history of psychology is, with some caution, positive.
Caesarean Section--A Density-Equalizing Mapping Study to Depict Its Global Research Architecture.
Brüggmann, Dörthe; Löhlein, Lena-Katharina; Louwen, Frank; Quarcoo, David; Jaque, Jenny; Klingelhöfer, Doris; Groneberg, David A
2015-11-17
Caesarean section (CS) is a common surgical procedure. Although it has been performed in a modern context for about 100 years, there is no concise analysis of the international architecture of caesarean section research output available so far. Therefore, the present study characterizes the global pattern of the related publications by using the NewQIS (New Quality and Quantity Indices in Science) platform, which combines scientometric methods with density equalizing mapping algorithms. The Web of Science was used as a database. 12,608 publications were identified that originated from 131 countries. The leading nations concerning research activity, overall citations and country-specific h-Index were the USA and the United Kingdom. Relation of the research activity to epidemiologic data indicated that Scandinavian countries including Sweden and Finland were leading the field, whereas, in relation to economic data, countries such as Israel and Ireland led. Semi-qualitative indices such as country-specific citation rates ranked Sweden, Norway and Finland in the top positions. International caesarean section research output continues to grow annually in an era where caesarean section rates increased dramatically over the past decades. With regard to increasing employment of scientometric indicators in performance assessment, these findings should provide useful information for those tasked with the improvement of scientific achievements.
Scientific Production of Medical Universities in the West of Iran: a Scientometric Analysis.
Rasolabadi, Masoud; Khaledi, Shahnaz; Khayati, Fariba; Kalhor, Marya Maryam; Penjvini, Susan; Gharib, Alireza
2015-08-01
This study aimed to compare scientific production by providing a quantitative evaluation of science output at five Western Iranian medical universities (Hamedan, Ilam, Kermanshah, Kurdistan and Lorestan Universities of Medical Sciences) using scientometric indicators, based on data indexed in Scopus for the period 2010 to 2014. In this scientometric study, data were collected from the Scopus database; both its search and analysis features were used for data retrieval and analysis. We used scientometric indicators including the number of publications, number of citations, nationalization index (NI), internationalization index (INI), h-index, average number of citations per paper, and growth index. The five universities produced over 3011 articles from 2010 to 2014. These articles were cited 7158 times, an average of 4.2 citations per article. The h-indices of the universities under study varied from 14 to 30. Ilam University of Medical Sciences had the highest international collaboration, with an INI of 0.33, compared to the Hamedan and Kermanshah universities with INIs of 0.20 and 0.16, respectively. The lowest international collaboration belonged to Lorestan University of Medical Sciences (0.07), and the highest growth index to Kurdistan University of Medical Sciences (69.7). Although the scientific production of the five universities was increasing, the trend was not stable. To achieve better performance, it is recommended that these universities stabilize their budgeting and investment policies in research.
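The abstract does not define NI and INI; a common reading, assumed in the sketch below, is the share of a university's papers co-authored with other domestic institutions (NI) or with at least one foreign institution (INI). All records are hypothetical:

```python
# Hypothetical publication records for one university.
papers = [
    {"countries": {"IR"}, "institutions": {"Kurdistan UMS", "Tehran UMS"}},
    {"countries": {"IR", "DE"}, "institutions": {"Kurdistan UMS", "Charite"}},
    {"countries": {"IR"}, "institutions": {"Kurdistan UMS"}},
]
cites = [12, 3, 0]  # hypothetical citation counts, one per paper

n = len(papers)
# INI: share of papers with a co-author from another country.
ini = sum(len(p["countries"]) > 1 for p in papers) / n
# NI: share of papers with domestic-only, multi-institution collaboration.
ni = sum(len(p["countries"]) == 1 and len(p["institutions"]) > 1 for p in papers) / n

print(f"INI = {ini:.2f}, NI = {ni:.2f}, citations per paper = {sum(cites) / n:.1f}")
```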
Yazdani, Kamran; Rahimi-Movaghar, Afarin; Nedjat, Saharnaz; Ghalichi, Leila; Khalili, Malahat
2015-01-01
Since Tehran University of Medical Sciences (TUMS) has the oldest and the highest number of research centers among all Iranian medical universities, this study was conducted to evaluate the scientific output of research centers affiliated with TUMS using scientometric indices and to identify the affecting factors; a number of scientometric indicators were also introduced. This cross-sectional study evaluated a 5-year scientific performance of the research centers of TUMS. Data were collected through questionnaires, annual evaluation reports of the Ministry of Health, and the Scopus database. We used appropriate measures of central tendency and variation for descriptive analyses, and uni- and multi-variable linear regression to evaluate the effect of independent factors on the scientific output of the centers. The medians of the numbers of papers and books during the 5-year period were 150.5 and 2.5, respectively. The median of "articles per researcher" was 19.1. Based on multiple linear regression, younger centers (p=0.001), having a separate budget line (p=0.016), and the number of research personnel (p<0.001) had a direct significant correlation with the number of articles, while real properties had an inverse significant correlation with it (p=0.004). The results can help policy makers and research managers allocate sufficient resources to improve the current situation of the centers. Newly adopted and effective scientometric indices are suggested for evaluating the scientific output and functions of these centers.
[An analysis of Chilean biomedical publications in PubMed in the years 2008-2009].
Valdés S, Gloria; Pérez G, Fernanda; Reyes B, Humberto
2015-08-01
During the years 2008 and 2009, 1,191 biomedical articles authored by Chilean investigators working in Chile were indexed in PubMed. To evaluate the potential visibility of those articles according to the scientometric indexes of the journals where they were published, the journals were identified and each journal's Impact Factor (JIF), 5-year JIF, SCImago Journal Rank (SJR), SCImago quartile (Q) for 2010, and Source Normalized Impact per Paper (SNIP) for 2008-2009 were determined. Three hundred and twelve articles (26.2%) were dedicated to experimental studies in animals, tissues or cells and were classified as Biomedicine, while 879 (73.8%) were classified as Clinical Medicine; in both areas the main type of article was the original report (90% and 73.6%, respectively). Revista Médica de Chile and Revista Chilena de Infectología concentrated the greatest number of publications. Articles classified as Biomedicine were published more frequently in English and in journals with higher scientometric indexes than those classified as Clinical Medicine. Biomedical articles dealing with clinical topics, particularly case reports, were published mostly in national journals or in foreign journals with low scientometric indexes, which is partly attributable to the authors' interest in reaching local readers. The evaluation of research productivity should combine several scientometric indexes, selected according to the field of research and the interests of the institution and investigators, with a qualitative and multifactorial assessment.
Scalable randomized benchmarking of non-Clifford gates
NASA Astrophysics Data System (ADS)
Cross, Andrew; Magesan, Easwar; Bishop, Lev; Smolin, John; Gambetta, Jay
Randomized benchmarking is a widely used experimental technique to characterize the average error of quantum operations. Benchmarking procedures that scale to enable characterization of n-qubit circuits rely on efficient procedures for manipulating those circuits and, as such, have been limited to subgroups of the Clifford group. However, universal quantum computers require additional, non-Clifford gates to approximate arbitrary unitary transformations. We define a scalable randomized benchmarking procedure over n-qubit unitary matrices that correspond to protected non-Clifford gates for a class of stabilizer codes. We present efficient methods for representing and composing group elements, sampling them uniformly, and synthesizing corresponding poly (n) -sized circuits. The procedure provides experimental access to two independent parameters that together characterize the average gate fidelity of a group element. We acknowledge support from ARO under Contract W911NF-14-1-0124.
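Randomized benchmarking experiments of this kind are conventionally analyzed by fitting the average survival probability to an exponential decay F(m) = A·p^m + B over sequence length m; a generic fitting sketch with synthetic data (not this work's protocol or data):

```python
import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    """Standard randomized-benchmarking decay model."""
    return A * p**m + B

rng = np.random.default_rng(0)
m = np.array([1, 5, 10, 20, 50, 100])
F = 0.5 * 0.99**m + 0.5 + rng.normal(0, 0.005, m.size)  # synthetic survival data

(A, p, B), _ = curve_fit(rb_decay, m, F, p0=[0.5, 0.98, 0.5])
d = 2                        # single-qubit example; d = 2**n in general
r = (1 - p) * (d - 1) / d    # average error per group element
print(f"decay parameter p = {p:.4f}, average error r = {r:.2e}")
```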
Scientometrics: Nature Index and Brazilian science.
Silva, Valter
2016-09-01
A recently published newspaper article commented on the (lack of) quality of Brazilian science and its (in)efficiency. The newspaper article was based on a special issue of Nature and on a new resource for scientometrics called the Nature Index. I present here arguments and sources of bias showing that, in light of the principle in dubio pro reo, it is questionable to dispute the quality and efficiency of Brazilian science on these grounds, as the referred article did. A brief overview of Brazilian science is provided for readers to make their own judgment.
Developing a Benchmarking Process in Perfusion: A Report of the Perfusion Downunder Collaboration
Baker, Robert A.; Newland, Richard F.; Fenton, Carmel; McDonald, Michael; Willcox, Timothy W.; Merry, Alan F.
2012-01-01
Improving and understanding clinical practice is an appropriate goal for the perfusion community. The Perfusion Downunder Collaboration has established a multi-center perfusion-focused database aimed at achieving these goals through the development of quantitative quality indicators for clinical improvement through benchmarking. Data were collected using the Perfusion Downunder Collaboration database from procedures performed in eight Australian and New Zealand cardiac centers between March 2007 and February 2011. At the Perfusion Downunder Meeting in 2010, it was agreed by consensus to report quality indicators (QI) for glucose level, arterial outlet temperature, and pCO2 management during cardiopulmonary bypass. The values chosen for each QI were: blood glucose ≥4 mmol/L and ≤10 mmol/L; arterial outlet temperature ≤37°C; and arterial blood gas pCO2 ≥35 and ≤45 mmHg. The QI data were used to derive benchmarks using the Achievable Benchmark of Care (ABC™) methodology to identify the incidence of QIs at the best performing centers. Five thousand four hundred and sixty-five procedures were evaluated to derive QI and benchmark data. The incidence of the blood glucose QI ranged from 37-96% of procedures, with a benchmark value of 90%. The arterial outlet temperature QI occurred in 16-98% of procedures with a benchmark of 94%, while the arterial pCO2 QI occurred in 21-91%, with a benchmark value of 80%. We have derived QIs and benchmark calculations for the management of several key aspects of cardiopulmonary bypass to provide a platform for improving the quality of perfusion practice. PMID:22730861
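The ABC™ methodology cited here is commonly described (Kiefe et al.) as ranking providers by the Bayesian-adjusted performance fraction (x+1)/(d+2) and pooling the best performers until they cover at least 10% of all cases; a sketch under that description, with invented center data:

```python
# Each tuple: (cases meeting the quality indicator, total cases) per center.
centers = [(190, 200), (450, 500), (300, 400), (80, 90), (37, 100)]

# Rank by adjusted performance fraction to avoid small-denominator bias.
ranked = sorted(centers, key=lambda c: (c[0] + 1) / (c[1] + 2), reverse=True)
total_cases = sum(d for _, d in centers)

x_sum = d_sum = 0
for x, d in ranked:
    x_sum, d_sum = x_sum + x, d_sum + d
    if d_sum >= 0.1 * total_cases:   # best centers covering >=10% of all cases
        break

print(f"ABC benchmark = {100 * x_sum / d_sum:.1f}%")
```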
Jamali, Jamshid; Salehi-Marzijarani, Mohammad; Ayatollahi, Seyyed Mohammad Taghi
2014-12-01
Awareness of the latest scientific research and publishing in top journals are major concerns of health researchers. In this study, we first introduce the top journals in the field of obstetrics and gynecology based on their Impact Factor (IF), Eigenfactor Score (ES) and SCImago Journal Rank (SJR) indicator as indexed in Scopus, and then present the scientometric features of longitudinal changes in SJR in this field. In our analytical and bibliometric study, we included all journals in the field of obstetrics and gynecology indexed by Scopus from 1999 to 2013. The scientometric features in Scopus were derived from the SCImago Institute, and IF and ES were obtained from the Journal Citation Reports of the Institute for Scientific Information. Generalized Estimating Equations (GEE) were used to assess the scientometric features affecting SJR. Of the 256 journals reviewed, 54.2% and 41.8% were indexed in PubMed and the Web of Science, respectively. Human Reproduction Update ranked first based on IF (5.924±2.542) and SJR (2.682±1.185), and the American Journal of Obstetrics and Gynecology ranked first based on ES (0.05685±0.00633). Time, indexing in PubMed, h-index, citable documents, cites per document, and IF affected changes in SJR over the study period. Our study showed a significant association between SJR and scientometric features in obstetrics and gynecology journals. Given this relationship, SJR may be an appropriate index for assessing journal quality.
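A sketch of the kind of GEE model described, using statsmodels on a synthetic journal-year panel (the paper's data and exact covariates are not reproduced here):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic panel: 30 journals observed yearly over 1999-2013.
rng = np.random.default_rng(1)
n_journals, years = 30, np.arange(1999, 2014)
df = pd.DataFrame({
    "journal_id": np.repeat(np.arange(n_journals), years.size),
    "year": np.tile(years, n_journals),
    "impact_factor": rng.gamma(2.0, 1.0, n_journals * years.size),
})
df["SJR"] = 0.3 + 0.4 * df["impact_factor"] + rng.normal(0.0, 0.2, len(df))

# GEE with an exchangeable working correlation within each journal.
model = smf.gee("SJR ~ year + impact_factor", groups="journal_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
print(model.fit().summary())
```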
Zhang, Yin; Wang, Lei; Diao, Tianxi
2013-12-01
The Clinical and Translational Science Awards (CTSA) program is one of the most important initiatives in translational medical funding. The quantitative evaluation of the efficiency and performance of the CTSA program has a significant referential meaning for the decision making of global translational medical funding. Using science mapping and scientometric analytic tools, this study quantitatively analyzed the scientific articles funded by the CTSA program. The results of the study showed that the quantitative productivities of the CTSA program had a stable increase since 2008. In addition, the emerging trends of the research funded by the CTSA program covered clinical and basic medical research fields. The academic benefits from the CTSA program were assisting its members to build a robust academic home for the Clinical and Translational Science and to attract other financial support. This study provided a quantitative evaluation of the CTSA program based on science mapping and scientometric analysis. Further research is required to compare and optimize other quantitative methods and to integrate various research results. © 2013 Wiley Periodicals, Inc.
Scientometric indicators for Brazilian research on High Energy Physics, 1983-2013.
Alvarez, Gonzalo R; Vanz, Samile A S; Barbosa, Marcia C
2017-01-01
This article presents an analysis of Brazilian research on High Energy Physics (HEP) indexed by the Web of Science (WoS) from 1983 to 2013. Scientometric indicators for output, collaboration and impact were used to characterize the field under study. The results show that Brazilian articles account for 3% of total HEP research worldwide and that the sharp rise in scientific activity between 2009 and 2013 may have resulted from the consolidation of graduate programs, the increase in funding and international collaboration, and the implementation of the Rede Nacional de Física de Altas Energias (RENAFAE) in 2008. Our results also indicate that the collaboration patterns in terms of authors, institutions and countries confirm the presence of Brazil in multinational Big Science experiments, which may also explain the prevalence of foreign citing documents (all types), emphasizing the international prestige and visibility of the output of Brazilian scientists. We conclude that the scientometric indicators suggest scientific maturity in the Brazilian HEP community, owing to its long history of experimental research.
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations of the insecticide dieldrin in sediment which are protective of the presence of benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it acco...
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations of the insecticide endrin in sediment which are protective of the presence of benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it accoun...
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations of PAH mixtures in sediment which are protective of the presence of benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it accounts for t...
Lira, Renan Bezerra; de Carvalho, André Ywata; de Carvalho, Genival Barbosa; Lewis, Carol M; Weber, Randal S; Kowalski, Luiz Paulo
2016-07-01
Quality assessment is a major tool for the evaluation of health care delivery. In head and neck surgery, the University of Texas MD Anderson Cancer Center (MD Anderson) has defined quality standards by publishing benchmarks. We conducted an analysis of 360 head and neck surgeries performed at the AC Camargo Cancer Center (AC Camargo). The procedures were stratified into low-acuity procedures (LAPs) or high-acuity procedures (HAPs), and outcome indicators were compared to MD Anderson benchmarks. Of the 360 cases, 332 were LAPs (92.2%) and 28 were HAPs (7.8%). Patients with any comorbid condition had a higher incidence of negative outcome indicators (p = .005). In the LAPs, we achieved the MD Anderson benchmarks in all outcome indicators. In HAPs, the rate of surgical site infection and the length of hospital stay were higher than established by the benchmarks. Quality assessment of head and neck surgery is possible and should be disseminated, improving effectiveness in health care delivery. Head Neck 38: 1002-1007, 2016. © 2015 Wiley Periodicals, Inc.
This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it account...
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations of metal mixtures in sediment which are protective of the presence of benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it accounts for t...
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations for 32 nonionic organic chemicals in sediment which are protective of the presence of freshwater and marine benthic organisms. The equilibrium partitioning (EqP) approach...
Liebe, J D; Hübner, U
2013-01-01
Continuous improvement of IT performance in healthcare organisations requires actionable performance indicators, regularly conducted independent measurements, and meaningful, scalable reference groups. Existing IT-benchmarking initiatives have focussed on the development of reliable and valid indicators, but less on the question of how to implement an environment for conducting easily repeatable and scalable IT-benchmarks. This study aims at developing and trialling a procedure that meets these requirements. We chose a well-established, regularly conducted (inter-)national IT survey of healthcare organisations (IT-Report Healthcare) as the environment and offered the participants of the 2011 survey (CIOs of hospitals) entry into a benchmark. The 61 structural and functional performance indicators covered, among others, the implementation status and integration of IT systems and functions, global user satisfaction, and the resources of the IT department. Healthcare organisations were grouped by size and ownership. The benchmark results were made available electronically, and feedback on the use of these results was requested after several months. Fifty-nine hospitals participated in the benchmarking. Reference groups consisted of up to 141 members, depending on the number of beds (size) and the ownership (public vs. private). A total of 122 charts showing single-indicator frequency views were sent to each participant. The evaluation showed that 94.1% of the CIOs who took part in the evaluation considered the benchmarking beneficial and reported that they would enter again. Based on the participants' feedback, we developed two additional views that provide a more consolidated picture. The results demonstrate that establishing an independent, easily repeatable and scalable IT-benchmarking procedure is possible and was deemed desirable. Based on these encouraging results, a new benchmarking round that includes process indicators is currently being conducted.
European and US publications in the 50 highest ranking pathology journals from 2000 to 2006.
Fritzsche, F R; Oelrich, B; Dietel, M; Jung, K; Kristiansen, G
2008-04-01
To analyse the contributions of the 15 primary member states of the European Union and selected non-European countries to pathology research between 2000 and 2006, pathology journals were screened using the ISI Web of Knowledge database. The number of publications and the related impact factors were determined for each country, and relevant socioeconomic indicators were related to the scientific output. Subsequently, the results were compared to publications in 10 of the leading biomedical journals. Research output remained generally stable. In Europe, the UK, Germany, France, Italy and Spain ranked top in contributions to publications and impact factors, both in the pathology journals and in the leading general biomedical journals. With regard to socioeconomic data, smaller, mainly northern European countries showed relatively higher efficiency; of the larger countries, the UK is the most efficient in that respect. The rising economic powers China and India were consistently in the rear. The results mirror the leading role of the USA in pathology research but also show the relevance of European scientists. The scientometric approach in this study provides a new fundamental and comparative overview of pathology research in the European Union and the USA, which could help to benchmark scientific output among countries.
ERIC Educational Resources Information Center
Stern, Luli; Ahlgren, Andrew
2002-01-01
Project 2061 of the American Association for the Advancement of Science (AAAS) developed and field-tested a procedure for analyzing curriculum materials, including assessments, in terms of contribution to the attainment of benchmarks and standards. Using this procedure, Project 2061 produced a database of reports on nine science middle school…
Interactive visual optimization and analysis for RFID benchmarking.
Wu, Yingcai; Chung, Ka-Kei; Qu, Huamin; Yuan, Xiaoru; Cheung, S C
2009-01-01
Radio frequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation.
Paul Hagenmüller's contribution to solid state chemistry: A scientometric analysis
NASA Astrophysics Data System (ADS)
El Aichouchi, Adil; Gorry, Philippe
2018-06-01
Paul Hagenmüller (1921-2017) is an important figure of French solid-state chemistry, who enjoyed scientific and institutional recognition. He published 796 papers and has been cited more than 16,000 times. This paper explores Hagenmüller's work using scientometric analysis to reveal the impact of his work, his main research topics and his collaborations. Although Hagenmüller was a recognized scientist, a subset of his work, now highly cited, attracted little attention at the time of publication. To understand this phenomenon, we detect and study papers with delayed recognition, also called 'Sleeping Beauties' (SBs). In scientometrics, SBs are publications that go unnoticed, or 'sleep' for a long time before suddenly attracting a lot of attention in terms of citations. We identify 7 SBs published between 1965 and 1985, and awakened between 1993 and 2010. The first SB reports the discovery of the clathrate structure of silicon. The second reports the isolation of four new phases with the formula NaxCoO2 (x ≤ 1). The five other SBs investigate the electrochemical intercalation and deintercalation of sodium, and the structure and properties of layered oxides. Through interviews with his coworkers, we attempt to identify the reasons for the delayed recognition and the context of the renewed interest in those papers.
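Sleeping Beauties are typically quantified with the beauty coefficient B of Ke et al. (PNAS 2015); whether this paper uses that exact measure is not stated, but the sketch below implements the standard definition with invented yearly citation counts:

```python
def beauty_coefficient(c):
    """Beauty coefficient B of Ke et al. (PNAS 2015).
    c[t] = citations received t years after publication."""
    t_m = max(range(len(c)), key=lambda t: c[t])      # year of the citation peak
    if t_m == 0:
        return 0.0                                    # peak in year 0: no sleep
    slope = (c[t_m] - c[0]) / t_m                     # reference line from c_0 to the peak
    return sum((slope * t + c[0] - c[t]) / max(1, c[t]) for t in range(t_m + 1))

sleeper = [0, 1, 0, 1, 2, 1, 2, 3, 10, 25, 60]        # long sleep, late burst
print(f"B = {beauty_coefficient(sleeper):.1f}")
```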
IT-benchmarking of clinical workflows: concept, implementation, and evaluation.
Thye, Johannes; Straede, Matthias-Christopher; Liebe, Jan-David; Hübner, Ursula
2014-01-01
Due to the emerging evidence of health IT as opportunity and risk for clinical workflows, health IT must undergo a continuous measurement of its efficacy and efficiency. IT-benchmarks are a proven means for providing this information. The aim of this study was to enhance the methodology of an existing benchmarking procedure by including, in particular, new indicators of clinical workflows and by proposing new types of visualisation. Drawing on the concept of information logistics, we propose four workflow descriptors that were applied to four clinical processes. General and specific indicators were derived from these descriptors and processes. 199 chief information officers (CIOs) took part in the benchmarking. These hospitals were assigned to reference groups of a similar size and ownership from a total of 259 hospitals. Stepwise and comprehensive feedback was given to the CIOs. Most participants who evaluated the benchmark rated the procedure as very good, good, or rather good (98.4%). Benchmark information was used by CIOs for getting a general overview, advancing IT, preparing negotiations with board members, and arguing for a new IT project.
Comparison of mapping algorithms used in high-throughput sequencing: application to Ion Torrent data
2014-01-01
Background The rapid evolution in high-throughput sequencing (HTS) technologies has opened up new perspectives in several research fields and led to the production of large volumes of sequence data. A fundamental step in HTS data analysis is the mapping of reads onto reference sequences. Choosing a suitable mapper for a given technology and a given application is a subtle task because of the difficulty of evaluating mapping algorithms. Results In this paper, we present a benchmark procedure to compare mapping algorithms used in HTS using both real and simulated datasets and considering four evaluation criteria: computational resource and time requirements, robustness of mapping, ability to report positions for reads in repetitive regions, and ability to retrieve true genetic variation positions. To measure robustness, we introduced a new definition for a correctly mapped read taking into account not only the expected start position of the read but also the end position and the number of indels and substitutions. We developed CuReSim, a new read simulator, that is able to generate customized benchmark data for any kind of HTS technology by adjusting parameters to the error types. CuReSim and CuReSimEval, a tool to evaluate the mapping quality of the CuReSim simulated reads, are freely available. We applied our benchmark procedure to evaluate 14 mappers in the context of whole genome sequencing of small genomes with Ion Torrent data for which such a comparison has not yet been established. Conclusions A benchmark procedure to compare HTS data mappers is introduced with a new definition for the mapping correctness as well as tools to generate simulated reads and evaluate mapping quality. The application of this procedure to Ion Torrent data from the whole genome sequencing of small genomes has allowed us to validate our benchmark procedure and demonstrate that it is helpful for selecting a mapper based on the intended application, questions to be addressed, and the technology used. This benchmark procedure can be used to evaluate existing or in-development mappers as well as to optimize parameters of a chosen mapper for any application and any sequencing platform. PMID:24708189
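The paper's extended correctness criterion can be sketched directly: an alignment must match the true origin in start and end position (within a tolerance) and in the numbers of indels and substitutions. The field names and tolerance parameter below are illustrative; the exact rules in CuReSimEval may differ:

```python
def correctly_mapped(reported, truth, tol=0):
    """Extended correctness check for a mapped read (illustrative field names)."""
    return (
        reported["chrom"] == truth["chrom"]
        and abs(reported["start"] - truth["start"]) <= tol
        and abs(reported["end"] - truth["end"]) <= tol
        and reported["indels"] == truth["indels"]
        and reported["subs"] == truth["subs"]
    )

truth = {"chrom": "chr1", "start": 100, "end": 199, "indels": 1, "subs": 2}
hit   = {"chrom": "chr1", "start": 100, "end": 199, "indels": 1, "subs": 2}
print(correctly_mapped(hit, truth))   # True
```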
Zhang, Yin; Diao, Tianxi; Wang, Lei
2014-12-01
Designed to advance the two-way translational process between basic research and clinical practice, translational medicine has become one of the most important areas in biomedicine. The quantitative evaluation of translational medicine is valuable for the decision making of global translational medical research and funding. Using scientometric analysis and information extraction techniques, this study quantitatively analyzed the scientific articles on translational medicine. The results showed that translational medicine had significant scientific output and impact, specific core fields and institutes, and outstanding academic status and benefit. Although not considered in this study, patent data are another important indicator that should be integrated into future research. © 2014 Wiley Periodicals, Inc.
Fetisov, V A; Gusarov, A A; Khabova, Z S
2015-01-01
This paper presents the results of an analysis of the scientometric characteristics of materials concerning the main aspects of research carried out in the framework of speciality 14.03.05 (Forensic Medicine) and published in the journal "Sudebno-meditsinskaya ekspertiza (Forensic Medical Expertise)" over a long period. The objective of the analysis was to establish the priorities in this field, reveal the lines of research of highest interest to domestic and foreign authors, and estimate the value of the relevant scientific publications. It is concluded that scientometric analysis of the scientific literature on the problems of forensic medical examination is indispensable for further improvement of its quality.
Evaluation of Scientific Journal Validity, It's Articles and Their Authors.
Masic, Izet; Begic, Edin
2016-01-01
The science that deals with the evaluation of scientific articles by finding quantitative indicators (indices) of scientific research success is called scientometrics. Scientometrics is part of scientology (the science of science) that analyzes scientific papers and their citations in a selected sample of scientific journals. There are four indices by which it is possible to measure the validity of scientific research: the number of articles, the impact factor of the journal, the number and order of authors, and the citation count. Every scientific article is a record of data written according to rules recommended by several scientific associations and committees. The growing number of authors, including many authors with the same name and surname, led to the introduction of a necessary identification agent: the ORCID number.
Mostafavi, Ehsan; Bazrafshan, Azam
2014-01-01
The Institut Pasteur International Network (IPIN), which includes 32 research institutes around the world, is a network of research and expertise for fighting infectious diseases. A scientometric approach was applied to describe the research and collaboration activities of IPIN. Publications were identified using a manual search of IPIN member addresses in the Science Citation Index Expanded (SCIE) between 2006 and 2011. Total publications were then subcategorized by geographic region. Several scientometric indicators and the h-index were employed to estimate the scientific production of each IPIN member. Subject and geographical overlay maps were also applied to visualize the network activities of the IPIN members. A total of 12667 publications originated from IPIN members. Each author produced an average of 2.18 papers, and each publication received an average of 13.40 citations. European Pasteur Institutes had the largest numbers of publications and authored papers and the highest h-index values. Biochemistry and molecular biology, microbiology, immunology and infectious diseases were the most important research topics. Geographic mapping of IPIN publications showed wide international collaboration among IPIN members around the world. IPIN has strong ties with national and international authorities and organizations investigating current and future health issues. It is recommended to use scientometric and collaboration indicators as measures of research performance in IPIN's future policies and investment decisions.
ERIC Educational Resources Information Center
Moskovkin, Vladimir M.; Bocharova, Emilia A.; Balashova, Oksana V.
2014-01-01
Purpose: The purpose of this paper is to introduce and develop the methodology of journal benchmarking. Design/Methodology/Approach: The journal benchmarking method is understood to be an analytic procedure of continuous monitoring and comparing of the advance of specific journal(s) against that of competing journals in the same subject area,…
A Comparison of Coverage Restrictions for Biopharmaceuticals and Medical Procedures.
Chambers, James; Pope, Elle; Bungay, Kathy; Cohen, Joshua; Ciarametaro, Michael; Dubois, Robert; Neumann, Peter J
2018-04-01
Differences in payer evaluation and coverage of pharmaceuticals and medical procedures suggest that coverage may differ for medications and procedures independent of their clinical benefit. We hypothesized that coverage for medications is more restricted than corresponding coverage for nonmedication interventions. We included top-selling medications and highly utilized procedures. For each intervention-indication pair, we classified value in terms of cost-effectiveness (incremental cost per quality-adjusted life-year), as reported by the Tufts Medical Center Cost-Effectiveness Analysis Registry. For each intervention-indication pair and for each of 10 large payers, we classified coverage, when available, as either "more restrictive" or as "not more restrictive," compared with a benchmark. The benchmark reflected the US Food and Drug Administration label information, when available, or pertinent clinical guidelines. We compared coverage policies and the benchmark in terms of step edits and clinical restrictions. Finally, we regressed coverage restrictiveness against intervention type (medication or nonmedication), controlling for value (cost-effectiveness more or less favorable than a designated threshold). We identified 392 medication and 185 procedure coverage decisions. A total of 26.3% of the medication coverage and 38.4% of the procedure coverage decisions were more restrictive than their corresponding benchmarks. After controlling for value, the odds of being more restrictive were 42% lower for medications than for procedures. Including unfavorable tier placement in the definition of "more restrictive" greatly increased the proportion of medication coverage decisions classified as "more restrictive" and reversed our findings. Therapy access depends on factors other than cost and clinical benefit, suggesting potential health care system inefficiency. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
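The regression step described above can be sketched with standard statistical tooling. The snippet below is a minimal illustration using hypothetical column names (more_restrictive, is_medication, favorable_value) and toy data, not the authors' dataset or exact model specification:

    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per intervention-indication-payer coverage decision (toy data).
    df = pd.DataFrame({
        "more_restrictive": [1, 0, 0, 1, 1, 0, 1, 0],
        "is_medication":    [1, 1, 1, 0, 0, 0, 1, 0],
        "favorable_value":  [0, 1, 1, 0, 1, 1, 0, 0],
    })

    # Logistic regression of restrictiveness on intervention type,
    # controlling for whether cost-effectiveness beats the threshold.
    fit = smf.logit("more_restrictive ~ is_medication + favorable_value",
                    data=df).fit()
    print(fit.params)  # a negative is_medication coefficient means lower odds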
Scholarly literature and the press: scientific impact and social perception of physics computing
NASA Astrophysics Data System (ADS)
Pia, M. G.; Basaglia, T.; Bell, Z. W.; Dressendorfer, P. V.
2014-06-01
The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP physics research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
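Several of the delamination records in this collection judge agreement by comparing the load-displacement curve of an automated propagation analysis against the benchmark curve. A generic form of that comparison (our sketch, tied to no particular finite element code) interpolates both curves onto a shared displacement grid and reports the worst load deviation relative to the benchmark's peak load:

    import numpy as np

    def max_relative_deviation(disp_a, load_a, disp_b, load_b, n=200):
        """Maximum load deviation between two curves, relative to the
        benchmark's peak load, over their common displacement range."""
        lo = max(disp_a.min(), disp_b.min())
        hi = min(disp_a.max(), disp_b.max())
        grid = np.linspace(lo, hi, n)
        load_a_i = np.interp(grid, disp_a, load_a)
        load_b_i = np.interp(grid, disp_b, load_b)
        return np.max(np.abs(load_a_i - load_b_i)) / np.abs(load_b_i).max()

    # Hypothetical curves: an analysis responding 3% stiffer than the benchmark.
    d = np.linspace(0.0, 5.0, 50)
    benchmark = 100 * d / (1 + 0.2 * d)
    analysis = 1.03 * benchmark
    print(max_relative_deviation(d, analysis, d, benchmark))  # ~0.03

An acceptance threshold on this deviation (say, a few percent) would then play the role of the "good agreement" reported in these abstracts.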
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
A benchmarking procedure for PIGE related differential cross-sections
NASA Astrophysics Data System (ADS)
Axiotis, M.; Lagoyannis, A.; Fazinić, S.; Harissopulos, S.; Kokkoris, M.; Preketes-Sigalas, K.; Provatas, G.
2018-05-01
The application of standard-less PIGE requires a priori knowledge of the differential cross section of the reaction used for the quantification of each detected light element. Towards this end, numerous datasets have been published in the last few years by several laboratories around the world. The discrepancies often found between different measured cross sections can be resolved by applying a rigorous benchmarking procedure through the measurement of thick target yields. Such a procedure is proposed in the present paper and is applied in the case of the 19F(p,p′γ)19F reaction.
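For context, the thick target yield on which such a benchmarking procedure rests is commonly written, in the standard surface-approximation form (background knowledge, not quoted from this paper), as:

    Y(E_0) \propto \int_0^{E_0} \frac{\sigma(E)}{S(E)} \, dE

where \sigma(E) is the differential cross section of the reaction at the detection angle and S(E) is the stopping power of the target material; a measured thick-target yield that cannot be reproduced by this integral flags a problem in the underlying cross-section dataset.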
[Benchmarking in patient identification: An opportunity to learn].
Salazar-de-la-Guerra, R M; Santotomás-Pajarrón, A; González-Prieto, V; Menéndez-Fraga, M D; Rocha Hurtado, C
To perform benchmarking on the safe identification of hospital patients involved in the "Club de las tres C" (Calidez, Calidad y Cuidados) in order to prepare a common procedure for this process. A descriptive study was conducted on the patient identification process in palliative care and stroke units in 5 medium-stay hospitals. The following steps were carried out: data collection from each hospital; organisation and analysis of the data; and preparation of a common procedure for this process. The data obtained for the safe identification of all stroke patients were: hospital 1 (93%), hospital 2 (93.1%), hospital 3 (100%), and hospital 5 (93.4%); and for the palliative care process: hospital 1 (93%), hospital 2 (92.3%), hospital 3 (92%), hospital 4 (98.3%), and hospital 5 (85.2%). The aim of the study was accomplished successfully. Benchmarking activities were developed and knowledge on the patient identification process was shared. All hospitals had good results; hospital 3 performed best in the stroke identification process. Benchmarking identification is difficult, but a useful common procedure that collects the best practices of the 5 hospitals has been identified. Copyright © 2017 SECA. Publicado por Elsevier España, S.L.U. All rights reserved.
Scientometrics of drug discovery efforts: pain-related molecular targets.
Kissin, Igor
2015-01-01
The aim of this study was to make a scientometric assessment of drug discovery efforts centered on pain-related molecular targets. The following scientometric indices were used: the popularity index, representing the share of articles (or patents) on a specific topic among all articles (or patents) on pain over the same 5-year period; the index of change, representing the change in the number of articles (or patents) on a topic from one 5-year period to the next; the index of expectations, representing the ratio of the number of all types of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed over a 5-year period; the total number of articles representing Phase I-III trials of investigational drugs over a 5-year period; and the trial balance index, a ratio of Phase I-II publications to Phase III publications. Articles (PubMed database) and patents (US Patent and Trademark Office database) on 17 topics related to pain mechanisms were assessed during six 5-year periods from 1984 to 2013. During the most recent 5-year period (2009-2013), seven of 17 topics have demonstrated high research activity (purinergic receptors, serotonin, transient receptor potential channels, cytokines, gamma aminobutyric acid, glutamate, and protein kinases). However, even with these seven topics, the index of expectations decreased or did not change compared with the 2004-2008 period. In addition, publications representing Phase I-III trials of investigational drugs (2009-2013) did not indicate great enthusiasm on the part of the pharmaceutical industry regarding drugs specifically designed for treatment of pain. A promising development related to the new tool of molecular targeting, ie, monoclonal antibodies, for pain treatment has not yet resulted in real success. This approach has not yet demonstrated clinical effectiveness (at least with nerve growth factor) much beyond conventional analgesics, when its potential cost is more than an order of magnitude higher than that of conventional treatments. This scientometric assessment demonstrated a lack of real breakthrough developments.
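The indices defined in this abstract are simple ratios, and their arithmetic can be made explicit in a few lines. The sketch below is our own illustration built from the definitions given above; the exact normalizations used in the paper may differ:

    def popularity_index(articles_on_topic, articles_on_pain):
        """Share of all pain articles devoted to one topic in a 5-year period."""
        return articles_on_topic / articles_on_pain

    def index_of_change(count_current, count_previous):
        """Growth in articles on a topic between consecutive 5-year periods,
        taken here as a simple ratio."""
        return count_current / count_previous

    def index_of_expectations(top20_articles, all_journal_articles):
        """Articles in the top 20 journals relative to articles in all journals."""
        return top20_articles / all_journal_articles

    def trial_balance_index(phase1_2_pubs, phase3_pubs):
        """Ratio of Phase I-II publications to Phase III publications."""
        return phase1_2_pubs / phase3_pubs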
Benchmarking gate-based quantum computers
NASA Astrophysics Data System (ADS)
Michielsen, Kristel; Nocon, Madita; Willsch, Dennis; Jin, Fengping; Lippert, Thomas; De Raedt, Hans
2017-11-01
With the advent of public access to small gate-based quantum processors, it becomes necessary to develop a benchmarking methodology such that independent researchers can validate the operation of these processors. We explore the usefulness of a number of simple quantum circuits as benchmarks for gate-based quantum computing devices and show that circuits performing identity operations are very simple, scalable and sensitive to gate errors and are therefore very well suited for this task. We illustrate the procedure by presenting benchmark results for the IBM Quantum Experience, a cloud-based platform for gate-based quantum computing.
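The sensitivity of identity circuits to gate errors can be reproduced without quantum hardware. The NumPy sketch below (our illustration, not the authors' benchmark suite) applies pairs of pi-rotations that should compose to the identity, adds a small coherent over-rotation per gate, and watches the return fidelity decay:

    import numpy as np

    def rx(theta):
        """Single-qubit rotation about the x axis."""
        c, s = np.cos(theta / 2), np.sin(theta / 2)
        return np.array([[c, -1j * s], [-1j * s, c]])

    psi0 = np.array([1.0, 0.0], dtype=complex)  # start in |0>
    eps = 0.02                                  # coherent over-rotation per gate

    psi = psi0
    for _ in range(50):                         # 50 gate pairs = 100 noisy gates
        psi = rx(np.pi + eps) @ rx(np.pi + eps) @ psi

    # An ideal device would return |0> exactly; errors accumulate coherently.
    print(abs(np.vdot(psi0, psi)) ** 2)         # ~0.29 instead of 1.0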
Reliability of hospital cost profiles in inpatient surgery.
Grenda, Tyler R; Krell, Robert W; Dimick, Justin B
2016-02-01
With increased policy emphasis on shifting risk from payers to providers through mechanisms such as bundled payments and accountable care organizations, hospitals are increasingly in need of metrics to understand their costs relative to peers. However, it is unclear whether Medicare payments for surgery can reliably compare hospital costs. We used national Medicare data to assess patients undergoing colectomy, pancreatectomy, and open incisional hernia repair from 2009 to 2010 (n = 339,882 patients). We first calculated risk-adjusted hospital total episode payments for each procedure. We then used hierarchical modeling techniques to estimate the reliability of total episode payments for each procedure and explored the impact of hospital caseload on payment reliability. Finally, we quantified the number of hospitals meeting published reliability benchmarks. Mean risk-adjusted total episode payments ranged from $13,262 (standard deviation [SD] $14,523) for incisional hernia repair to $25,055 (SD $22,549) for pancreatectomy. The reliability of hospital episode payments varied widely across procedures and depended on sample size. For example, mean episode payment reliability for colectomy (mean caseload, 157) was 0.80 (SD 0.18), whereas for pancreatectomy (mean caseload, 13) the mean reliability was 0.45 (SD 0.27). Many hospitals met published reliability benchmarks for each procedure. For example, 90% of hospitals met reliability benchmarks for colectomy, 40% for pancreatectomy, and 66% for incisional hernia repair. Episode payments for inpatient surgery are a reliable measure of hospital costs for commonly performed procedures, but are less reliable for lower volume operations. These findings suggest that hospital cost profiles based on Medicare claims data may be used to benchmark efficiency, especially for more common procedures. Copyright © 2016 Elsevier Inc. All rights reserved.
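The dependence of payment reliability on caseload follows directly from the usual hierarchical-model definition of reliability, i.e. signal variance over total variance of a hospital mean. The sketch below states that standard form with toy variance components; it is background, not the authors' exact specification:

    def payment_reliability(var_between, var_within, caseload):
        """Reliability of a hospital's mean payment:
        between-hospital variance / (between + within / caseload)."""
        return var_between / (var_between + var_within / caseload)

    # With the same toy variance components, a common procedure (n = 157)
    # is far more reliable than a rare one (n = 13).
    print(payment_reliability(1.0, 40.0, 157))  # ~0.80
    print(payment_reliability(1.0, 40.0, 13))   # ~0.25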
Schöffel, Norman; Gfroerer, Stefan; Rolle, Udo; Bendels, Michael H K; Klingelhöfer, Doris; Groneberg-Kloft, Beatrix
2017-04-01
Introduction Hirschsprung disease (HD) is a congenital bowel innervation disorder that involves several clinical specialties. There is increasing interest in the topic, reflected by the number of annually published items. It is therefore difficult for a single scientist to survey all published items and to gauge their scientific importance or value. Thus, tremendous efforts were made over the past decades to establish sustainable parameters for evaluating scientific work; this was the birth of scientometrics. Materials and Methods To quantify the global research activity in this field, a scientometric analysis was conducted. We analyzed the research output of countries, individual institutions, authors, and their collaborative networks by using the Web of Science database. Density-equalizing maps and network diagrams were employed as state-of-the-art visualization techniques. Results The United States is the leading country in terms of published items (n = 685), institutions (n = 347), and cooperation (n = 112). However, although there is dominance in quantity, the most intensive international networks between authors and institutions are not linked to the United States. By contrast, most of the European countries combine the highest impact of publications. Further analyses reveal the influence of international cooperation and associated phenomena on the research field of HD. Conclusion We conclude that the field of HD is constantly progressing. The importance of international cooperation in the scientific community is continuously growing. Georg Thieme Verlag KG Stuttgart · New York.
Rezaee Zavareh, Mohammad Saeid; Alavian, Seyed Moayed
2017-01-01
In the Middle East (ME), a proper understanding of hepatitis, especially viral hepatitis, is considered to be extremely important. However, no published paper has investigated the status of hepatitis-related research in the ME. A scientometric analysis was therefore conducted using the Web of Science database, specifically articles from the Science Citation Index Expanded and Social Sciences Citation Index, on work published between 2005 and 2014 using the keyword "hepatitis" in conjunction with the names of countries in the ME, to determine the current status of research on this topic. Of 103,096 papers that used the word "hepatitis" in their title, abstract, or keywords, only 6,540 papers (6.34%) were associated with countries in the ME. Turkey, Iran, Egypt, Israel, and Saudi Arabia were the top five countries in which hepatitis-related papers were published. Most papers on hepatitis A, B, and D and autoimmune hepatitis were published in Turkey, and most papers on hepatitis C were published in Egypt. We believe that both the quantity and the quality of hepatitis-related papers in this region should be improved. Implementing multicenter and international research projects, holding conferences and congress meetings, conducting educational workshops, and establishing high-quality medical research journals in the region will help countries in the ME address this issue effectively.
Development and Applications of Benchmark Examples for Static Delamination Propagation Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2013-01-01
The development and application of benchmark examples for the assessment of quasi-static delamination propagation capabilities was demonstrated for ANSYS® and Abaqus/Standard®. The examples selected were based on finite element models of Double Cantilever Beam (DCB) and Mixed-Mode Bending (MMB) specimens. First, quasi-static benchmark results were created based on an approach developed previously. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in ANSYS® and Abaqus/Standard®. Input control parameters were varied to study the effect on the computed delamination propagation. Overall, the benchmarking procedure proved valuable by highlighting the issues associated with choosing the appropriate input parameters for the VCCT implementations in ANSYS® and Abaqus/Standard®. However, further assessment for mixed-mode delamination fatigue onset and growth is required. Additionally, studies should include the assessment of the propagation capabilities in more complex specimens and on a structural level.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The application of benchmark examples for the assessment of quasi-static delamination propagation capabilities is demonstrated for ANSYS®. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation in commercial finite element codes based on the virtual crack closure technique (VCCT). The examples selected are based on two-dimensional finite element models of Double Cantilever Beam (DCB), End-Notched Flexure (ENF), Mixed-Mode Bending (MMB) and Single Leg Bending (SLB) specimens. First, the quasi-static benchmark examples were recreated for each specimen using the current implementation of VCCT in ANSYS®. Second, the delamination was allowed to propagate under quasi-static loading from its initial location using the automated procedure implemented in the finite element software. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for three-dimensional solid models is required.
78 FR 8964 - Environmental Impact and Related Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... designed so that no significant impact will occur. FTA is deleting, however, some items in the list of... supporting documentation, which includes, but is not limited to, comparative benchmarking and expert opinion... fall within the ten broad categories. Comparative benchmarking provides support for the new CEs by...
Benchmarking in academic pharmacy departments.
Bosso, John A; Chisholm-Burns, Marie; Nappi, Jean; Gubbins, Paul O; Ross, Leigh Ann
2010-10-11
Benchmarking in academic pharmacy, and recommendations for the potential uses of benchmarking in academic pharmacy departments are discussed in this paper. Benchmarking is the process by which practices, procedures, and performance metrics are compared to an established standard or best practice. Many businesses and industries use benchmarking to compare processes and outcomes, and ultimately plan for improvement. Institutions of higher learning have embraced benchmarking practices to facilitate measuring the quality of their educational and research programs. Benchmarking is used internally as well to justify the allocation of institutional resources or to mediate among competing demands for additional program staff or space. Surveying all chairs of academic pharmacy departments to explore benchmarking issues such as department size and composition, as well as faculty teaching, scholarly, and service productivity, could provide valuable information. To date, attempts to gather this data have had limited success. We believe this information is potentially important, urge that efforts to gather it should be continued, and offer suggestions to achieve full participation.
Chandra, Yanto
2018-01-01
This article applies scientometric techniques to study the evolution of the field of entrepreneurship between 1990 and 2013. Using a combination of topic mapping, author and journal co-citation analyses, and overlay visualization of new and hot topics in the field, this article makes an important contribution to entrepreneurship research by identifying 46 topics in the 24-year history of entrepreneurship research and demonstrating how they appear, disappear, reappear and stabilize over time. It also identifies five topics that are persistent across the 24-year study period (institutions and institutional entrepreneurship, innovation and technology management, policy and development, entrepreneurial process and opportunity, and new ventures), which I labeled The Pentagon of Entrepreneurship. Overall, the analyses revealed patterns of convergence and divergence and the diversity of topics, specialization, and interdisciplinary engagement in entrepreneurship research, thus offering the latest insights on the state of the art of the field.
Linked data scientometrics in semantic e-Science
NASA Astrophysics Data System (ADS)
Narock, Tom; Wimmer, Hayden
2017-03-01
The Semantic Web is inherently multi-disciplinary, and many domains have taken advantage of semantic technologies. The geosciences are among the fields leading the way in Semantic Web adoption and validation. Astronomy, Earth science, hydrology, and solar-terrestrial physics have seen a noteworthy amount of semantic integration. The geoscience community has been a willing early adopter of semantic technologies and has provided essential feedback to the broader Semantic Web community. However, there has been no systematic study of the community as a whole, and no quantitative data exist on the impact and status of semantic technologies in the geosciences. We explore the applicability of Linked Data to scientometrics in the geosciences. In doing so, we gain an initial understanding of the breadth and depth of the Semantic Web in the geosciences. We identify what appears to be a transitionary period in the applicability of these technologies.
Scientometric methods for identifying emerging technologies
Abercrombie, Robert K; Schlicher, Bob G; Sheldon, Frederick T
2015-11-03
Provided is a method of generating a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific and conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources is identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. Four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, worldwide patents, news archives, and on-line mapping networks) are assembled into one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain.
NASA Astrophysics Data System (ADS)
Capo-Lugo, Pedro A.
Formation flying consists of multiple spacecraft orbiting in a required configuration about a planet or through space. The National Aeronautics and Space Administration (NASA) Benchmark Tetrahedron Constellation is one of the proposed constellations to be launched in the year 2009 and provides the motivation for this investigation. The problem researched here consists of three stages. The first stage covers the deployment of the satellites; the second stage is the reconfiguration process to transfer the satellites through different specific sizes of the NASA benchmark problem; and the third stage is the station-keeping procedure for the tetrahedron constellation. Every stage involves different control schemes and transfer procedures to obtain and maintain the proposed tetrahedron constellation. In the first stage, the deployment procedure depends on a combination of two techniques, in which impulsive maneuvers and a digital controller are used to deploy the satellites and to maintain the tetrahedron constellation at the following apogee point. The second stage, corresponding to the reconfiguration procedure, uses a different control scheme in which intelligent control systems are implemented to perform this procedure. In this research work, intelligent systems eliminate the use of complex mathematical models and reduce the computational time needed to perform different maneuvers. Finally, the station-keeping process, which is the third stage of this research problem, is implemented with a two-level hierarchical control scheme to maintain the separation distance constraints of the NASA Benchmark Tetrahedron Constellation. For this station-keeping procedure, the system of equations defining the dynamics of a pair of satellites is transformed to take into account the perturbation due to the oblateness of the Earth and the disturbances due to solar pressure. The control procedures used in this research are transformed from a continuous control system to a digital control system, which simplifies the implementation in the computer onboard the satellite. In addition, this research includes an introductory chapter on attitude dynamics that can be used to maintain the orientation of the satellites, and an adaptive intelligent control scheme is proposed to maintain the desired orientation of the spacecraft. In conclusion, a solution for the dynamics of the NASA Benchmark Tetrahedron Constellation is presented in this research work. The main contribution of this work is the use of discrete control schemes, impulsive maneuvers, and intelligent control schemes that reduce the computational time so that these control schemes can be easily implemented in the computer onboard the satellite. These contributions are explained through the deployment, reconfiguration, and station-keeping processes of the proposed NASA Benchmark Tetrahedron Constellation.
Evaluation of control strategies using an oxidation ditch benchmark.
Abusam, A; Keesman, K J; Spanjers, H; van Straten, G; Meinema, K
2002-01-01
This paper presents validation and implementation results of a benchmark developed for a specific full-scale oxidation ditch wastewater treatment plant. A benchmark is a standard simulation procedure that can be used as a tool in evaluating various control strategies proposed for wastewater treatment plants. It is based on model and performance criteria development. Testing of this benchmark, by comparing benchmark predictions to real measurements of the electrical energy consumption and amounts of disposed sludge for a specific oxidation ditch WWTP, has shown that it can reasonably be used for evaluating the performance of this WWTP. Subsequently, the validated benchmark was used in evaluating some basic and advanced control strategies. Some of the interesting results obtained are the following: (i) the influent flow splitting ratio, between the first and the fourth aerated compartments of the ditch, has no significant effect on the TN concentrations in the effluent; and (ii) for the evaluation of long-term control strategies, future benchmarks need to be able to assess settlers' performance.
ERIC Educational Resources Information Center
Eaneman, Paulette S.; And Others
These materials are part of the Project Benchmark series designed to teach secondary students about our legal concepts and systems. This unit focuses on the structure and procedures of the civil court systems. The materials outline common law heritage, kinds of cases, jurisdiction, civil pretrial procedure, trial procedure, and a sample automobile…
BIOREL: the benchmark resource to estimate the relevance of the gene networks.
Antonov, Alexey V; Mewes, Hans W
2006-02-06
The progress of high-throughput methodologies in functional genomics has led to the development of statistical procedures to infer gene networks from various types of high-throughput data. However, due to the lack of common standards, the biological significance of the results of different studies is hard to compare. To overcome this problem, we propose a benchmark procedure and have developed a web resource (BIOREL), which is useful for estimating the biological relevance of any genetic network by integrating different sources of biological information. The associations of each gene from the network are classified as biologically relevant or not. The proportion of genes in the network classified as "relevant" is used as the overall network relevance score. Employing synthetic data, we demonstrated that such a score ranks networks fairly with respect to their relevance level. Using BIOREL as the benchmark resource, we compared the quality of experimental and theoretically predicted protein interaction data.
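The overall network relevance score described here reduces to a proportion; a one-function sketch (ours, not the BIOREL implementation) is enough to state it:

    def network_relevance(n_relevant_genes, n_total_genes):
        """Share of network genes whose associations are classified
        as biologically relevant."""
        return n_relevant_genes / n_total_genes

    print(network_relevance(37, 50))  # 0.74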
DOE Office of Scientific and Technical Information (OSTI.GOV)
McLoughlin, K.
2016-01-22
The software application “MetaQuant” was developed by our group at Lawrence Livermore National Laboratory (LLNL). It is designed to profile microbial populations in a sample using data from whole-genome shotgun (WGS) metagenomic DNA sequencing. Several other metagenomic profiling applications have been described in the literature. We ran a series of benchmark tests to compare the performance of MetaQuant against that of a few existing profiling tools, using real and simulated sequence datasets. This report describes our benchmarking procedure and results.
Emerging trends and new developments in regenerative medicine: a scientometric update (2000 - 2014).
Chen, Chaomei; Dubin, Rachael; Kim, Meen Chul
2014-09-01
Our previous scientometric review of regenerative medicine provides a snapshot of the fast-growing field up to the end of 2011. The new review identifies emerging trends and new developments appearing in the literature of regenerative medicine based on relevant articles and reviews published between 2000 and the first month of 2014. Multiple datasets of publications relevant to regenerative medicine are constructed through topic search and citation expansion to ensure adequate coverage of the field. Networks of co-cited references representing the literature of regenerative medicine are constructed and visualized based on a combined dataset of 71,393 articles published between 2000 and 2014. Structural and temporal dynamics are identified in terms of most active topical areas and cited references. New developments are identified in terms of newly emerged clusters and research areas. Disciplinary-level patterns are visualized in dual-map overlays. While research in induced pluripotent stem cells remains the most prominent area in the field of regenerative medicine, research related to clinical and therapeutic applications in regenerative medicine has experienced a considerable growth. In addition, clinical and therapeutic developments in regenerative medicine have demonstrated profound connections with the induced pluripotent stem cell research and stem cell research in general. A rapid adaptation of graphene-based nanomaterials in regenerative medicine is evident. Both basic research represented by stem cell research and application-oriented research typically found in tissue engineering are now increasingly integrated in the scientometric landscape of regenerative medicine. Tissue engineering is an interdisciplinary field in its own right. Advances in multiple disciplines such as stem cell research and graphene research have strengthened the connections between tissue engineering and regenerative medicine.
PubMed-Indexed Dental Publications from Iran: A Scientometric Study
Asgary, Saeed; Sabbagh, Sedigheh; Shirazi, Alireza Sarraf; Ahmadyar, Maryam; Shahravan, Arash; Akhoundi, Mohammad Sadegh Ahmad
2016-01-01
Objectives: Scientometric methods and the resulting citations have been applied to investigate the scientific performance of a nation. The present study was designed to collect the statistical information of dental articles by Iranian authors published in PubMed. Materials and Methods: We searched the PubMed database for dental articles of Iranian authors until June 31, 2015. All abstracts were manually reviewed in order to exclude false retrievals. The number of articles per dental subspecialties, distribution of research designs, Scopus/Google Scholar citation of each article, number of authors and affiliation of the first/corresponding author were extracted and transferred to Microsoft Excel. The data were further analyzed to illustrate the related scientometric indicators. Results: A total of 3,835 articles were retrieved according to the selection criteria. The number of PubMed-indexed publications between 2008 and 2015 showed a seven-fold increase. The majority of articles were written by four authors (24.56%). Systematic reviews and clinical trials constituted 9.20% of all publications. The number and percentage of articles with ≥4 citations from Google Scholar (n=2024; 52.78%) were higher than those from Scopus (n=1015; 26.47%). According to affiliated departments of the first authors, the top three dental subspecialties with the highest number of publications belonged to endodontics (19.82%), orthodontics (11.13%) and oral and maxillofacial surgery (10.33%). Moreover, the majority of articles originated from Shahid Beheshti- (14.47%), Tehran- (13.72%) and Mashhad- (12.28%) University of Medical Sciences. Conclusions: Analysis of PubMed-indexed dental publications originating from Iran revealed a growing trend in the recent years. PMID:28392812
Scientometrics of anesthetic drugs and their techniques of administration, 1984-2013.
Vlassakov, Kamen V; Kissin, Igor
2014-01-01
The aim of this study was to assess progress in the field of anesthetic drugs over the past 30 years using scientometric indices: popularity indices (general and specific), representing the proportion of articles on a drug relative to all articles in the field of anesthetics (general index) or the subfield of a specific class of anesthetics (specific index); index of change, representing the degree of growth in publications on a topic from one period to the next; index of expectations, representing the ratio of the number of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed; and index of ultimate success, representing a publication outcome when a new drug takes the place of a common drug previously used for the same purpose. Publications on 58 topics were assessed during six 5-year periods from 1984 to 2013. Our analysis showed that during 2009-2013, out of seven anesthetics with a high general popularity index (≥2.0), only two were introduced after 1980, ie, the inhaled anesthetic sevoflurane and the local anesthetic ropivacaine; however, only sevoflurane had a high index of expectations (12.1). Among anesthetic adjuncts, in 2009-2013, only one agent, sugammadex, had both an extremely high index of change (>100) and a high index of expectations (25.0), reflecting the novelty of its mechanism of action. The index of ultimate success was positive with three anesthetics, ie, lidocaine, isoflurane, and propofol, all of which were introduced much longer than 30 years ago. For the past 30 years, there were no new anesthetics that have produced changes in scientometric indices indicating real progress.
The philosophy of benchmark testing a standards-based picture archiving and communications system.
Richardson, N E; Thomas, J A; Lyche, D K; Romlein, J; Norton, G S; Dolecek, Q E
1999-05-01
The Department of Defense issued its requirements for a Digital Imaging Network-Picture Archiving and Communications System (DIN-PACS) in a Request for Proposals (RFP) to industry in January 1997, with subsequent contracts being awarded in November 1997 to the Agfa Division of Bayer and IBM Global Government Industry. The Government's technical evaluation process consisted of evaluating a written technical proposal as well as conducting a benchmark test of each proposed system at the vendor's test facility. The purpose of benchmark testing was to evaluate the performance of the fully integrated system in a simulated operational environment. The benchmark test procedures and test equipment were developed through a joint effort between the Government, academic institutions, and private consultants. Herein the authors discuss the resources required and the methods used to benchmark test a standards-based PACS.
A benchmarking method to measure dietary absorption efficiency of chemicals by fish.
Xiao, Ruiyang; Adolfsson-Erici, Margaretha; Åkerman, Gun; McLachlan, Michael S; MacLeod, Matthew
2013-12-01
Understanding the dietary absorption efficiency of chemicals in the gastrointestinal tract of fish is important from both a scientific and a regulatory point of view. However, reported fish absorption efficiencies for well-studied chemicals are highly variable. In the present study, the authors developed and exploited an internal chemical benchmarking method that has the potential to reduce uncertainty and variability and, thus, to improve the precision of measurements of fish absorption efficiency. The authors applied the benchmarking method to measure the gross absorption efficiency for 15 chemicals with a wide range of physicochemical properties and structures. They selected 2,2',5,6'-tetrachlorobiphenyl (PCB53) and decabromodiphenyl ethane as absorbable and nonabsorbable benchmarks, respectively. Quantities of chemicals determined in fish were benchmarked to the fraction of PCB53 recovered in fish, and quantities of chemicals determined in feces were benchmarked to the fraction of decabromodiphenyl ethane recovered in feces. The performance of the benchmarking procedure was evaluated based on the recovery of the test chemicals and precision of absorption efficiency from repeated tests. Benchmarking did not improve the precision of the measurements; after benchmarking, however, the median recovery for 15 chemicals was 106%, and variability of recoveries was reduced compared with before benchmarking, suggesting that benchmarking could account for incomplete extraction of chemical in fish and incomplete collection of feces from different tests. © 2013 SETAC.
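One plausible way to express the benchmark correction in numbers (our reading of the procedure, not the authors' published equations) is to scale each compartment's recovery by the corresponding benchmark fraction and then take a mass-balance absorption estimate:

    def gross_absorption_efficiency(amount_in_fish, amount_in_feces,
                                    frac_pcb53_recovered_fish,
                                    frac_dbdpe_recovered_feces):
        """Benchmark-corrected, mass-balance estimate of dietary absorption.
        Correction factors come from the absorbable (PCB53) and
        nonabsorbable (decabromodiphenyl ethane) benchmarks."""
        fish = amount_in_fish / frac_pcb53_recovered_fish      # analysis losses
        feces = amount_in_feces / frac_dbdpe_recovered_feces   # collection losses
        return fish / (fish + feces)

    # Toy example: 6 ug in fish, 4 ug in feces, benchmark recoveries 90% and 80%.
    print(gross_absorption_efficiency(6.0, 4.0, 0.9, 0.8))  # ~0.57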
NASA Astrophysics Data System (ADS)
Hanssen, R. F.
2017-12-01
In traditional geodesy, one is interested in determining the coordinates, or the change in coordinates, of predefined benchmarks. These benchmarks are clearly identifiable and are especially established to be representative of the signal of interest. This holds, e.g., for leveling benchmarks, for triangulation/trilateration benchmarks, and for GNSS benchmarks. The desired coordinates are not identical to the basic measurements, and need to be estimated using robust estimation procedures, where the stochastic nature of the measurements is taken into account. For InSAR, however, the 'benchmarks' are not predefined. In fact, usually we do not know where an effective benchmark is located, even though we can determine its dynamic behavior pretty well. This poses several significant problems. First, we cannot describe the quality of the measurements unless we already know the dynamic behavior of the benchmark. Second, if we don't know the quality of the measurements, we cannot compute the quality of the estimated parameters. Third, rather harsh assumptions need to be made to produce a result. These (usually implicit) assumptions differ between processing operators and the software used, and are severely affected by the amount of available data. Fourth, the 'relative' nature of the final estimates is usually not explicitly stated, which is particularly problematic for non-expert users. Finally, whereas conventional geodesy applies rigorous testing to check for measurement or model errors, this is hardly ever done in InSAR geodesy. These problems make it all but impossible to provide a precise, reliable, repeatable, and 'universal' InSAR product or service. Here we evaluate the requirements and challenges involved in moving towards InSAR as a geodetically rigorous product. In particular, this involves the explicit inclusion of contextual information, as well as InSAR procedures, standards and a technical protocol, supported by the International Association of Geodesy and the international scientific community.
Global Nanotribology Research Output (1996–2010): A Scientometric Analysis
Elango, Bakthavachalam; Rajendran, Periyaswamy; Bornmann, Lutz
2013-01-01
This study aims to assess the nanotribology research output at the global level using scientometric tools. The SCOPUS database was used to retrieve records related to nanotribology research for the period 1996–2010. Publications were counted on a fractional basis. The level of collaboration and its citation impact were examined, and the performance of the most productive countries, institutions and most preferred journals is assessed. Various visualization tools such as the Sci2 tool and Ucinet were employed. The USA ranked top in terms of number of publications, citations per paper and h-index, while Switzerland published a higher percentage of international collaborative papers. The most productive institution was Tsinghua University, followed by Ohio State University and Lanzhou Institute of Chemical Physics, CAS. The most preferred journals were Tribology Letters, Wear and the Journal of Japanese Society of Tribologists. The results of the author keyword analysis reveal that molecular dynamics, MEMS, hard disks and diamond-like carbon are major research topics. PMID:24339900
Scientific production of Sports Science in Iran: A Scientometric Analysis.
Yaminfirooz, Mousa; Siamian, Hasan; Jahani, Mohammad Ali; Yaminifirouz, Masoud
2014-06-01
Physical education and sports science is one of the branches of the humanities. The purpose of this study is to determine the quantitative and qualitative progress of scientific production by Iranian researchers in the Web of Science. The research method was a scientometric survey, and the statistical population comprised 233 documents indexed in ISI from 1993 to 2012. Results showed that Iranian researchers published 233 documents in this database during the study period, which received 1,106 citations (4.76 per document on average). The h-index was 17. Iran's scientific output in sports science peaked in 2010 with 57 documents and was lowest in 2000. Considering the number of citations and the h-index obtained, the quality of Iranian articles is fairly acceptable; however, given the country's prestigious universities and the large number of professors and university students in this field, the quantity of published articles is very low.
Network Analysis of Publications on Topological Indices from the Web of Science.
Bodlaj, Jernej; Batagelj, Vladimir
2014-08-01
In this paper we analyze a collection of bibliographic networks, constructed from Web of Science data on works (papers, books, etc.) on the topic of topological indices and on related scientific fields. We present the general outlook and more specific findings about authors, works and journals, subtopics and keywords, and also important relations between them, based on scientometric approaches such as the strongest and main citation paths, the main themes on citation paths based on keywords, and the results of co-authorship analysis in the form of the most prominent islands of citing authors, groups of collaborating authors, and two-mode cores of authors and works. We investigate citation patterns among authors, important journals and the citations of works between them, and the journals preferred by authors, and we expose a hierarchy of similar collaborating authors based on the keywords they use. We also perform a temporal analysis of one important journal. Overall, we give a comprehensive scientometric insight into the field of topological indices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Arbesman, Samuel; Laughlin, Gregory
2010-10-04
The search for a habitable extrasolar planet has long interested scientists, but only recently have the tools become available to search for such planets. In the past decades, the number of known extrasolar planets has ballooned into the hundreds, and with it, the expectation that the discovery of the first Earth-like extrasolar planet is not far off. Here, we develop a novel metric of habitability for discovered planets and use this to arrive at a prediction for when the first habitable planet will be discovered. Using a bootstrap analysis of currently discovered exoplanets, we predict the discovery of the first Earth-like planet to be announced in the first half of 2011, with the likeliest date being early May 2011. Our predictions, using only the properties of previously discovered exoplanets, accord well with external estimates for the discovery of the first potentially habitable extrasolar planet and highlight the usefulness of predictive scientometric techniques to understand the pace of scientific discovery in many fields.
ERIC Educational Resources Information Center
Lin, Sheau-Wen; Liu, Yu; Chen, Shin-Feng; Wang, Jing-Ru; Kao, Huey-Lien
2016-01-01
The purpose of this study was to develop a computer-based measure of elementary students' science talk and to report students' benchmarks. The development procedure had three steps: defining the framework of the test, collecting and identifying key reference sets of science talk, and developing and verifying the science talk instrument. The…
Validation project. This report describes the procedure used to generate the noise models output dataset, and then it compares that dataset to the...benchmark, the Engineer Research and Development Center's Long-Range Sound Propagation dataset. It was found that the models consistently underpredict the
Dumas, Ryan P; Chreiman, Kristen M; Seamon, Mark J; Cannon, Jeremy W; Reilly, Patrick M; Christie, Jason D; Holena, Daniel N
2018-05-23
Emergency department thoracotomy (EDT) must be rapid and well-executed. Currently there are no defined benchmarks for EDT procedural milestones. We hypothesized that trauma video review (TVR) can be used to define the 'normative EDT' and generate procedural benchmarks. As a secondary aim, we hypothesized that data collected by TVR would have less missingness and bias than data collected by review of the Electronic Medical Record (EMR). We used continuously recording video to review all EDTs performed at our centre during the study period. Using skin incision as start time, we defined four procedural milestones for EDT: 1. Decompression of the right chest (tube thoracostomy, finger thoracostomy, or clamshell thoracotomy with transverse sternotomy performed in conjunction with left anterolateral thoracotomy) 2. Retractor deployment 3. Pericardiotomy 4. Aortic cross-clamp. EDTs with any milestone time ≥75th percentile or during which a milestone was omitted were identified as outliers. We compared rates of missingness in data collected by TVR and EMR using McNemar's test. 44 EDTs were included from the study period. Patients had a median age of 30 [IQR 25-44] and were predominantly African-American (95%) males (93%) with penetrating trauma (95%). From skin incision, median times in minutes to milestones were as follows: right chest decompression: 2.11 [IQR 0.68-2.83], retractor deployment 1.35 [IQR 0.96-1.85], pericardiotomy 2.35 [IQR 1.85-3.75], aortic cross-clamp 3.71 [IQR 2.83-5.77]. In total, 28/44 (64%) of EDTs were either high outliers for one or more benchmarks or had milestones that were omitted. For all milestones, rates of missingness for TVR data were lower than EMR data (p < 0.001). Video review can be used to define normative times for the procedural milestones of EDT. Steps exceeding the 75th percentile of time were common, with over half of EDTs having at least one milestone as an outlier. Data quality is higher using TVR compared to EMR collection. Future work should seek to determine if minimizing procedural technical outliers improves patient outcomes. Copyright © 2018 Elsevier Ltd. All rights reserved.
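The missingness comparison in this abstract rests on McNemar's test for paired binary data. The sketch below implements the exact version of that test on fabricated paired indicators (the counts are not the study's); only the discordant pairs enter the statistic.

```python
from math import comb

# Paired missingness indicators, one entry per EDT (44 fabricated values).
emr_missing = [True] * 18 + [False] * 26
tvr_missing = [True] * 2 + [False] * 42

# McNemar's test uses only the discordant pairs.
b = sum(e and not t for e, t in zip(emr_missing, tvr_missing))  # missing in EMR only
c = sum(t and not e for e, t in zip(emr_missing, tvr_missing))  # missing in TVR only
n = b + c
# Exact two-sided p-value from a Binomial(n, 1/2) on the discordant pairs.
p = min(1.0, 2 * sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n)
print(f"EMR-only missing: {b}, TVR-only missing: {c}, exact McNemar p = {p:.2g}")
```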
Capacity improvement analytical tools and benchmark development for terminal operations
DOT National Transportation Integrated Search
2009-10-01
With U.S. air traffic predicted to triple over the next fifteen years, new technologies and procedures are being considered to cope with this growth. As such, it may be of use to quickly and easily evaluate any new technologies or procedures ...
49 CFR 1111.2 - Amended and supplemental complaints.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE COMPLAINT AND INVESTIGATION PROCEDURES... evidence to opt for a different rate reasonableness methodology, among Three-Benchmark, Simplified-SAC or Full-SAC. If so amended, the procedural schedule begins again under the new methodology as set forth at...
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented. The example is based on a finite element model of the Mixed-Mode Bending (MMB) specimen for 50% mode II. The benchmarking is demonstrated for Abaqus/Standard; however, the example is independent of the analysis software used and allows the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement as well as delamination length versus applied load/displacement relationships from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall, the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
Quality management benchmarking: FDA compliance in pharmaceutical industry.
Jochem, Roland; Landgraf, Katja
2010-01-01
By analyzing and comparing industry and business best practice, processes can be optimized and become more successful, mainly because efficiency and competitiveness increase. This paper aims to focus on some examples. Case studies are used to show knowledge exchange in the pharmaceutical industry. Best practice solutions were identified in two companies using a benchmarking method and a five-stage model. Despite large administrative structures, there is much potential regarding business process organization. This project makes it possible for participants to fully understand their business processes. The benchmarking method gives an opportunity to critically analyze value chains (a string of companies or players working together to satisfy market demands for a special product). Knowledge exchange is interesting for companies that aim to be global players. Benchmarking supports information exchange and improves competitive ability between different enterprises. Findings suggest that the five-stage model improves efficiency and effectiveness. Furthermore, the model increases the chances of reaching targets. The method gives security to partners that did not have benchmarking experience. The study identifies new quality management procedures. Process management, and especially benchmarking, is shown to support pharmaceutical industry improvements.
Equilibrium Partitioning Sediment Benchmarks (ESBs) for the ...
This document describes procedures to determine the concentrations of nonionic organic chemicals in sediment interstitial waters. In previous ESB documents, the general equilibrium partitioning (EqP) approach was chosen for the derivation of sediment benchmarks because it accounts for the varying bioavailability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms. This equilibrium partitioning sediment benchmark (ESB) document was prepared by scientists from the Atlantic Ecology Division, Mid-Continent Ecology Division, and Western Ecology Division, the Office of Water, and private consultants. The document describes procedures to determine the interstitial water concentrations of nonionic organic chemicals in contaminated sediments. Based on these concentrations, guidance is provided on the derivation of toxic units to assess whether the sediments are likely to cause adverse effects to benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it is based on the concentrations of chemical(s) that are known to be harmful and bioavailable in the environment. This document, and five others published over the last nine years, will be useful for the Program Offices, including Superfund, a
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. In addition, the framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
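featsel itself is a C++/Perl framework and its actual interface is not reproduced here. As a stand-in, the following scikit-learn sketch shows the kind of comparison such a benchmark automates: two scoring criteria (cost functions) evaluated under identical cross-validation on a UCI-style dataset.

```python
# Hedged sketch: benchmarking two feature-selection criteria, not featsel's API.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

for name, score_fn in [("ANOVA F", f_classif),
                       ("mutual information", mutual_info_classif)]:
    pipe = make_pipeline(StandardScaler(),
                         SelectKBest(score_fn, k=10),     # keep 10 features
                         LogisticRegression(max_iter=1000))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:>20}: mean CV accuracy = {acc:.3f}")
```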
Trends of Science Education Research: An Automatic Content Analysis
ERIC Educational Resources Information Center
Chang, Yueh-Hsia; Chang, Chun-Yen; Tseng, Yuen-Hsien
2010-01-01
This study used scientometric methods to conduct an automatic content analysis on the development trends of science education research from the published articles in the four journals of "International Journal of Science Education, Journal of Research in Science Teaching, Research in Science Education, and Science Education" from 1990 to 2007. The…
Higher Education Research in Asia: A Publication and Co-Publication Analysis
ERIC Educational Resources Information Center
Jung, Jisun; Horta, Hugo
2013-01-01
This study explores higher education research in Asia. Drawing on scientometrics, the mapping of science and social network analysis, this paper examines the publications of 38 specialised journals on higher education over the past three decades. The findings indicate a growing number of higher education research publications but the proportion of…
[A scientometric view of Revista Médica de Chile].
Krauskopf, Manuel; Krauskopf, Erwin
2008-08-01
During the last decade Revista Médica de Chile increased its visibility, as measured by citations and impact factor. To perform a scientometric analysis to assess the performance of Revista Médica de Chile. Thomson ISI's Web of Science and Journal Citation Reports (JCR) were consulted for performance indicators of Revista Médica de Chile and Latin American journals whose subject is General and Internal Medicine. We also report the h-index of the journal, which infers quality linked to the quantity of the output. According to the h-index, Revista Médica de Chile ranks fourth among the 36 journals indexed and published in Argentina, Brazil, Chile and México. The top ten articles published by Revista Médica de Chile and the institutions with the highest contribution to the journal were identified using citations. In the Latin American region, Brazil markedly increased its scientific output, whereas Argentina, Chile and México maintained a plateau during the last decade. Revista Médica de Chile notably improved its performance. Its contribution to the Chilean scientific community dedicated to medicine appears to be of central value.
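The h-index used above is simple to compute from a list of citation counts. A minimal implementation with illustrative (invented) counts:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank        # the rank-th most cited paper has >= rank citations
        else:
            break
    return h

print(h_index([42, 18, 9, 7, 5, 5, 2, 1, 0]))  # -> 5
```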
Fetisov, V A; Smirenin, S A; Nesterov, A V; Khabova, Z S
2014-01-01
The authors undertook a scientometric analysis of the articles published in the journal "Sudebno-meditsinskaya ekspertiza" during the last 55 years (from 1958 to 2012), with special reference to the information support of research and practical activities of forensic medical experts in this country concerning topical problems of car accident injury. The search for relevant information revealed a total of 111 articles that were categorized into several groups for their further systematization and analysis, with a view to improving the effectiveness of research and experimental studies in the framework of the principal activities of the State Sanitary and Epidemiological Department of the Russian Federation. This article is an extension of the authors' previous publications concerning the main aspects of car accident injury. The forthcoming reports to be published in the journal "Sudebno-meditsinskaya ekspertiza" will present the results of further in-depth scientometric analysis of the data on road accidents in this country.
Pulikottil-Jacob, Ruth; Connock, Martin; Kandala, Ngianga-Bakwin; Mistry, Hema; Grove, Amy; Freeman, Karoline; Costa, Matthew; Sutcliffe, Paul; Clarke, Aileen
2016-01-01
Total hip replacement for end stage arthritis of the hip is currently the most common elective surgical procedure. In 2007 about 7.5% of UK implants were metal-on-metal joint resurfacing (MoM RS) procedures. Due to poor revision performance and concerns about metal debris, the use of RS had declined by 2012 to about a 1% share of UK hip procedures. This study estimated the lifetime cost-effectiveness of metal-on-metal resurfacing (RS) procedures versus commonly employed total hip replacement (THR) methods. We performed a cost-utility analysis using a well-established multi-state semi-Markov model from an NHS and personal and social services perspective. We used individual patient data (IPD) from the National Joint Registry (NJR) for England and Wales on RS and THR surgery for osteoarthritis recorded from April 2003 to December 2012. We used flexible parametric modelling of NJR RS data to guide identification of patient subgroups and RS devices which delivered revision rates within the NICE 5% revision rate benchmark at 10 years. RS procedures overall have an estimated revision rate of 13% at 10 years, compared to <4% for most THR devices. New NICE guidance now recommends a revision rate benchmark of <5% at 10 years. 60% of RS implants in men and 2% in women were predicted to be within the revision benchmark. RS devices satisfying the 5% benchmark were unlikely to be cost-effective compared to THR at a standard UK willingness to pay of £20,000 per quality-adjusted life-year. However, the probability of cost effectiveness was sensitive to small changes in the costs of devices or in quality of life or revision rate estimates. Our results imply that in most cases RS has not been a cost-effective resource and should probably not be adopted by decision makers concerned with the cost effectiveness of hip replacement, or by patients concerned about the likelihood of revision, regardless of patient age or gender.
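A toy version of the multi-state model underlying such a cost-utility analysis can make the mechanics concrete. All numbers below (transition probabilities, utilities, costs, the discount rate and the 40-year horizon) are invented placeholders, not the study's NJR-derived estimates.

```python
import numpy as np

P = np.array([            # annual transition matrix (fabricated)
    [0.97, 0.02, 0.01],   # well -> well / revised / dead
    [0.00, 0.97, 0.03],   # revised stays revised or dies
    [0.00, 0.00, 1.00],   # dead is absorbing
])
utility = np.array([0.80, 0.65, 0.0])   # QALYs accrued per year in each state
annual_cost = np.array([0.0, 300.0, 0.0])
revision_cost = 8000.0                  # one-off cost on entering "revised"

state = np.array([1.0, 0.0, 0.0])       # whole cohort starts "well"
qalys = costs = 0.0
for year in range(1, 41):               # 40-year horizon
    new_revisions = state[0] * P[0, 1]  # fraction revised this year
    state = state @ P
    disc = 1.035 ** -year               # 3.5% annual discounting (assumed)
    qalys += disc * state @ utility
    costs += disc * (state @ annual_cost + new_revisions * revision_cost)

print(f"discounted QALYs: {qalys:.2f}, discounted costs: GBP {costs:.0f}")
```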
Benchmark Dataset for Whole Genome Sequence Compression.
C L, Biji; S Nair, Achuthsankar
2017-01-01
The research in DNA data compression lacks a standard dataset to test compression tools specific to DNA. This paper argues that the current state of achievement in DNA compression cannot be benchmarked in the absence of a scientifically compiled whole genome sequence dataset, and proposes a benchmark dataset using a multistage sampling procedure. Considering the genome sequences of organisms available in the National Center for Biotechnology Information (NCBI) as the universe, the proposed dataset selects 1,105 prokaryotes, 200 plasmids, 164 viruses, and 65 eukaryotes. This paper reports the results of using three established tools on the newly compiled dataset and shows that their strengths and weaknesses become evident only with a comparison based on the scientifically compiled benchmark dataset. The sample dataset and the respective links are available @ https://sourceforge.net/projects/benchmarkdnacompressiondataset/.
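The multistage sampling idea can be sketched briefly: stratify the universe of genomes by taxon, then draw each stratum's quota at random. The accession identifiers below are placeholders; only the stratum sizes come from the abstract.

```python
import random

random.seed(42)
universe = {                      # taxon -> hypothetical accession IDs
    "prokaryote": [f"PROK_{i:05d}" for i in range(20000)],
    "plasmid":    [f"PLAS_{i:05d}" for i in range(4000)],
    "virus":      [f"VIR_{i:05d}"  for i in range(3000)],
    "eukaryote":  [f"EUK_{i:05d}"  for i in range(800)],
}
# Target stratum sizes taken from the abstract above.
targets = {"prokaryote": 1105, "plasmid": 200, "virus": 164, "eukaryote": 65}

benchmark = {taxon: random.sample(universe[taxon], k=n)
             for taxon, n in targets.items()}
for taxon, picked in benchmark.items():
    print(taxon, len(picked), picked[:2])
```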
NASA Astrophysics Data System (ADS)
Izah Anuar, Nurul; Saptari, Adi
2016-02-01
This paper addresses particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It evaluates and compares the performance of different particle representation procedures in Particle Swarm Optimization (PSO) in the case of solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. It is an important step to be carried out so that each particle in PSO can represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective function is to minimize the makespan, using MATLAB software. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is the fact that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
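Of the encodings compared, the random-key scheme is the easiest to illustrate: each operation gets a real-valued key, and sorting the keys turns a continuous particle position into a discrete operation sequence. The sketch below uses a toy 3-job, 2-machine instance, not FT06/FT10.

```python
n_jobs, n_machines = 3, 2          # each job visits every machine once

# A particle position: one real "key" per operation (toy values).
position = [0.71, 0.15, 0.62, 0.90, 0.33, 0.48]

# Sorting the keys gives a priority order over operations; operation i
# belongs to job i // n_machines, yielding a job-repetition sequence in
# which each job appears n_machines times.
order = sorted(range(len(position)), key=lambda i: position[i])
job_sequence = [i // n_machines for i in order]
print(job_sequence)   # -> [0, 2, 2, 1, 0, 1]
```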
A scientometric examination of the water quality research in India.
Nishy, P; Saroja, Renuka
2018-03-16
Water quality has emerged as a fast-developing research area, and regular assessment of research activity is necessary for successful R&D promotion. Water quality research carried out in different countries has increased over the years; the USA ranks first in productivity, while India stands in seventh position in quantity and ninth position in quality of the research output, and shows steady growth in the field. Four thousand six hundred sixteen articles from India were assessed with respect to citations received and the distributions of source countries, institutes, journals, impact factors, title words and author keywords. The qualitative and quantitative analysis identifies the contributions of the major institutions involved in research. Much of the country's water quality research is carried out by universities, public research institutions and science councils, whereas the contribution from the Ministry of Water Resources is not significant. A considerable portion of Indian research is communicated through foreign journals, the most active being Environmental Monitoring and Assessment. Twenty-one percent of the work is reported in journals published from India and around 7% in open access journals. The study highlights that international collaborative research resulted in high-quality papers. The authors analyse the published research to gain a deeper understanding of focus areas through word cluster analyses of title words and keywords: while many papers deal with 'contamination', 'assessment' and 'treatment', and sufficient studies address the 'water quality index' and 'toxicity', considerable work concerns environmental, agricultural, industrial and health problems related to water quality. This detailed scientometric study of 109,766 research works from SCI-E during 1986-2015 plots the trends and identifies research hotspots for the benefit of scientists in the subject area; it comprehends the magnitude of water quality research and establishes future research directions using various scientometric indicators.
International scientific communications in the field of colorectal tumour markers.
Ivanov, Krasimir; Donev, Ivan
2017-05-27
To analyze scientometrically the dynamic science internationalization on colorectal tumour markers as reflected in five information portals and to outline the significant journals, scientists and institutions. A retrospective problem-oriented search was performed in Web of Science Core Collection (WoS), MEDLINE, BIOSIS Citation Index (BIOSIS) and Scopus for 1986-2015 as well as in Derwent Innovations Index (Derwent) for 1995-2015. Several specific scientometric parameters of the publication output and citation activity were comparatively analyzed. The following scientometric parameters were analyzed: (1) annual dynamics of publications; (2) scientific institutions; (3) journals; (4) authors; (5) scientific forums; (6) patents - number of patents, names and countries of inventors, and (7) citations (number of citations to publications by single authors received in WoS, BIOSIS Citation Index and Scopus). There is a trend towards increasing publication output on colorectal tumour markers worldwide along with high citation rates. Authors from 70 countries have published their research results in journals and conference proceedings in 21 languages. There is considerable country stratification similar to that in most systematic investigations. The information provided to end users and scientometricians varies between these databases in terms of most parameters due to different journal coverage, indexing systems and editorial policy. The lists of the so-called "core" journals and most productive authors in WoS, BIOSIS, MEDLINE and Scopus, along with the list of the most productive authors-inventors in Derwent, are of particular interest to beginners in the field, institutional and national science managers and journal editorial board members. The role of the purposeful assessment of scientific forums and patents is emphasized. Our results along with this problem-oriented collection containing the researchers' names, addresses and publications could contribute to a more effective international collaboration of the coloproctologists from smaller countries and thus improve their visibility on the world information market.
What Is Citizen Science? – A Scientometric Meta-Analysis
Kullenberg, Christopher; Kasperowski, Dick
2016-01-01
Context The concept of citizen science (CS) is currently referred to by many actors inside and outside science and research. Several descriptions of this purportedly new approach of science are often heard in connection with large datasets and the possibilities of mobilizing crowds outside science to assist with observations and classifications. However, other accounts refer to CS as a way of democratizing science, aiding concerned communities in creating data to influence policy and as a way of promoting political decision processes involving environment and health. Objective In this study we analyse two datasets (N = 1935, N = 633) retrieved from the Web of Science (WoS) with the aim of giving a scientometric description of what the concept of CS entails. We account for its development over time and which strands of research have adopted CS, and give an assessment of the scientific output achieved in CS-related projects. To attain this, scientometric methods have been combined with qualitative approaches to render more precise search terms. Results Results indicate that there are three main focal points of CS. The largest is composed of research on biology, conservation and ecology, and utilizes CS mainly as a methodology of collecting and classifying data. A second strand of research has emerged through geographic information research, where citizens participate in the collection of geographic data. Thirdly, there is a line of research relating to the social sciences and epidemiology, which studies and facilitates public participation in relation to environmental issues and health. In terms of scientific output, the largest body of articles is to be found in biology and conservation research. In absolute numbers, the amount of publications generated by CS is low (N = 1935), but over the past decade a new and very productive line of CS based on digital platforms has emerged for the collection and classification of data. PMID:26766577
Sedlack, Robert E; Coyle, Walter J
2016-03-01
The Mayo Colonoscopy Skills Assessment Tool (MCSAT) has previously been used to describe learning curves and competency benchmarks for colonoscopy; however, these data were limited to a single training center. The newer Assessment of Competency in Endoscopy (ACE) tool is a refinement of the MCSAT tool put forth by the Training Committee of the American Society for Gastrointestinal Endoscopy, intended to include additional important quality metrics. The goal of this study is to validate the changes made by updating this tool and establish more generalizable and reliable learning curves and competency benchmarks for colonoscopy by examining a larger national cohort of trainees. In a prospective, multicenter trial, gastroenterology fellows at all stages of training had their core cognitive and motor skills in colonoscopy assessed by staff. Evaluations occurred at set intervals of every 50 procedures throughout the 2013 to 2014 academic year. Skills were graded by using the ACE tool, which uses a 4-point grading scale defining the continuum from novice to competent. Average learning curves for each skill were established at each interval in training and competency benchmarks for each skill were established using the contrasting groups method. Ninety-three gastroenterology fellows at 10 U.S. academic institutions had 1061 colonoscopies assessed by using the ACE tool. Average scores of 3.5 were found to be inclusive of all minimal competency thresholds identified for each core skill. Cecal intubation times of less than 15 minutes and independent cecal intubation rates of 90% were also identified as additional competency thresholds during analysis. The average fellow achieved all cognitive and motor skill endpoints by 250 procedures, with >90% surpassing these thresholds by 300 procedures. Nationally generalizable learning curves for colonoscopy skills in gastroenterology fellows are described. Average ACE scores of 3.5, cecal intubation rates of 90%, and intubation times less than 15 minutes are recommended as minimal competency criteria. On average, it takes 250 procedures to achieve competence in colonoscopy. The thresholds found in this multicenter cohort by using the ACE tool are nearly identical to the previously established MCSAT benchmarks and are consistent with recent gastroenterology training recommendations but far higher than current training requirements in other specialties. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
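The contrasting groups method mentioned above sets a pass mark where the score distributions of a competent group and a not-yet-competent group cross. A minimal sketch, assuming normally distributed scores and fabricated ACE-style values:

```python
from statistics import NormalDist, mean, stdev

novice = [2.1, 2.4, 2.8, 2.6, 3.0, 2.2, 2.7]      # hypothetical ACE scores
competent = [3.6, 3.9, 3.4, 3.8, 4.0, 3.5, 3.7]   # hypothetical ACE scores

def crossing_point(a, b):
    """Intersection of two fitted normal densities, found by bisection
    between the two group means."""
    da, db = NormalDist(mean(a), stdev(a)), NormalDist(mean(b), stdev(b))
    lo, hi = da.mean, db.mean
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if da.pdf(mid) > db.pdf(mid) else (lo, mid)
    return (lo + hi) / 2

print(f"suggested pass mark: {crossing_point(novice, competent):.2f}")
```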
The development of a virtual reality training curriculum for colonoscopy.
Sugden, Colin; Aggarwal, Rajesh; Banerjee, Amrita; Haycock, Adam; Thomas-Gibson, Siwan; Williams, Christopher B; Darzi, Ara
2012-07-01
The development of a structured virtual reality (VR) training curriculum for colonoscopy using high-fidelity simulation. Colonoscopy requires detailed knowledge and technical skill. Changes to working practices in recent times have reduced the availability of traditional training opportunities. Much might, therefore, be achieved by applying novel technologies such as VR simulation to colonoscopy. Scientifically developed device-specific curricula aim to maximize the yield of laboratory-based training by focusing on validated modules and linking progression to the attainment of benchmarked proficiency criteria. Fifty participants comprised of 30 novices (<10 colonoscopies), 10 intermediates (100 to 500 colonoscopies), and 10 experienced (>500 colonoscopies) colonoscopists were recruited to participate. Surrogates of proficiency, such as number of procedures undertaken, determined prospective allocation to 1 of 3 groups (novice, intermediate, and experienced). Construct validity and learning value (comparison between groups and within groups respectively) for each task and metric on the chosen simulator model determined suitability for inclusion in the curriculum. Eight tasks in possession of construct validity and significant learning curves were included in the curriculum: 3 abstract tasks, 4 part-procedural tasks, and 1 procedural task. The whole-procedure task was valid for 11 metrics including the following: "time taken to complete the task" (1238, 343, and 293 s; P < 0.001) and "insertion length with embedded tip" (23.8, 3.6, and 4.9 cm; P = 0.005). Learning curves consistently plateaued at or beyond the ninth attempt. Valid metrics were used to define benchmarks, derived from the performance of the experienced cohort, for each included task. A comprehensive, stratified, benchmarked, whole-procedure curriculum has been developed for a modern high-fidelity VR colonoscopy simulator.
Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E
2017-09-01
A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014. The objective was to establish new benchmark data on IHC laboratory practices. A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.
DRG benchmarking study establishes national coding norms.
Vaul, J H
1998-05-01
With the increase in fraud and abuse investigations, healthcare financial managers should examine their organization's medical record coding procedures. The Federal government and third-party payers are looking specifically for improper billing of outpatient services, unbundling of procedures to increase payment, assigning higher-paying DRG codes for inpatient claims, and other abuses. A recent benchmarking study of Medicare Provider Analysis and Review (MEDPAR) data has established national norms for hospital coding and case mix based on DRGs and has revealed the majority of atypical coding cases fall into six DRG pairs. Organizations with a greater percentage of atypical cases--those more likely to be scrutinized by Federal investigators--will want to conduct suitable review and be sure appropriate documentation exists to justify the coding.
Dynamic vehicle routing with time windows in theory and practice.
Yang, Zhiwei; van Osta, Jan-Paul; van Veen, Barry; van Krevelen, Rick; van Klaveren, Richard; Stam, Andries; Kok, Joost; Bäck, Thomas; Emmerich, Michael
2017-01-01
The vehicle routing problem is a classical combinatorial optimization problem. This work is about a variant of the vehicle routing problem with dynamically changing orders and time windows. In real-world applications, demands often change during operation time: new orders occur and others are canceled, in which case new schedules need to be generated on-the-fly. Online optimization algorithms for dynamic vehicle routing address this problem, but so far they do not consider time windows. Moreover, to match the scenarios found in real-world problems, adaptations of benchmarks are required. In this paper, a practical problem is modeled based on the procedure of daily routing of a delivery company. New orders by customers are introduced dynamically during the working day and need to be integrated into the schedule. A multiple ant colony algorithm combined with powerful local search procedures is proposed to solve the dynamic vehicle routing problem with time windows. The performance is tested on a new benchmark based on simulations of a working day. The problems are taken from Solomon's benchmarks, but a certain percentage of the orders are only revealed to the algorithm during operation time. Different versions of the MACS algorithm are tested and a high-performing variant is identified. Finally, the algorithm is tested in situ: in a field study, the algorithm schedules a fleet of cars for a surveillance company. We compare the performance of the algorithm to that of the procedure used by the company and we summarize insights gained from the implementation of the real-world study. The results show that the multiple ant colony algorithm can find much better solutions on the academic benchmark problems and can also be integrated in a real-world environment.
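The core on-the-fly operation in such a dynamic setting is inserting a newly revealed order into an existing schedule without violating any time window. The following sketch shows cheapest feasible insertion into a single route; the coordinates and windows are toy values rather than Solomon instances, and the ant colony and local search layers are omitted.

```python
import math

depot = (0, 0)
# route stops: (x, y, earliest, latest, service_time) -- fabricated
route = [(2, 1, 0, 10, 1), (5, 2, 5, 20, 1), (6, 6, 10, 30, 1)]
new_order = (4, 4, 8, 25, 1)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def feasible_cost(route):
    """Route length if every time window holds, else None.
    Travel time equals distance; early arrivals wait."""
    t, length, prev = 0.0, 0.0, depot + (0, 0, 0)
    for stop in route:
        d = dist(prev, stop)
        t = max(t + d, stop[2])          # wait if arriving early
        if t > stop[3]:
            return None                  # time window violated
        t += stop[4]                     # service time
        length += d
        prev = stop
    return length + dist(prev, depot)

candidates = [route[:i] + [new_order] + route[i:] for i in range(len(route) + 1)]
best = min((c for c in candidates if feasible_cost(c) is not None),
           key=feasible_cost)
print("insert at position", best.index(new_order),
      "-> route cost", round(feasible_cost(best), 2))
```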
A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images.
Vázquez, David; Bernal, Jorge; Sánchez, F Javier; Fernández-Esparrach, Gloria; López, Antonio M; Romero, Adriana; Drozdzal, Michal; Courville, Aaron
2017-01-01
Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.
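Benchmarks like this one are typically scored with per-class intersection-over-union (IoU) between predicted and ground-truth label masks. A minimal sketch on synthetic label maps follows; the four-class layout is an assumption patterned on the abstract.

```python
import numpy as np

def per_class_iou(pred, truth, n_classes):
    """pred, truth: integer label masks of the same shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, truth == c).sum()
        union = np.logical_or(pred == c, truth == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious

rng = np.random.default_rng(0)
truth = rng.integers(0, 4, size=(64, 64))   # synthetic 4-class ground truth
pred = truth.copy()
pred[:8] = 0                                # simulate prediction errors
print(["%.3f" % v for v in per_class_iou(pred, truth, 4)])
```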
NASA Technical Reports Server (NTRS)
Mason, Gregory S.; Berg, Martin C.; Mukhopadhyay, Vivek
2002-01-01
To study the effectiveness of various control system design methodologies, the NASA Langley Research Center initiated the Benchmark Active Controls Project. In this project, the various methodologies were applied to design a flutter suppression system for the Benchmark Active Controls Technology (BACT) Wing. This report describes a project at the University of Washington to design a multirate suppression system for the BACT wing. The objective of the project was twofold: first, to develop a methodology for designing robust multirate compensators, and second, to demonstrate the methodology by applying it to the design of a multirate flutter suppression system for the BACT wing.
Developing a benchmark for emotional analysis of music.
Aljanaki, Anna; Yang, Yi-Hsuan; Soleymani, Mohammad
2017-01-01
The music emotion recognition (MER) field has rapidly expanded in the last decade. Many new methods and new audio features have been developed to improve the performance of MER algorithms. However, it is very difficult to compare the performance of the new methods because of the diversity of data representations and the scarcity of publicly available data. In this paper, we address these problems by creating a data set and a benchmark for MER. The data set that we release, the MediaEval Database for Emotional Analysis in Music (DEAM), is the largest available data set of dynamic annotations (valence and arousal annotations for 1,802 songs and song excerpts licensed under Creative Commons with 2 Hz time resolution). Using DEAM, we organized the 'Emotion in Music' task at the MediaEval Multimedia Evaluation Campaign from 2013 to 2015. The benchmark attracted, in total, 21 active teams to participate in the challenge. We analyze the results of the benchmark: the winning algorithms and feature sets. We also describe the design of the benchmark, the evaluation procedures and the data cleaning and transformations that we suggest. The results from the benchmark suggest that recurrent neural network based approaches combined with large feature sets work best for dynamic MER.
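Dynamic MER systems evaluated on DEAM-style data are commonly scored per song over the 2 Hz valence and arousal time series, for example with RMSE (one of several metrics used in such campaigns). A small sketch with fabricated valence samples:

```python
import math

# song -> list of (predicted, annotated) valence samples at 2 Hz (fabricated)
songs = {
    "song_001": [(0.10, 0.20), (0.15, 0.18), (0.30, 0.25)],
    "song_002": [(-0.40, -0.35), (-0.10, -0.20), (0.00, -0.05)],
}

def rmse(pairs):
    return math.sqrt(sum((p - a) ** 2 for p, a in pairs) / len(pairs))

per_song = {name: rmse(pairs) for name, pairs in songs.items()}
print(per_song)
print("mean RMSE:", sum(per_song.values()) / len(per_song))
```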
Science and Technology Text Mining: Origins of Database Tomography and Multi-Word Phrase Clustering
2003-08-15
six decades to the pioneering work in: 1) lexicography of Hornby [1942] to account for co-occurrence knowledge, and 2) linguistics of De Saussure ...of Development in a Research Field," Scientometrics, Vol. 19, No. 1, 1990b. De Saussure, F., "Cours de Linguistique Generale," 4eme Edition, Librairie
Inciting the Metric Oriented Humanist: Teaching Bibliometrics in a Faculty of Humanities
ERIC Educational Resources Information Center
Zuccala, Alesia
2016-01-01
In the past few decades the core of bibliometrics has predominantly been "scientometric" in nature, due to the first commercial citation index having been created for scientific journals and articles. The production of citation indexes for books implies that proper education related to their use is now becoming critical. A new breed of…
Similarity Measures in Scientometric Research: The Jaccard Index versus Salton's Cosine Formula.
ERIC Educational Resources Information Center
Hamers, Lieve; And Others
1989-01-01
Describes two similarity measures used in citation and co-citation analysis--the Jaccard index and Salton's cosine formula--and investigates the relationship between the two measures. It is shown that Salton's formula yields a numerical value that is twice Jaccard's index in most cases, and an explanation is offered. (13 references) (CLB)
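Both measures are one-line functions over sets, which makes the near-factor-of-two relationship reported above easy to see on a toy co-citation example:

```python
def jaccard(a, b):
    return len(a & b) / len(a | b)

def salton_cosine(a, b):
    return len(a & b) / (len(a) * len(b)) ** 0.5

A = {"doc1", "doc2", "doc3", "doc4", "doc5", "doc6", "doc7", "doc8"}
B = {"doc7", "doc8", "doc9", "doc10", "doc11", "doc12", "doc13", "doc14"}

print("Jaccard:", jaccard(A, B))          # 2/14 ~ 0.143
print("Cosine :", salton_cosine(A, B))    # 2/8  = 0.250, close to 2x Jaccard
```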
ERIC Educational Resources Information Center
Kalz, Marco; Specht, Marcus
2014-01-01
This paper deals with the assessment of the crossdisciplinarity of technology-enhanced learning (TEL). Based on a general discussion of the concept interdisciplinarity and a summary of the discussion in the field, two empirical methods from scientometrics are introduced and applied. Science overlay maps and the Rao-Stirling diversity index are…
USDA-ARS's Scientific Manuscript database
Standard area diagrams (SADs) have long been used as a tool to aid the estimation of plant disease severity, an essential variable in phytopathometry. Formal validation of SADs was not considered prior to the early 1990s, when considerable effort began to be invested developing SADs and assessing th...
Willemse, Elias J; Joubert, Johan W
2016-09-01
In this article we present benchmark datasets for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities (MCARPTIF). The problem is a generalisation of the Capacitated Arc Routing Problem (CARP), and closely represents waste collection routing. Four different test sets are presented, each consisting of multiple instance files, which can be used to benchmark different solution approaches for the MCARPTIF. An in-depth description of the datasets can be found in "Constructive heuristics for the Mixed Capacity Arc Routing Problem under Time Restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [2] and "Splitting procedures for the Mixed Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, in press) [4]. The datasets are publicly available from "Library of benchmark test sets for variants of the Capacitated Arc Routing Problem under Time restrictions with Intermediate Facilities" (Willemse and Joubert, 2016) [3].
Experimental power density distribution benchmark in the TRIGA Mark II reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snoj, L.; Stancar, Z.; Radulovic, V.
2012-07-01
In order to improve the power calibration process and to benchmark the existing computational model of the TRIGA Mark II reactor at the Jožef Stefan Institute (JSI), a bilateral project was started as part of the agreement between the French Commissariat à l'énergie atomique et aux énergies alternatives (CEA) and the Ministry of Higher Education, Science and Technology of Slovenia. One of the objectives of the project was to analyze and improve the power calibration process of the JSI TRIGA reactor (procedural improvement and uncertainty reduction) by using absolutely calibrated CEA fission chambers (FCs). This is one of the few available power density distribution benchmarks for testing not only the fission rate distribution but also the absolute values of the fission rates. Our preliminary calculations indicate that the total experimental uncertainty of the measured reaction rate is sufficiently low that the experiments could be considered as benchmark experiments. (authors)
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
48 CFR 970.4402-2 - General requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... techniques such as partnering agreements, ombudsmen, and alternative disputes procedures; (6) Use of self-assessment and benchmarking techniques to support continuous improvement in purchasing; (7) Maintenance of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Grace L.; Department of Health Services Research, The University of Texas MD Anderson Cancer Center, Houston, Texas; Jiang, Jing
Purpose: High-quality treatment for intact cervical cancer requires external radiation therapy, brachytherapy, and chemotherapy, carefully sequenced and completed without delays. We sought to determine how frequently current treatment meets quality benchmarks and whether new technologies have influenced patterns of care. Methods and Materials: By searching diagnosis and procedure claims in MarketScan, an employment-based health care claims database, we identified 1508 patients with nonmetastatic, intact cervical cancer treated from 1999 to 2011, who were <65 years of age and received >10 fractions of radiation. Treatments received were identified using procedure codes and compared with 3 quality benchmarks: receipt of brachytherapy, receipt of chemotherapy, and radiation treatment duration not exceeding 63 days. The Cochran-Armitage test was used to evaluate temporal trends. Results: Seventy-eight percent of patients (n=1182) received brachytherapy, with brachytherapy receipt stable over time (Cochran-Armitage P-trend=.15). Among patients who received brachytherapy, 66% had high-dose rate and 34% had low-dose rate treatment, although use of high-dose rate brachytherapy steadily increased to 75% by 2011 (P-trend<.001). Eighteen percent of patients (n=278) received intensity modulated radiation therapy (IMRT), and IMRT receipt increased to 37% by 2011 (P-trend<.001). Only 2.5% of patients (n=38) received IMRT in the setting of brachytherapy omission. Overall, 79% of patients (n=1185) received chemotherapy, and chemotherapy receipt increased to 84% by 2011 (P-trend<.001). Median radiation treatment duration was 56 days (interquartile range, 47-65 days); however, duration exceeded 63 days in 36% of patients (n=543). Although 98% of patients received at least 1 benchmark treatment, only 44% received treatment that met all 3 benchmarks. With more stringent indicators (brachytherapy, ≥4 chemotherapy cycles, and duration not exceeding 56 days), only 25% of patients received treatment that met all benchmarks. Conclusion: In this cohort, most cervical cancer patients received treatment that did not comply with all 3 benchmarks for quality treatment. In contrast to increasing receipt of newer radiation technologies, there was little improvement in receipt of essential treatment benchmarks.
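The temporal trends above rely on the Cochran-Armitage test for a linear trend in proportions across ordered years. A self-contained sketch with fabricated yearly counts (not the MarketScan data):

```python
import math

years = [0, 1, 2, 3, 4]            # ordered scores for five study years
received = [40, 48, 55, 62, 70]    # patients meeting the benchmark per year
totals = [80, 80, 80, 80, 80]      # patients treated per year

N, R = sum(totals), sum(received)
p = R / N                                       # overall proportion
s_mean = sum(s * n for s, n in zip(years, totals)) / N
num = sum(s * r for s, r in zip(years, received)) - R * s_mean
var = p * (1 - p) * sum(n * (s - s_mean) ** 2 for s, n in zip(years, totals))
z = num / math.sqrt(var)
p_two_sided = math.erfc(abs(z) / math.sqrt(2))  # normal approximation
print(f"z = {z:.2f}, two-sided P-trend = {p_two_sided:.2g}")
```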
Benchmarking of surgical complications in gynaecological oncology: prospective multicentre study.
Burnell, M; Iyer, R; Gentry-Maharaj, A; Nordin, A; Liston, R; Manchanda, R; Das, N; Gornall, R; Beardmore-Gray, A; Hillaby, K; Leeson, S; Linder, A; Lopes, A; Meechan, D; Mould, T; Nevin, J; Olaitan, A; Rufford, B; Shanbhag, S; Thackeray, A; Wood, N; Reynolds, K; Ryan, A; Menon, U
2016-12-01
To explore the impact of risk-adjustment on surgical complication rates (CRs) for benchmarking gynaecological oncology centres. Prospective cohort study. Ten UK accredited gynaecological oncology centres. Women undergoing major surgery on a gynaecological oncology operating list. Patient co-morbidity, surgical procedures and intra-operative (IntraOp) complications were recorded contemporaneously by surgeons for 2948 major surgical procedures. Postoperative (PostOp) complications were collected from hospitals and patients. Risk-prediction models for IntraOp and PostOp complications were created using penalised (lasso) logistic regression using over 30 potential patient/surgical risk factors. Observed and risk-adjusted IntraOp and PostOp CRs for individual hospitals were calculated. Benchmarking using colour-coded funnel plots and observed-to-expected ratios was undertaken. Overall, IntraOp CR was 4.7% (95% CI 4.0-5.6) and PostOp CR was 25.7% (95% CI 23.7-28.2). The observed CRs for all hospitals were under the upper 95% control limit for both IntraOp and PostOp funnel plots. Risk-adjustment and use of observed-to-expected ratio resulted in one hospital moving to the >95-98% CI (red) band for IntraOp CRs. Use of only hospital-reported data for PostOp CRs would have resulted in one hospital being unfairly allocated to the red band. There was little concordance between IntraOp and PostOp CRs. The funnel plots and overall IntraOp (≈5%) and PostOp (≈26%) CRs could be used for benchmarking gynaecological oncology centres. Hospital benchmarking using risk-adjusted CRs allows fairer institutional comparison. IntraOp and PostOp CRs are best assessed separately. As hospital under-reporting is common for postoperative complications, use of patient-reported outcomes is important. Risk-adjusted benchmarking of surgical complications for ten UK gynaecological oncology centres allows fairer comparison. © 2016 Royal College of Obstetricians and Gynaecologists.
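The risk-adjustment pipeline described in this abstract (a penalised logistic model for expected complication risk, then per-centre observed-to-expected ratios) can be illustrated on simulated data. The sketch below uses scikit-learn's L1-penalised logistic regression as a stand-in for the authors' lasso fit; all data are fabricated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 3000
X = rng.normal(size=(n, 5))                 # patient/surgical risk factors
hospital = rng.integers(0, 10, size=n)      # ten centres
true_risk = 1 / (1 + np.exp(-(-1.2 + X[:, 0] + 0.5 * X[:, 1])))
y = rng.random(n) < true_risk               # complication indicator

# Lasso-penalised logistic model of complication risk.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
model.fit(X, y)
expected = model.predict_proba(X)[:, 1]

# Observed-to-expected ratio per centre; values near 1 are as expected.
for h in range(10):
    mask = hospital == h
    oe = y[mask].sum() / expected[mask].sum()
    print(f"hospital {h}: O/E = {oe:.2f}")
```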
Albrecht, A; Levenson, B; Göhring, S; Haerer, W; Reifart, N; Ringwald, G; Troger, B
2009-10-01
QuIK is the German acronym for QUality assurance in Invasive Cardiology. It describes the continuous project of electronic data collection in cardiac catheterization laboratories all over Germany. Mainly members of the German Society of Cardiologists in Private Practice (BNK) participate in this computer-based project. Since 1996, data on diagnostic and interventional procedures have been collected and sent to a registry center where a regular benchmarking analysis of the results is performed. Part of the project is a yearly auditing process, including an on-site visit to the cath lab, to guarantee the reliability of the information collected. Since 1996 about one million procedures have been documented. Georg Thieme Verlag KG Stuttgart, New York.
Moving template analysis of crack growth. 1: Procedure development
NASA Astrophysics Data System (ADS)
Padovan, Joe; Guo, Y. H.
1994-06-01
Based on a moving template procedure, this two-part series develops a method to follow the crack tip physics in a self-adaptive manner that provides a uniformly accurate prediction of crack growth. For multiple crack environments, this is achieved by attaching a moving template to each crack tip. The templates are each individually oriented to follow the associated growth orientation and rate. In this part, the essentials of the procedure are derived for application to fatigue crack environments. Overall, the scheme possesses several hierarchical levels, i.e. the global model, the interpolatively tied moving template, and a multilevel element death option to simulate the crack wake. To speed up computation, the hierarchical polytree scheme is used to reorganize the global stiffness inversion process. In addition to developing the various features of the scheme, the accuracy of predictions for various crack lengths is also benchmarked. Part 2 extends the scheme to multiple crack problems, where extensive benchmarking is also presented to verify the scheme.
Vogelzang, B. H.; Scutaru, C.; Mache, S.; Vitzthum, K.; Quarcoo, David; Groneberg, D. A.
2011-01-01
Background: Depression is a major cause of suicide worldwide. This association is reflected in numerous scientific publications reporting on studies of this theme. There is currently no overall evaluation of the global research activities in this field. Aim: The aim of the current study was to analyze long-term developments and recent research trends in this area. Material and Methods: We searched the Web of Science databases developed by the Thomson Institute for Scientific Information for items concerning depression and suicide published between 1900 and 2007 and analyzed the results using scientometric methods and density-equalizing calculations. Results: We found that publications on this topic increased dramatically in the time period 1990 to 2007. The comparison of the different journals showed that the Archives of General Psychiatry had the highest average citation rate (more than twice that of any other journal). When comparing authors, we found that not all the authors who had high h-indexes cooperated much with other authors. The analysis of countries showed that they published papers in relation to their Gross Domestic Product and Purchasing Power Parity. Among the G8 countries, Russia had the highest male suicide rate in 1999 (more than twice that of any of the other G8 countries), despite having published the fewest papers and cooperated least with other countries among the G8. Conclusion: We conclude that, although there has been an increase in publications on this topic from 1990 to 2006, this increase has a lower gradient than that seen for psoriasis and rheumatoid arthritis. PMID:22021955
ERIC Educational Resources Information Center
Olijnyk, Nicholas Victor
2014-01-01
The central aim of the current research is to explore and describe the profile, dynamics, and structure of the information security specialty. This study's objectives are guided by four research questions: 1. What are the salient features of information security as a specialty? 2. How has the information security specialty emerged and evolved from…
Higher Education Research as a Field of Study in South Korea: Inward but Starting to Look Outward
ERIC Educational Resources Information Center
Jung, Jisun
2015-01-01
This study aims to explore the development of higher education research in South Korea based on historical and scientometric perspectives. After the evolution of the country's higher education research community is presented, articles focusing on higher education from 1995 to 2012 are analysed. In total, 145 articles in international journals and…
ERIC Educational Resources Information Center
Maricato, João de Melo; Vilan Filho, Jayme Leiro
2018-01-01
Introduction: Altmetrics is an area under construction, with a potential to study the impacts of academic products from social media data. It is believed that altmetrics can capture social and academic impacts, going beyond measures obtained using bibliometric and scientometric indicators. This research aimed to analyse aspects, characteristics…
Analysis and Reflections on the Third Learning Analytics and Knowledge Conference (LAK 2013)
ERIC Educational Resources Information Center
Ochoa, Xavier; Suthers, Dan; Verbert, Katrien; Duval, Erik
2014-01-01
Analyzing a conference, especially one as young and focused as LAK, provides the opportunity to observe the structure and contributions of the scientific community around it. This work performs a scientometric analysis, coupled with a more in-depth manual content analysis, to extract this insight from the proceedings and program of LAK 2013.…
SciELO, Scientific Electronic Library Online, a Database of Open Access Journals
ERIC Educational Resources Information Center
Meneghini, Rogerio
2013-01-01
This essay discusses SciELO, a scientific journal database operating in 14 countries. It covers over 1000 journals, providing open access to full texts and tables of scientometric data. In Brazil it is responsible for a collection of nearly 300 journals, selected over 15 years as the best Brazilian periodicals in the natural and social sciences.…
The study of aquatic macrophytes in Neotropics: a scientometrical view of the main trends and gaps.
Padial, A A; Bini, L M; Thomaz, S M
2008-11-01
Aquatic macrophytes comprise a diverse group of organisms, including angiosperms, ferns, mosses, liverworts and some macroalgae, that occur in seasonally or permanently wet environments. Among other implications, aquatic macrophytes are highly productive and play an important structuring role in aquatic environments. Ecological studies involving aquatic plants have increased substantially in recent years. However, a precise view of the research devoted to aquatic macrophytes in the Neotropics is necessary for a reliable evaluation of the scientific production. In the current study, we performed a scientometric analysis of the scientific production devoted to Neotropical macrophytes in an attempt to identify the main trends and gaps in research concerning this group. Publications devoted to macrophytes in the Neotropics increased conspicuously in the last two decades. Brazil, Argentina, Mexico and Chile were the most productive among the Neotropical countries. Our analyses showed that the studies dealt mostly with the influences of aquatic macrophytes on organisms and abiotic features. Studies with a predictive approach or aiming to test ecological hypotheses are scarce. In addition, research aiming to describe unknown species is still necessary. This is essential to support conservation efforts and to underpin further investigations testing ecological hypotheses.
Fetisov, V A; Gusarov, A A; Khabova, Z S; Baibarza, N V; Rudenko, I A
2015-01-01
The objective of the present work was to analyze the scientometric characteristics of the publication activity of the authors of articles concerning investigations into thanatogenesis and the causes of death that were submitted for publication in the journal "Sudebno-meditsinskaya ekspertiza" during the period from 2000 to 2014. The analysis was aimed at detecting the priority fields of research of interest not only to domestic but especially to foreign specialists. The study revealed the most popular Russian-language and foreign journals most frequently cited by the authors of "Sudebno-meditsinskaya ekspertiza". It was shown that the largest number of publications was submitted by research groups affiliated with the departments and other subdivisions of the practical expert institutions of Moscow, Saint Petersburg, Astrakhan, Krasnodar, Baku, and Krasnoyarsk. It is concluded that further analysis and assessment of the research activities of specialists in forensic medical expertise with the use of scientometric methods constitute an indispensable condition for the development and improvement of the quality of forensic medical expertise in the Russian Federation.
A Bibliometric Profile of Disaster Medicine Research from 2008 to 2017: A Scientometric Analysis.
Zhou, Liang; Zhang, Ping; Zhang, Zhigang; Fan, Lidong; Tang, Shuo; Hu, Kunpeng; Xiao, Nan; Li, Shuguang
2018-05-02
This study analyzed and assessed publication trends in articles on "disaster medicine," using scientometric analysis. Data were obtained from the Web of Science Core Collection (WoSCC) of Thomson Reuters on March 27, 2017. A total of 564 publications on disaster medicine were identified. There was a mild increase in the number of articles on disaster medicine from 2008 (n=55) to 2016 (n=83). Disaster Medicine and Public Health Preparedness published the most articles, the majority of articles were published in the United States, and the leading institute was Tohoku University. F. Della Corte, M. D. Christian, and P. L. Ingrassia were the top authors on the topic, and the field of public health generated the most publications. Term analysis indicated that emergency medicine, public health, disaster preparedness, natural disasters, medicine, and management were the research hotspots, whereas Hurricane Katrina, mechanical ventilation, occupational medicine, intensive care, and European journals represented the frontiers of disaster medicine research. Overall, our analysis revealed that disaster medicine studies are closely related to other medical fields and provides researchers and policy-makers in this area with new insight into the hotspots and dynamic directions. (Disaster Med Public Health Preparedness. 2018;page 1 of 8).
Benditz, A; Drescher, J; Greimel, F; Zeman, F; Grifka, J; Meißner, W; Völlner, F
2016-12-05
Perioperative pain reduction, particularly during the first two days, is highly important for patients after total knee arthroplasty (TKA). Problems are caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total knee arthroplasty. Outcome parameters were analyzed by means of a questionnaire on the first postoperative day. A multidisciplinary team implemented a regular procedure of data analysis and external benchmarking by participating in a nationwide quality improvement project. At the beginning of the study, our hospital ranked 16th in terms of activity-related pain and 9th in patient satisfaction among 47 anonymized hospitals participating in the benchmarking project. At the end of the study, we had improved to 1st in activity-related pain and to 2nd in patient satisfaction. Although benchmarking started and finished with the same standardized pain management concept, results were initially poor. Besides pharmacological treatment, interdisciplinary teamwork and benchmarking with direct feedback mechanisms are also very important for decreasing postoperative pain and for increasing patient satisfaction after TKA.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation prediction is presented and demonstrated for a commercial code. The examples are based on finite element models of the Mixed-Mode Bending (MMB) specimen. The examples are independent of the analysis software used and allow the assessment of the automated delamination propagation prediction capability in commercial finite element codes based on the virtual crack closure technique (VCCT). First, quasi-static benchmark examples were created for the specimen. Second, starting from an initially straight front, the delamination was allowed to propagate under quasi-static loading. Third, the load-displacement relationship from a propagation analysis was compared with the benchmark results; good agreement could be achieved by selecting input parameters that had previously been determined during analyses of mode I Double Cantilever Beam and mode II End Notched Flexure specimens. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Overall the results are encouraging, but further assessment for mixed-mode delamination fatigue onset and growth is required.
An automated protocol for performance benchmarking a widefield fluorescence microscope.
Halter, Michael; Bier, Elianna; DeRose, Paul C; Cooksey, Gregory A; Choquette, Steven J; Plant, Anne L; Elliott, John T
2014-11-01
Widefield fluorescence microscopy is a highly used tool for visually assessing biological samples and for quantifying cell responses. Despite its widespread use in high content analysis and other imaging applications, few published methods exist for evaluating and benchmarking the analytical performance of a microscope. Easy-to-use benchmarking methods would facilitate the use of fluorescence imaging as a quantitative analytical tool in research applications, and would aid the determination of instrumental method validation for commercial product development applications. We describe and evaluate an automated method to characterize a fluorescence imaging system's performance by benchmarking the detection threshold, saturation, and linear dynamic range to a reference material. The benchmarking procedure is demonstrated using two different materials as the reference material, uranyl-ion-doped glass and Schott 475 GG filter glass. Both are suitable candidate reference materials that are homogeneously fluorescent and highly photostable, and the Schott 475 GG filter glass is currently commercially available. In addition to benchmarking the analytical performance, we also demonstrate that the reference materials provide for accurate day-to-day intensity calibration. Published 2014 Wiley Periodicals Inc. This article is a US government work and, as such, is in the public domain in the United States of America.
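For readers who want to reproduce the spirit of such a benchmark on their own intensity data, a minimal sketch follows; it is not the published protocol, and the exposure series, noise statistics and roll-off heuristic are invented for illustration.

```python
import numpy as np

# Sketch: estimate detection threshold, saturation, and linear dynamic
# range from an exposure-time series measured on a stable fluorescent
# reference material. All numbers below are mock data.

exposure_ms = np.array([1, 2, 5, 10, 20, 50, 100, 200, 500], dtype=float)
signal = np.array([102, 105, 118, 145, 190, 325, 550, 1000, 1030], dtype=float)
dark_mean, dark_sd = 100.0, 1.5        # statistics from dark (no-light) frames

# Detection threshold: first exposure whose signal clears the noise floor.
detection_limit = dark_mean + 3.0 * dark_sd
threshold_ms = exposure_ms[signal > detection_limit][0]

# Saturation: where the incremental response rolls off relative to the
# typical gain in the linear region.
gain = np.diff(signal) / np.diff(exposure_ms)
linear_gain = np.median(gain)
sat_idx = int(np.argmax(gain < 0.5 * linear_gain))    # first roll-off interval

# Linear dynamic range: last linear signal relative to the noise floor.
dynamic_range = (signal[sat_idx] - dark_mean) / (3.0 * dark_sd)

print(f"detection threshold at {threshold_ms} ms exposure")
print(f"saturation beyond {exposure_ms[sat_idx]} ms; dynamic range ~{dynamic_range:.0f}x")
```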
Rotavirus - Global research density equalizing mapping and gender analysis.
Köster, Corinna; Klingelhöfer, Doris; Groneberg, David A; Schwarzer, Mario
2016-01-02
Rotaviruses are the leading cause of dehydration and severe diarrheal disease in infants and young children worldwide. An increasing number of related publications makes it a crucial challenge to determine the relevant scientific output. Therefore, scientometric analyses are helpful to evaluate the quantity as well as the quality of worldwide research activities on rotavirus. Up to now, no in-depth global scientometric analysis of rotavirus publications has been carried out. This study used scientometric tools and the method of density-equalizing mapping to visualize the differences in the worldwide research effort on rotavirus. The aim of the study was to compare scientific output geographically and over time by using an in-depth data analysis and New Quality and Quantity Indices in Science (NewQIS) tools. Furthermore, a gender analysis was part of the data interpretation. We retrieved all rotavirus-related articles published from 1900 to 2013 from the Web of Science by a defined search term. These items were analyzed regarding quantitative and qualitative aspects and visualized with the help of bibliometric methods and the technique of density-equalizing mapping to show the differences in worldwide research efforts. This work aimed to extend the current NewQIS platform. The 5906 rotavirus-associated articles were published in 138 countries from 1900 to 2013. The USA authored 2037 articles, equaling 34.5% of all published items, followed by Japan with 576 articles and the United Kingdom - as the most productive representative of the European countries - with 495 articles. Furthermore, the USA established the most cooperations with other countries and was found to be at the center of an international collaborative network. A gender analysis of authors per country (the threshold was set at a publishing output of more than 100 articles by more than 50 authors whose names could be identified in more than 50% of cases) showed a domination of female scientists in Brazil, while in all other countries male scientists predominated. Relating the number of publications to the population of a country (Q1) and to its GDP (Q2), we found that European and African countries as well as Australia and New Zealand - not the USA - were among the top-ranked nations. Regarding rotavirus-related scientific output, the USA was the overall leading nation when quantitative and qualitative aspects were taken into account. In contrast to these classical scientometric variables, indices such as Q1 and Q2 enable comparability between countries with unequal conditions and scientific infrastructures, helping to differentiate publishing quality and quantity in a more relevant way. It was also deduced that countries with a high rotavirus-associated child mortality, such as the Democratic Republic of the Congo, should be integrated into the collaborative efforts more intensively. Copyright © 2015 Elsevier Ltd. All rights reserved.
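The Q1 and Q2 ratios used above are simple normalizations; a tiny sketch shows the idea, with the population and GDP figures being rough placeholders rather than values from the study.

```python
# Hedged sketch of the NewQIS-style ratios described above: Q1 relates a
# country's publication count to its population, Q2 to its GDP. Only the
# publication counts come from the abstract; population and GDP are
# rough 2013-era approximations used for illustration.
pubs = {"USA": 2037, "Japan": 576, "UK": 495}
population_m = {"USA": 316, "Japan": 127, "UK": 64}      # millions (approx.)
gdp_bn_usd = {"USA": 16800, "Japan": 5150, "UK": 2700}   # billions USD (approx.)

for c in pubs:
    q1 = pubs[c] / population_m[c]      # articles per million inhabitants
    q2 = pubs[c] / gdp_bn_usd[c]        # articles per billion USD of GDP
    print(f"{c}: Q1={q1:.2f} per million, Q2={q2:.3f} per bn USD")
```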
NASA Astrophysics Data System (ADS)
Stern, Luli
2002-11-01
Assessment influences every level of the education system and is one of the most crucial catalysts for reform in science curriculum and instruction. Teachers, administrators, and others who choose, assemble, or develop assessments face the difficulty of judging whether tasks are truly aligned with national or state standards and whether they are effective in revealing what students actually know. Project 2061 of the American Association for the Advancement of Science has developed and field-tested a procedure for analyzing curriculum materials, including their assessments, in terms of how well they are likely to contribute to the attainment of benchmarks and standards. With respect to assessment in curriculum materials, this procedure evaluates whether the assessment has the potential to reveal whether students have attained specific ideas in benchmarks and standards and whether information gained from students' responses can be used to inform subsequent instruction. Using this procedure, Project 2061 has produced a database of analytical reports on nine widely used middle school science curriculum materials. The analysis of assessments included in these materials shows that whereas currently available materials devote significant sections of their instruction to ideas included in national standards documents, students are typically not assessed on these ideas. The analysis results described in the report point to strengths and limitations of these widely used assessments and identify a range of good and poor assessment tasks that can shed light on important characteristics of good assessment.
ERIC Educational Resources Information Center
Ardanuy, Jordi; Urbano, Cristobal; Quintana, Lluis
2009-01-01
Introduction: This paper studies the situation of research on Catalan literature between 1976 and 2003 by carrying out a bibliometric and social network analysis of PhD theses defended in Spain. It has a dual aim: to present interesting results for the discipline and to demonstrate the methodological efficacy of scientometric tools in the…
ERIC Educational Resources Information Center
Xian, Hanjun; Madhavan, Krishna
2013-01-01
Over the past forty years, Howard Barrows' contributions to PBL research have influenced and guided educational research and practice in a diversity of domains. It is necessary to make visible to all PBL scholars what has been accomplished, what is perceived as significant, and what is the scope of applicability for Barrows' groundbreaking…
ERIC Educational Resources Information Center
Bornmann, Lutz
2017-01-01
Impact of science is one of the most important topics in scientometrics. Recent developments show a fundamental change in impact measurements from impact on science to impact on society. Since impact measurement is currently in a state of far-reaching change, this paper describes recent developments and the problems facing this area. For that, the…
Seminal nanotechnology literature: a review.
Kostoff, Ronald N; Koytcheff, Raymond G; Lau, Clifford G Y
2009-11-01
This paper uses complementary text mining techniques to identify and retrieve the high impact (seminal) nanotechnology literature over a span of time. Following a brief scientometric analysis of the seminal articles retrieved, these seminal articles are then used as a basis for a comprehensive literature survey of nanoscience and nanotechnology. The paper ends with a global analysis of the relation of seminal nanotechnology document production to total nanotechnology document production.
Practicing Surgeons Lead in Quality Care, Safety, and Cost Control
Shively, Eugene H.; Heine, Michael J.; Schell, Robert H.; Sharpe, J Neal; Garrison, R Neal; Vallance, Steven R.; DeSimone, Kenneth J.S.; Polk, Hiram C.
2004-01-01
Objective: To report the experiences of 66 surgical specialists from 15 different hospitals who performed 43 CPT-based procedures more than 16,000 times. Summary Background Data: Surgeons are under increasing pressure to demonstrate patient safety data as quantitated by objective and subjective outcomes that meet or exceed the standards of benchmark institutions or databases. Methods: Data from 66 surgical specialists on 43 CPT-based procedures were accessioned over a 4-year period. The hospitals vary from a small 30-bed hospital to large teaching hospitals. All reported deaths and complications were verified from hospital and office records and compared with benchmarks. Results: Over a 4-year inclusive period (1999–2002), 16,028 elective operations were accessioned. There was a total 1.4% complication rate and 0.05% death rate. A system has been developed for tracking outcomes. A wide range of improvements have been identified. These include the following: 1) improved classification of indications for systemic prophylactic antibiotic use and reduction in the variety of drugs used, 2) shortened length of stay for standard procedures in different surgical specialties, 3) adherence to strict indicators for selected operative procedures, 4) less use of costly diagnostic procedures, 5) decreased use of expensive home health services, 6) decreased use of very expensive drugs, 7) identification of the unnecessary expense of disposable laparoscopic devices, 8) development of a method to compare a one-surgeon hospital with his peers, and 9) development of unique protocols for interaction of anesthesia and surgery. The system also provides a very good basis for confirmation of patient safety and improvement therein. Conclusions: Since 1998, Quality Surgical Solutions, PLLC, has developed simple physician-authored protocols for delivering high-quality and cost-effective surgery that measure up to benchmark institutions. We have discovered wide areas for improvements in surgery by adherence to simple protocols, minimizing death and complications and clarifying cost issues. PMID:15166954
Batooli, Zahra; Ravandi, Somaye Nadi; Bidgoli, Mohammad Sabahi
2016-01-01
Introduction It is essential to evaluate the impact of scientific publications through citation analysis in citation indexes. In addition, scientometric measures from social media also should be assessed. These measures include how many times the publications were read, viewed, and downloaded. The present study aimed to assess the scientific output of scholars at Kashan University of Medical Sciences by the end of March 2014 based on scientometric measures from Scopus, ResearchGate, and Mendeley. Methods A survey method was used to study the articles published in Scopus journals by scholars at Kashan University of Medical Sciences by the end of March 2014. The required data were collected from Scopus, ResearchGate, and Mendeley. The data were analyzed with descriptive statistics. In addition, the relationships between the number of views of articles in ResearchGate and their citation counts in Scopus, and between the reading frequency of articles in Mendeley and their citation counts in Scopus, were examined using the Spearman correlation in SPSS 16. Results Five hundred and thirty-three articles were indexed in the Scopus Citation Database by the end of March 2014. Collectively, those articles were cited 1,315 times. The articles were covered by ResearchGate (74%) more than Mendeley (44%). In addition, 98% of the articles indexed in ResearchGate and 92% of the articles indexed in Mendeley were viewed at least once. The results showed that there was a positive correlation between the number of views of the articles in ResearchGate and Mendeley and the number of citations of the articles in Scopus. Conclusion Coverage and the number of visitors were higher in ResearchGate than in Mendeley. An increase in the number of views of articles in ResearchGate and Mendeley was accompanied by an increase in the number of citations of the papers. Social networks, such as ResearchGate and Mendeley, also can be used as tools for the evaluation of academics and scholars based on the scientific research they have conducted. PMID:27054017
Vlassakov, Kamen V; Kissin, Igor
2015-01-01
The aim of this study was to assess progress in the field of anesthesia monitoring over the past 40 years using scientometric analysis. The following scientometric indexes were used: popularity indexes (general and specific), representing the proportion of articles on a topic relative to either all articles in the field of anesthetics (general popularity index, GPI) or all articles in the subfield of anesthesia monitoring (specific popularity index, SPI); the index of change (IC), representing the degree of growth in publications on a topic from one period to the next; and the index of expectations (IE), representing the ratio of the number of articles on a topic in the top 20 journals relative to the number of articles in all (>5,000) biomedical journals covered by PubMed. Publications on 33 anesthesia-monitoring topics were assessed. Our analysis showed that over the past 40 years, the rate of rise in the number of articles on anesthesia monitoring was exponential, with an increase of more than eleven-fold, from 296 articles over the 5-year period 1974–1978 to 3,394 articles for 2009–2013. This rise profoundly exceeded the rate of rise of the number of articles on general anesthetics. The difference was especially evident in the comparison of the related GPIs: stable growth of the GPI for anesthesia monitoring vs constant decline in the GPI for general anesthetics. By the 2009–2013 period, among specific monitoring topics introduced after 1980, the SPI had a meaningful magnitude (≥1.5) in 9 of 24 topics: Bispectral Index (7.8), Transesophageal Echocardiography (4.2), Electromyography (2.8), Pulse Oximetry (2.4), Entropy (2.3), Train-of-four (2.3), Capnography (1.9), Pulse Contour (1.9), and Electrical Nerve Stimulation for neuromuscular monitoring (1.6). Only one of these topics (Pulse Contour) demonstrated (in 2009–2013) high values for both the IC and IE indexes (76 and 16.9, respectively), indicating significant recent progress. We suggest that rapid growth in the field of anesthesia monitoring was one of the most important developments to compensate for the intrinsically low margins of safety of anesthetic agents. PMID:26005336
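The four indexes are defined explicitly enough to compute directly; the sketch below restates them as simple functions (the example counts are placeholders, with the IC and IE values for Pulse Contour quoted above used as reference points).

```python
# Sketch of the popularity and expectation indexes defined above, under
# the stated definitions; all counts passed in are illustrative.
def popularity_index(topic_articles, reference_articles):
    """GPI or SPI: share of topic articles in a reference corpus (in %)."""
    return 100.0 * topic_articles / reference_articles

def index_of_change(articles_now, articles_before):
    """IC: relative growth of a topic between two periods (in %)."""
    return 100.0 * (articles_now - articles_before) / articles_before

def index_of_expectations(top20_articles, all_journal_articles):
    """IE: ratio of top-20-journal articles to articles in all journals (in %)."""
    return 100.0 * top20_articles / all_journal_articles

# Example: a monitoring topic with 150 articles in a 3,394-article subfield.
print(popularity_index(150, 3394))       # SPI ~ 4.4
print(index_of_change(176, 100))         # IC = 76, cf. Pulse Contour above
print(index_of_expectations(169, 1000))  # IE ~ 16.9, cf. Pulse Contour above
```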
León-de la O, Dante Israel; Thorsteinsdóttir, Halla; Calderón-Salinas, José Víctor
2018-01-01
This paper analyzes the patterns of health biotechnology publications in six Latin American countries from 2001 to 2015. The countries studied were Argentina, Brazil, Chile, Colombia, Cuba and Mexico. Before our study, there were no data available on HBT development in half of the Latin-American countries we studied, i.e., Argentina, Colombia and Chile. To include these countries in a scientometric analysis of HBT provides fuller coverage of HBT development in Latin America. The scientometric study used the Web of Science database to identify health biotechnology publications. The total amount of health biotechnology production in the world during the period studied was about 400,000 papers. A total of 1.2% of these papers, were authored by the six Latin American countries in this study. The results show a significant growth in health biotechnology publications in Latin America despite some of the countries having social and political instability, fluctuations in their gross domestic expenditure in research and development or a trade embargo that limits opportunities for scientific development. The growth in the field of some of the Latin American countries studied was larger than the growth of most industrialized nations. Still, the visibility of the Latin American research (measured in the number of citations) did not reach the world average, with the exception of Colombia. The main producers of health biotechnology papers in Latin America were universities, except in Cuba were governmental institutions were the most frequent producers. The countries studied were active in international research collaboration with Colombia being the most active (64% of papers co-authored internationally), whereas Brazil was the least active (35% of papers). Still, the domestic collaboration was even more prevalent, with Chile being the most active in such collaboration (85% of papers co-authored domestically) and Argentina the least active (49% of papers). We conclude that the Latin American countries studied are increasing their health biotechnology publishing. This strategy could contribute to the development of innovations that may solve local health problems in the region.
Performance benchmark of LHCb code on state-of-the-art x86 architectures
NASA Astrophysics Data System (ADS)
Campora Perez, D. H.; Neufeld, N.; Schwemmer, R.
2015-12-01
For Run 2 of the LHC, LHCb is replacing a significant part of its event filter farm with new compute nodes. For the evaluation of the best performing solution, we have developed a method to convert our high level trigger application into a stand-alone, bootable benchmark image. With additional instrumentation we turned it into a self-optimising benchmark which explores techniques such as late forking, NUMA balancing and optimal number of threads, i.e. it automatically optimises box-level performance. We have run this procedure on a wide range of Haswell-E CPUs and numerous other architectures from both Intel and AMD, including also the latest Intel micro-blade servers. We present results in terms of performance, power consumption, overheads and relative cost.
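A self-contained toy version of such a box-level sweep is sketched below; the per-event workload is a stand-in for the trigger application, not the LHCb code, and the self-optimisation is reduced to choosing the best worker count.

```python
import multiprocessing as mp
import time

# Toy sketch of a box-level throughput sweep: run a CPU-bound workload
# at several worker counts and keep the best-performing configuration.

def process_event(seed: int) -> int:
    # Stand-in for per-event trigger work (a simple LCG iteration).
    x = seed
    for _ in range(20_000):
        x = (x * 1103515245 + 12345) % 2**31
    return x

def throughput(workers: int, events: int = 2_000) -> float:
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(process_event, range(events))
    return events / (time.perf_counter() - start)   # events per second

if __name__ == "__main__":
    results = {n: throughput(n) for n in (1, 2, 4, 8)}
    best = max(results, key=results.get)
    print(results, "-> best worker count:", best)
```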
Benchmarking Brain-Computer Interfaces Outside the Laboratory: The Cybathlon 2016
Novak, Domen; Sigrist, Roland; Gerig, Nicolas J.; Wyss, Dario; Bauer, René; Götz, Ulrich; Riener, Robert
2018-01-01
This paper presents a new approach to benchmarking brain-computer interfaces (BCIs) outside the lab. A computer game was created that mimics a real-world application of assistive BCIs, with the main outcome metric being the time needed to complete the game. This approach was used at the Cybathlon 2016, a competition for people with disabilities who use assistive technology to achieve tasks. The paper summarizes the technical challenges of BCIs, describes the design of the benchmarking game, then describes the rules for acceptable hardware, software and inclusion of human pilots in the BCI competition at the Cybathlon. The 11 participating teams, their approaches, and their results at the Cybathlon are presented. Though the benchmarking procedure has some limitations (for instance, we were unable to identify any factors that clearly contribute to BCI performance), it can be successfully used to analyze BCI performance in realistic, less structured conditions. In the future, the parameters of the benchmarking game could be modified to better mimic different applications (e.g., the need to use some commands more frequently than others). Furthermore, the Cybathlon has the potential to showcase such devices to the general public. PMID:29375294
NASA Technical Reports Server (NTRS)
Padovan, J.; Adams, M.; Fertis, J.; Zeid, I.; Lam, P.
1982-01-01
Finite element codes are used in modelling the rotor-bearing-stator structure common to the turbine industry. Strategies are developed that enable engine dynamic simulation with available finite element codes. The elements developed are benchmarked by incorporation into a general-purpose code (ADINA), and the numerical characteristics of finite element rotor-bearing-stator simulations are evaluated through the use of various types of explicit/implicit numerical integration operators. The overall numerical efficiency of the procedure is also improved.
Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric
2010-08-01
We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Compute Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html
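The cost comparison reduces to simple arithmetic; a sketch with entirely invented prices and job sizes illustrates the kind of break-even estimate involved.

```python
# Back-of-the-envelope sketch of the cloud vs in-house comparison made
# above; all prices and job sizes are invented placeholders.
hits_per_month = 10_000
cpu_hours_per_hit = 0.05          # assumed cost of one alignment job
ec2_usd_per_cpu_hour = 0.10       # assumed on-demand rate
server_usd_per_month = 400.0      # assumed amortized in-house server cost

cloud_cost = hits_per_month * cpu_hours_per_hit * ec2_usd_per_cpu_hour
print(f"cloud: ${cloud_cost:.0f}/month vs in-house: ${server_usd_per_month:.0f}/month")
```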
Sayers, Adrian; Crowther, Michael J; Judge, Andrew; Whitehouse, Michael R; Blom, Ashley W
2017-08-28
The use of benchmarks to assess the performance of implants such as those used in arthroplasty surgery is a widespread practice. It provides surgeons, patients and regulatory authorities with the reassurance that implants used are safe and effective. However, it is not currently clear how, or with how many implants, an implant should be statistically compared with a benchmark to assess whether it is superior, equivalent, non-inferior or inferior to the performance benchmark of interest. We aim to describe the methods and sample size required to conduct a one-sample non-inferiority study of a medical device for the purposes of benchmarking. We conducted a simulation study of a national register of medical devices. We simulated data, with and without a non-informative competing risk, to represent an arthroplasty population and describe three methods of analysis (z-test, 1-Kaplan-Meier and competing risks) commonly used in surgical research. We evaluated the performance of each method using power, bias, root-mean-square error, coverage and CI width. 1-Kaplan-Meier provides an unbiased estimate of implant net failure, which can be used to assess whether a surgical device is non-inferior to an external benchmark. Small non-inferiority margins require significantly more individuals to be at risk compared with current benchmarking standards. A non-inferiority testing paradigm provides a useful framework for determining whether an implant meets the required performance defined by an external benchmark. Current contemporary benchmarking standards have limited power to detect non-inferiority, and substantially larger sample sizes, in excess of 3200 procedures, are required to achieve a power greater than 60%. It is clear that when benchmarking implant performance, net failure estimated using 1-KM is preferable to crude failure estimated by competing risk models. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
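A minimal sketch of the 1-Kaplan-Meier check described above follows, using simulated data and a Greenwood standard error; it is an illustration of the idea, not the authors' simulation code, and the benchmark and margin values are assumptions chosen for the example.

```python
import numpy as np

# Sketch: estimate net failure at a time horizon with 1-KM, then compare
# the upper confidence bound against benchmark * non-inferiority margin.
rng = np.random.default_rng(1)
n = 3200
event_time = rng.exponential(200.0, n)      # revision times (years)
censor_time = rng.uniform(0.0, 15.0, n)     # administrative censoring
obs_time = np.minimum(event_time, censor_time)
event = event_time <= censor_time

def km_failure(t_obs, d, horizon):
    """1 - KM survival at `horizon`, with a Greenwood standard error."""
    order = np.argsort(t_obs)
    t, d = t_obs[order], d[order]
    at_risk, surv, var_sum = len(t), 1.0, 0.0
    for i in range(len(t)):
        if t[i] > horizon:
            break
        if d[i]:
            surv *= 1.0 - 1.0 / at_risk
            var_sum += 1.0 / (at_risk * (at_risk - 1.0))
        at_risk -= 1
    return 1.0 - surv, surv * np.sqrt(var_sum)   # failure, Greenwood SE

fail10, se = km_failure(obs_time, event, horizon=10.0)
upper = fail10 + 1.96 * se
benchmark, margin = 0.05, 1.2                    # assumed 5% benchmark, 20% margin
print(f"net failure at 10 years: {fail10:.3f} (upper CI {upper:.3f}); "
      f"non-inferior: {upper < benchmark * margin}")
```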
NASA Astrophysics Data System (ADS)
Taylor, Mark S.; Ivanic, Sandra A.; Wood, Geoffrey P. F.; Easton, Christopher J.; Bacskay, George B.; Radom, Leo
2009-07-01
A high-level quantum chemistry investigation has been carried out for the abstraction by chlorine atom of hydrogen from methane and five monosubstituted methanes, chosen to reflect the chemical functionalities contained in amino acids and peptides. A modified W1' procedure is used to calculate benchmark barriers and reaction energies for the six reactions. The reactions demonstrate a broad range of barrier heights and reaction energies, which can be rationalized using curve-crossing and molecular orbital theory models. In addition, the performance of a range of computationally less demanding electronic structure methods is assessed for calculating the energy profiles for the six reactions. It is found that the G3X(MP2)-RAD procedure compares best with the W1' benchmark, demonstrating a mean absolute deviation (MAD) from W1' of 2.1 kJ mol^-1. The more economical RMP2/G3XLarge and UB2-PLYP/G3XLarge methods are also shown to perform well, with MADs from W1' of 2.9 and 3.0 kJ mol^-1, respectively.
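The benchmarking statistic here is just the mean absolute deviation from the W1' values; a sketch with invented energies shows the computation.

```python
import numpy as np

# Sketch of the benchmark comparison: the mean absolute deviation (MAD)
# of a cheaper method's barriers/energies from the W1' benchmark values.
# All numbers below are invented placeholders, not the paper's data.
w1_prime = np.array([ 7.1, 12.4,  3.8, 25.0, 16.2,  9.5])   # kJ/mol
cheaper  = np.array([ 9.3, 10.2,  5.8, 23.0, 18.3,  7.4])   # kJ/mol

mad = np.mean(np.abs(cheaper - w1_prime))
print(f"MAD from W1': {mad:.1f} kJ/mol")   # 2.1, cf. G3X(MP2)-RAD above
```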
Schumann, Marcel; Armen, Roger S
2013-05-30
Molecular docking of small molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge because of the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than that of nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
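The quoted screening metric is the ROC AUC over ranked docking scores; the sketch below computes it from simulated active and decoy scores (the score distributions are invented, not DUD data).

```python
import numpy as np

# Sketch of the virtual-screening metric quoted above: ROC AUC, i.e. the
# probability that a random active outranks a random decoy. Lower
# docking scores are treated as better.
rng = np.random.default_rng(0)
actives = rng.normal(-8.0, 1.5, 50)    # mock docking scores for actives
decoys  = rng.normal(-6.5, 1.5, 950)   # mock docking scores for decoys

def roc_auc(actives, decoys):
    """Rank-based AUC: fraction of active/decoy pairs ranked correctly."""
    wins = (actives[:, None] < decoys[None, :]).sum()
    ties = (actives[:, None] == decoys[None, :]).sum()
    return (wins + 0.5 * ties) / (actives.size * decoys.size)

print(f"ROC AUC: {roc_auc(actives, decoys):.3f}")   # 0.5 would be random
```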
Cuschieri, Joseph; Johnson, Jeffrey L.; Sperry, Jason; West, Michael A.; Moore, Ernest E.; Minei, Joseph P.; Bankey, Paul E.; Nathens, Avery B.; Cuenca, Alex G.; Efron, Philip A.; Hennessy, Laura; Xiao, Wenzhong; Mindrinos, Michael N.; McDonald-Smith, Grace P.; Mason, Philip H.; Billiar, Timothy R.; Schoenfeld, David A.; Warren, H. Shaw; Cobb, J. Perren; Moldawer, Lyle L.; Davis, Ronald W.; Maier, Ronald V.; Tompkins, Ronald G.
2012-01-01
Objective To determine and compare outcomes with accepted benchmarks in trauma care at seven academic Level I trauma centers in which patients were treated based on a series of standard operating procedures (SOPs). Background Injury remains the leading cause of death for those under 45 years of age. We describe the baseline patient characteristics and well-defined outcomes of persons hospitalized in the United States for severe blunt trauma. Methods We followed 1,637 trauma patients from 2003–2009 up to 28 hospital days using SOPs developed at the onset of the study. An extensive database on patient and injury characteristics, clinical treatment, and outcomes was created. These data were compared with existing trauma benchmarks. Results The study patients were critically injured and in shock. SOP compliance improved 10–40% during the study period. Multiple organ failure and mortality rates were 34.8% and 16.7% respectively. Time to recovery, defined as the time until the patient was free of organ failure for at least two consecutive days, was developed as a new outcome measure. There was a reduction in mortality rate in the cohort during the study that cannot be explained by changes in the patient population. Conclusions This study provides the current benchmark and the overall positive effect of implementing SOPs for severely injured patients. Over the course of the study, there were improvements in morbidity and mortality and increasing compliance with SOPs. Mortality was surprisingly low, given the degree of injury, and improved over the duration of the study, which correlated with improved SOP compliance. PMID:22470077
Applied Actant-Network Theory: Toward the Automated Detection of Technoscientific Emergence from Full-Text Publications and Patents
Brock, David C.; Babko-Malaya, Olga; Pustejovsky, James; Thomas, Patrick
2012-11-02
Ghojazadeh, Morteza; Naghavi-Behzad, Mohammad; Nasrolah-Zadeh, Raheleh; Bayat-Khajeh, Parvaneh; Piri, Reza; Mirnia, Keyvan; Azami-Aghdash, Saber
2014-01-01
Scientometrics is a useful method for the management of financial and human resources and has been applied many times in the medical sciences during recent years. The aim of this study was to investigate the status of science production by Iranian scientists in the gastric cancer field based on the Medline database. In this descriptive cross-sectional study, Iranian science production concerning gastric cancer during 2000-2011 was investigated based on Medline. After two stages of searching, 121 articles were found; we then reviewed publication date, author names, journal title, impact factor (IF), and the cooperation coefficient between researchers. SPSS 19 was used for statistical analysis. There was a significant increase in published articles about gastric cancer by Iranian researchers in the Medline database during 2006-2011. The mean cooperation coefficient between researchers was 6.14±3.29 persons per article. Articles in this field were published in 19 countries and 56 journals. Journals based in Thailand, England, and America had the most published Iranian articles. Tehran University of Medical Sciences and Mohammadreza Zali had the most outstanding roles in publishing scientific articles. According to the results of this study, improving the cooperation of researchers in conducting research and scientometric studies in other fields may have an important role in increasing both the quality and quantity of published studies.
Liu, Shuyan; Oakland, Thomas
2016-03-01
The objective of this current study is to identify the growth and development of scholarly literature that specifically references the term 'school psychology' in the Science Citation Index from 1907 through 2014. Documents from Web of Science were accessed and analyzed through the use of scientometric analyses, including HistCite and Pajek software, resulting in the identification of 4,806 scholars who contributed 3,260 articles in 311 journals. Whereas the database included journals from around the world, most articles were published by authors in the United States and in 20 journals, including the Journal of School Psychology, Psychology in the Schools, School Psychology Review, School Psychology International, and School Psychology Quarterly. Analyses of the database from the past century revealed that 20 of the most prolific scholars contributed 14% of all articles. Contributions from faculty and students at University of Minnesota-Twin Cities, University of Nebraska-Lincoln, University of South Carolina, University of Wisconsin-Madison, and University of Texas-Austin represented 10% of all articles including the term school psychology in the Science Citation Index. Relationships among some of the most highly cited articles are also described. Collectively, the series of analyses reported herein contribute to our understanding of scholarship in school psychology. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Scientometric Analysis and Mapping of Scientific Articles on Diabetic Retinopathy.
Ramin, Shahrokh; Gharebaghi, Reza; Heidary, Fatemeh
2015-01-01
Diabetic retinopathy (DR) is the major cause of blindness among the working-age population globally. No systematic research has previously been performed to analyze the research published on DR, despite the need for it. This study aimed to analyze the scientific production on DR to draw an overall roadmap for future research strategic planning in this field. A bibliometric method was used to obtain a view of the scientific production on DR from data extracted from the Institute for Scientific Information (ISI). Articles about DR published in 1993-2013 were analyzed to obtain a view of the topic's structure and history, and to document relationships. The trends in the most influential publications and authors were analyzed. Most highly cited articles addressed epidemiologic and translational research topics in this field. During the past 3 years, there has been a trend toward biomarker discovery and more molecular translational research. Areas such as gene therapy and micro-RNAs are also among the recent hot topics. Through analyzing the characteristics of papers and the trends in scientific production, we performed the first scientometric report on DR. The most influential articles have addressed epidemiology and translational research subjects in this field, which reflects that, globally, the earlier diagnosis and treatment of this devastating disease still has the highest priority.
Is autoimmunology a discipline of its own? A big data-based bibliometric and scientometric analyses.
Watad, Abdulla; Bragazzi, Nicola Luigi; Adawi, Mohammad; Amital, Howard; Kivity, Shaye; Mahroum, Naim; Blank, Miri; Shoenfeld, Yehuda
2017-06-01
Autoimmunology is a super-specialty of immunology specifically dealing with autoimmune disorders. To assess the extant literature concerning autoimmune disorders, bibliometric and scientometric analyses (namely, analyses of research topic/keyword co-occurrence, journal co-citation, citations, scientific output trends - both crude and normalized - author networks, leading authors, countries, and organizations) were carried out using open-source software, namely, VOSviewer and SciCurve. A corpus of 169,519 articles containing the keyword "autoimmunity" was utilized, selecting PubMed/MEDLINE as the bibliographic thesaurus. Journals specifically devoted to autoimmune disorders numbered six and covered approximately 4.15% of the entire scientific production. Compared with the whole corpus (from 1946 on), these specialized journals were established relatively few decades ago. Top countries were the United States, Japan, Germany, the United Kingdom, Italy, China, France, Canada, Australia, and Israel. Trending topics are the role of microRNAs (miRNAs) in the etiopathogenesis of autoimmune disorders, the contributions of genetics and epigenetic modifications, the role of vitamins, management during pregnancy, and the impact of gender. New subsets of immune cells have been extensively investigated, with a focus on interleukin production and release and on Th17 cells. Autoimmunology is emerging as a new discipline within immunology, with its own bibliometric properties, an identified scientific community, and specifically devoted journals.
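The keyword co-occurrence counting behind tools such as VOSviewer can be illustrated in a few lines; the records below are invented toy examples.

```python
from itertools import combinations
from collections import Counter

# Toy sketch of keyword co-occurrence analysis: count how often pairs of
# keywords appear together on the same article.
records = [
    {"autoimmunity", "miRNA", "epigenetics"},
    {"autoimmunity", "Th17", "interleukin"},
    {"autoimmunity", "miRNA", "Th17"},
]

cooc = Counter()
for keywords in records:
    for pair in combinations(sorted(keywords), 2):
        cooc[pair] += 1

for pair, count in cooc.most_common(3):
    print(pair, count)
```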
Brüggmann, Dörthe; Köster, Corinna; Klingelhöfer, Doris; Bauer, Jan; Ohlendorf, Daniela; Bundschuh, Matthias; Groneberg, David A
2017-01-01
Objective Worldwide, the respiratory syncytial virus (RSV) represents the predominant viral agent causing bronchiolitis and pneumonia in children. To conduct research and tackle existing healthcare disparities, RSV-related research activities around the globe need to be described. Hence, we assessed the associated scientific output (represented by research articles) by geographical, chronological and socioeconomic criteria and analysed the authors publishing in the field by gender. Also, the 15 most cited articles and the most prolific journals were identified for RSV research. Design Retrospective, descriptive study. Setting The NewQIS (New Quality and Quantity Indices in Science) platform was employed to identify RSV-related articles published in the Web of Science until 2013. We performed a numerical analysis of all articles and examined citation-based aspects (eg, citation rates); results were visualised by density-equalising mapping tools. Results We identified 4600 RSV-related articles. The USA led the field; US-American authors published 2139 articles (46.5% of all identified articles), which have been cited 83,000 times. When output was related to socioeconomic benchmarks such as gross domestic product or Research and Development expenditures, Guinea-Bissau, The Gambia and Chile were ranked in leading positions. A total of 614 articles on RSV (13.34% of all articles) were attributed to scientific collaborations. These were primarily established between high-income countries. The gender analysis indicated that male scientists dominated in all countries except Brazil. Conclusions The majority of RSV-related research articles originated from high-income countries, whereas developing nations showed only minimal publication productivity and were barely part of any collaborative networks. Hence, research capacity in these nations should be increased in order to assist in addressing inequities in resource allocation and the clinical burden of RSV in these countries. PMID:28751483
Turbofan forced mixer-nozzle internal flowfield. Volume 1: A benchmark experimental study
NASA Technical Reports Server (NTRS)
Paterson, R. W.
1982-01-01
An experimental investigation of the flow field within a model turbofan forced mixer nozzle is described. Velocity and thermodynamic state variable data are provided for use in assessing the accuracy of, and assisting the further development of, computational procedures for predicting the flow field within mixer nozzles. Velocity and temperature data suggested that the nozzle mixing process was dominated by circulations (secondary flows) of a length scale on the order of the lobe dimensions, which were associated with strong radial velocities observed near the lobe exit plane. The 'benchmark' model mixer experiment conducted for code assessment purposes is discussed.
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Development of Benchmark Examples for Delamination Onset and Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
An approach for assessing the delamination propagation and growth capabilities in commercial finite element codes was developed and demonstrated for the Virtual Crack Closure Technique (VCCT) implementation in ABAQUS. The Double Cantilever Beam (DCB) specimen was chosen as an example. First, benchmark results to assess delamination propagation capabilities under static loading were created using models simulating specimens with different delamination lengths. For each delamination length modeled, the load and displacement at the load point were monitored. The mixed-mode strain energy release rate components were calculated along the delamination front across the width of the specimen. A failure index was calculated by correlating the results with the mixed-mode failure criterion of the graphite/epoxy material. The calculated critical loads and critical displacements for delamination onset for each delamination length modeled were used as a benchmark; the load/displacement relationship computed during automatic propagation should closely match the benchmark case. Second, starting from an initially straight front, the delamination was allowed to propagate based on the algorithms implemented in the commercial finite element software. The load-displacement relationship obtained from the propagation analysis was compared with the benchmark results. Good agreement was achieved by selecting appropriate input parameters, which were determined in an iterative procedure.
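The failure-index step lends itself to a small worked example. The sketch below evaluates a Benzeggagh-Kenane (B-K) mixed-mode criterion, a common choice for graphite/epoxy laminates, although the abstract does not name the specific criterion used; all material constants and energy release rates are illustrative, not values from the paper.

    def bk_toughness(g2, g_total, g1c=0.17, g2c=0.55, eta=2.1):
        # Benzeggagh-Kenane mixed-mode fracture toughness (kJ/m^2).
        # Material constants here are illustrative graphite/epoxy values.
        return g1c + (g2c - g1c) * (g2 / g_total) ** eta

    def failure_index(g1, g2):
        # Failure index = total energy release rate / mixed-mode toughness.
        # Delamination onset is predicted when the index reaches 1.0.
        g_total = g1 + g2
        return g_total / bk_toughness(g2, g_total)

    # Example: mode-I dominated loading at one point on a delamination front.
    print(f"failure index = {failure_index(g1=0.12, g2=0.03):.2f}")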
HEURISTIC OPTIMIZATION AND ALGORITHM TUNING APPLIED TO SORPTIVE BARRIER DESIGN
While heuristic optimization is applied in environmental applications, ad-hoc algorithm configuration is typical. We use a multi-layer sorptive barrier design problem as a benchmark for an algorithm-tuning procedure, as applied to three heuristics (genetic algorithms, simulated ...
Kaufmann-Kolle, Petra; Szecsenyi, Joachim; Broge, Björn; Haefeli, Walter Emil; Schneider, Antonius
2011-01-01
The purpose of this cluster-randomised controlled trial was to evaluate the efficacy of quality circles (QCs) working either with general data-based feedback or with an open benchmark within the field of asthma care and drug-drug interactions. Twelve QCs, involving 96 general practitioners from 85 practices, were randomised. Six QCs worked with traditional anonymous feedback and six with an open benchmark. Two QC meetings supported with feedback reports were held, covering the topics "drug-drug interactions" and "asthma"; in both cases discussions were guided by a trained moderator. Outcome measures included health-related quality of life and patient satisfaction with treatment, asthma severity and number of potentially inappropriate drug combinations, as well as the general practitioners' satisfaction with the performance of the QC. A significant improvement in the treatment of asthma was observed in both trial arms. However, there was only a slight improvement regarding inappropriate drug combinations. There were no relevant differences between the groups with open benchmark (B-QC) and traditional quality circles (T-QC). The physicians' satisfaction with the QC performance was significantly higher in the T-QCs. General practitioners appear to take a critical view of open benchmarking in quality circles. Caution should be used when implementing benchmarking in a quality circle, as it did not improve healthcare when compared to the traditional procedure with anonymised comparisons. Copyright © 2011. Published by Elsevier GmbH.
ERIC Educational Resources Information Center
Mohammadi, Aeen; Asadzandi, Shadi; Malgard, Shiva
2017-01-01
Partnership is one of the mechanisms of scientific development, and scientific collaboration or co-authorship is considered a key element in the progress of science. This study is a survey with a scientometric approach focusing on the field of e-learning products over 10 years. In an Advanced Search of the Web of Science, the following search…
Benchmarking study of corporate research management and planning practices
NASA Astrophysics Data System (ADS)
McIrvine, Edward C.
1992-05-01
During 1983-84, Xerox Corporation was undergoing a change in corporate style through a process of training and altered behavior known as Leadership Through Quality. One tenet of Leadership Through Quality was benchmarking, a procedure whereby all units of the corporation were asked to compare their operation with the outside world. As a part of the first wave of benchmark studies, the Xerox Corporate Research Group studied the processes of research management, technology transfer, and research planning in twelve American and Japanese companies. The approach taken was to separate 'research yield' and 'research productivity' (as defined by Richard Foster) and to seek information about how these companies sought to achieve high-quality results in these two parameters. The most significant findings include the influence of company culture, two different possible research missions (an innovation resource and an information resource), and the importance of systematic personal interaction between sources and targets of technology transfer.
Space network scheduling benchmark: A proof-of-concept process for technology transfer
NASA Technical Reports Server (NTRS)
Moe, Karen; Happell, Nadine; Hayden, B. J.; Barclay, Cathy
1993-01-01
This paper describes a detailed proof-of-concept activity to evaluate flexible scheduling technology as implemented in the Request Oriented Scheduling Engine (ROSE) and applied to Space Network (SN) scheduling. The criteria developed for an operational evaluation of a reusable scheduling system are addressed, including a methodology to prove that the proposed system performs at least as well as the current system in function and performance. The improvement offered by the new technology must be demonstrated and evaluated against the cost of making changes. Finally, there is a need to show significant improvement in SN operational procedures. Successful completion of a proof-of-concept would eventually lead to an operational concept and implementation transition plan, which is outside the scope of this paper. However, a high-fidelity benchmark using actual SN scheduling requests has been designed to test the ROSE scheduling tool. The benchmark evaluation methodology, scheduling data, and preliminary results are described.
The future of simulation technologies for complex cardiovascular procedures.
Cates, Christopher U; Gallagher, Anthony G
2012-09-01
Changing work practices and the evolution of more complex interventions in cardiovascular medicine are forcing a paradigm shift in the way doctors are trained. Implantable cardioverter defibrillator (ICD), transcatheter aortic valve implantation (TAVI), carotid artery stenting (CAS), and acute stroke intervention procedures are forcing these changes at a faster pace than in other disciplines. As a consequence, cardiovascular medicine has had to develop a sophisticated understanding of precisely what is meant by 'training' and 'skill'. An evolving conclusion is that procedure training on a virtual reality (VR) simulator presents a viable current solution. These simulations should capture the important performance characteristics of procedural skill, with metrics derived from, and benchmarked to, experienced operators (i.e. a proficiency level). Simulation training is optimal with metric-based feedback, particularly formative trainee error assessments, proximate to their performance. In prospective, randomized studies, learners who trained to a benchmarked proficiency level on the simulator performed significantly better than learners who were traditionally trained. In addition, cardiovascular medicine now has available the most sophisticated virtual reality simulators in medicine, and these have been used for the roll-out of interventions such as CAS in the USA and globally, with cardiovascular society and industry partnered training programmes. The Food and Drug Administration has advocated the use of VR simulation as part of the approval of new devices, and the American Board of Internal Medicine has adopted simulation as part of its maintenance of certification. Simulation is rapidly becoming a mainstay of cardiovascular education, training, certification, and the safe adoption of new technology. If cardiovascular medicine is to continue to lead in the adoption and integration of simulation, then it must take a proactive position in the development of metric-based simulation curricula, adoption of proficiency benchmarking definitions, and a resolve to commit resources so as to continue to lead this revolution in physician training.
Hand washing frequencies and procedures used in retail food services.
Strohbehn, Catherine; Sneed, Jeannie; Paez, Paola; Meyer, Janell
2008-08-01
Transmission of viruses, bacteria, and parasites to food by way of improperly washed hands is a major contributing factor in the spread of foodborne illnesses. Field observers have assessed compliance with hand washing regulations, yet few studies have included consideration of frequency and methods used by sectors of the food service industry or have included benchmarks for hand washing. Five 3-h observation periods of employee (n = 80) hand washing behaviors during menu production, service, and cleaning were conducted in 16 food service operations for a total of 240 h of direct observation. Four operations from each of four sectors of the retail food service industry participated in the study: assisted living for the elderly, childcare, restaurants, and schools. A validated observation form, based on 2005 Food Code guidelines, was used by two trained researchers. Researchers noted when hands should have been washed, when hands were washed, and how hands were washed. Overall compliance with Food Code recommendations for frequency during production, service, and cleaning phases ranged from 5% in restaurants to 33% in assisted living facilities. Procedural compliance rates also were low. Proposed benchmarks for the number of times hand washing should occur by each employee for each sector of food service during each phase of operation are seven times per hour for assisted living, nine times per hour for childcare, 29 times per hour for restaurants, and 11 times per hour for schools. These benchmarks are high, especially for restaurant employees. Implementation would mean lost productivity and potential for dermatitis; thus, active managerial control over work assignments is needed. These benchmarks can be used for training and to guide employee hand washing behaviors.
Benchmarking in pathology: development of an activity-based costing model.
Burnett, Leslie; Wilson, Roger; Pfeffer, Sally; Lowry, John
2012-12-01
Benchmarking in Pathology (BiP) allows pathology laboratories to determine the unit cost of all laboratory tests and procedures, and also provides organisational productivity indices allowing comparisons of performance with other BiP participants. We describe 14 years of progressive enhancement to a BiP program, including the implementation of 'avoidable costs' as the accounting basis for allocation of costs, rather than previous approaches using 'total costs'. A hierarchical tree-structured activity-based costing model distributes 'avoidable costs' attributable to the pathology activities component of a pathology laboratory operation. The hierarchical tree model permits costs to be allocated across multiple laboratory sites and organisational structures. This has enabled benchmarking on a number of levels, including test profiles and non-testing related workload activities. Methods for dealing with variable cost inputs, allocation of indirect costs using imputation techniques, panels of tests, and blood-bank record keeping have been successfully integrated into the costing model. A variety of laboratory management reports are produced, including the 'cost per test' of each pathology 'test' output. Benchmarking comparisons may be undertaken at the 'cost per test' and 'cost per Benchmarking Complexity Unit' levels, the 'discipline/department' (sub-specialty) level, or the overall laboratory/site and organisational levels. We have completed development of a national BiP program. An activity-based costing methodology based on avoidable costs overcomes many problems of previous benchmarking studies based on total costs. The use of benchmarking complexity adjustment permits correction for varying test mix and diagnostic complexity between laboratories. The use of iterative communication strategies with program participants can overcome many obstacles and lead to innovations.
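As a sketch of how a hierarchical tree-structured activity-based costing model can work, the following Python code distributes a pool of avoidable costs down a two-level tree in proportion to allocation weights and derives a cost per test. The node names, weights, volumes and dollar amounts are hypothetical; this is not the BiP implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        # One level of the costing hierarchy (site, department, or test).
        name: str
        weight: float = 1.0          # allocation driver, e.g. workload share
        children: list = field(default_factory=list)

    def allocate(node, avoidable_cost, out):
        # Distribute a pool of avoidable costs down the tree in
        # proportion to each child's allocation weight.
        if not node.children:
            out[node.name] = out.get(node.name, 0.0) + avoidable_cost
            return
        total_w = sum(c.weight for c in node.children)
        for c in node.children:
            allocate(c, avoidable_cost * c.weight / total_w, out)

    # Hypothetical two-site laboratory with three test outputs.
    lab = Node("lab", children=[
        Node("site_A", 2.0, [Node("FBC", 3.0), Node("TSH", 1.0)]),
        Node("site_B", 1.0, [Node("HbA1c", 1.0)]),
    ])
    costs = {}
    allocate(lab, 900_000.0, costs)
    volumes = {"FBC": 120_000, "TSH": 30_000, "HbA1c": 25_000}
    for test, cost in costs.items():
        print(f"{test}: cost per test = {cost / volumes[test]:.2f}")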
Benchmarking a geostatistical procedure for the homogenisation of annual precipitation series
NASA Astrophysics Data System (ADS)
Caineta, Júlio; Ribeiro, Sara; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina
2014-05-01
The European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), has brought to attention the importance of establishing reliable homogenisation methods for climate data. To achieve this, a benchmark data set, containing monthly and daily temperature and precipitation data, was created to serve as a basis for comparing the effectiveness of those methods. Several contributions were submitted and evaluated by a number of performance metrics, validating the results against realistic inhomogeneous data. HOME also led to the development of new homogenisation software packages, which incorporated feedback and lessons learned during the project. Preliminary studies have suggested a geostatistical stochastic approach, which uses Direct Sequential Simulation (DSS), as a promising methodology for the homogenisation of precipitation data series. Based on the spatial and temporal correlation between neighbouring stations, DSS calculates local probability density functions at a candidate station to detect inhomogeneities. The purpose of the current study is to test and compare this geostatistical approach with the methods previously presented in the HOME project, using surrogate precipitation series from the HOME benchmark data set. The benchmark data set contains monthly precipitation surrogate series, from which annual precipitation data series were derived. These annual precipitation series were subjected to exploratory analysis and to a thorough variography study. The geostatistical approach was then applied to the data set, based on different scenarios for the spatial continuity. Implementing this procedure also promoted the development of a computer program that aims to assist in the homogenisation of climate data while minimising user interaction. Finally, in order to compare the effectiveness of this methodology with the homogenisation methods submitted during the HOME project, the obtained results were evaluated using the same performance metrics. This comparison opens new perspectives for the development of an innovative procedure based on the geostatistical stochastic approach. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
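A minimal stand-in for the inhomogeneity-detection idea can be sketched as follows: estimate each year's expected value and spread at the candidate station from neighbouring series and flag years falling outside the resulting interval. The real method simulates full local probability density functions with DSS; here simple neighbour means and standard deviations on synthetic data are used purely for illustration.

    import numpy as np

    def flag_inhomogeneities(candidate, neighbours, z=2.0):
        # Flag years where the candidate's annual precipitation falls
        # outside a z-sigma interval of the neighbour-based estimate.
        # A crude stand-in for the local pdfs that DSS would simulate.
        est = neighbours.mean(axis=0)            # expected value per year
        spread = neighbours.std(axis=0, ddof=1)  # local uncertainty per year
        return np.abs(candidate - est) > z * spread

    rng = np.random.default_rng(42)
    years, n_neigh = 50, 6
    base = rng.normal(800.0, 120.0, size=years)          # regional signal, mm
    neighbours = base + rng.normal(0.0, 40.0, (n_neigh, years))
    candidate = base + rng.normal(0.0, 40.0, years)
    candidate[30:] += 150.0                              # artificial break
    print("suspect years:", np.nonzero(flag_inhomogeneities(candidate, neighbours))[0])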
ERIC Educational Resources Information Center
Thompson, Jane
2004-01-01
A conference event is mediated through keynote speeches, power point presentations, professional role-playing and the turgid language of policy agendas, initiatives, benchmarks and outputs. Serious human concerns rarely surface in the orchestrated and anodyne arena of professional conference-going. The ready recourse to ritual and procedure means…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-26
[Garbled Federal Register extract; recoverable content: a rule concerning contracts breached before the statutory date of enactment (General Dynamics Corp. v. U.S.), citing 51 U.S.C. 20115, and amending Part 31, Contract Cost Principles and Procedures, section 31.205-6; page 38539.]
A Protein Standard That Emulates Homology for the Characterization of Protein Inference Algorithms.
The, Matthew; Edfors, Fredrik; Perez-Riverol, Yasset; Payne, Samuel H; Hoopmann, Michael R; Palmblad, Magnus; Forsström, Björn; Käll, Lukas
2018-05-04
A natural way to benchmark the performance of an analytical experimental setup is to use samples of known composition and see to what degree one can correctly infer the content of such a sample from the data. For shotgun proteomics, one of the inherent problems of interpreting data is that the measured analytes are peptides and not the actual proteins themselves. As some proteins share proteolytic peptides, there might be more than one possible causative set of proteins resulting in a given set of peptides, and there is a need for mechanisms that infer proteins from lists of detected peptides. A weakness of commercially available samples of known content is that they consist of proteins that are deliberately selected for producing tryptic peptides that are unique to a single protein. Unfortunately, such samples do not expose any complications in protein inference. Hence, for a realistic benchmark of protein inference procedures, there is a need for samples of known content where the present proteins share peptides with known absent proteins. Here, we present such a standard, based on E. coli-expressed human protein fragments. To illustrate the application of this standard, we benchmark a set of different protein inference procedures on the data. We observe that inference procedures excluding shared peptides provide more accurate estimates of errors compared to methods that include information from shared peptides, while still giving a reasonable performance in terms of the number of identified proteins. We also demonstrate that using a sample of known protein content without proteins with shared tryptic peptides can give a false sense of accuracy for many protein inference methods.
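To make the protein inference problem concrete, here is a minimal greedy set-cover sketch that repeatedly selects the protein explaining the most still-unexplained detected peptides. This is one common parsimony heuristic, not one of the specific procedures benchmarked in the paper, and the peptide sets are hypothetical.

    def infer_proteins(peptides_by_protein, detected_peptides):
        # Greedy set-cover: iteratively pick the protein that explains the
        # most still-unexplained detected peptides (a common parsimony
        # heuristic for protein inference).
        remaining = set(detected_peptides)
        inferred = []
        while remaining:
            best = max(peptides_by_protein,
                       key=lambda p: len(peptides_by_protein[p] & remaining))
            covered = peptides_by_protein[best] & remaining
            if not covered:
                break  # leftover peptides match no candidate protein
            inferred.append(best)
            remaining -= covered
        return inferred

    # Hypothetical proteins sharing tryptic peptides.
    db = {"P1": {"a", "b", "c"}, "P2": {"b", "c"}, "P3": {"d"}}
    print(infer_proteins(db, detected_peptides={"a", "b", "c", "d"}))
    # -> ['P1', 'P3']; P2 is redundant because its peptides are shared.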
Qureshi, Ali A; Parikh, Rajiv P; Myckatyn, Terence M; Tenenbaum, Marissa M
2016-10-01
Comprehensive aesthetic surgery education is an integral part of plastic surgery residency training. Recently, the ACGME increased minimum requirements for aesthetic procedures in residency. To expand aesthetic education and prepare residents for independent practice, our institution has supported a resident cosmetic clinic for over 25 years. The aims were to evaluate the safety of procedures performed through a resident clinic by comparing outcomes to benchmarked national aesthetic surgery outcomes, and to provide a model for resident clinics in academic plastic surgery institutions. We identified a consecutive cohort of patients who underwent procedures through our resident cosmetic clinic between 2010 and 2015. Major complications, as defined by the CosmetAssure database, were recorded and compared to published aesthetic surgery complication rates from the CosmetAssure database for outcomes benchmarking. Fisher's exact test was used to compare sample proportions. Two hundred and seventy-one new patients were evaluated and 112 patients (41.3%) booked surgery for 175 different aesthetic procedures. There were 55 breast, 19 head and neck, and 101 trunk or extremity aesthetic procedures performed. The median numbers of preoperative and postoperative visits were 2 and 4 respectively, with a mean follow-up time of 35 weeks. There were 3 major complications (2 hematomas and 1 infection requiring IV antibiotics), an overall complication rate of 1.7% compared to 2.0% for patients in the CosmetAssure database (P = .45). Surgical outcomes for procedures performed through a resident cosmetic clinic are comparable to national outcomes for aesthetic surgery procedures, suggesting this experience can enhance comprehensive aesthetic surgery education without compromising patient safety or quality of care. Level of Evidence: 4 (Risk). © 2016 The American Society for Aesthetic Plastic Surgery, Inc.
Mlodinow, Alexei S; Khavanin, Nima; Ver Halen, Jon P; Rambachan, Aksharananda; Gutowski, Karol A; Kim, John Y S
2015-01-01
Venous thromboembolism (VTE) is a significant cause of morbidity and mortality, particularly in the postoperative setting. Various risk stratification schema exist in the plastic surgery literature, but do not take into account variations in procedure length. The putative risk of VTE conferred by increased length of time under anaesthesia has never been rigorously explored. The goal of this study is to assess this relationship and to benchmark VTE rates in plastic surgery. A large, multi-institutional quality-improvement database was queried for plastic and reconstructive surgery procedures performed under general anaesthesia between 2005-2011. In total, 19,276 cases were abstracted from the database. Z-scores were calculated based on procedure-specific mean surgical durations, to assess each case's length in comparison to the mean for that procedure. A total of 70 patients (0.36%) experienced a post-operative VTE. Patients with and without post-operative VTE were compared with respect to a variety of demographics, comorbidities, and intraoperative characteristics. Potential confounders for VTE were included in a regression model, along with the Z-scores. VTE occurred in both cosmetic and reconstructive procedures. Longer surgery time, relative to procedural means, was associated with increased VTE rates. Further, regression analysis showed increase in Z-score to be an independent risk factor for post-operative VTE (Odds Ratio of 1.772 per unit, p-value < 0.001). Subgroup analyses corroborated these findings. This study validates the long-held view that increased surgical duration confers risk of VTE, as well as benchmarks VTE rates in plastic surgery procedures. While this in itself does not suggest an intervention, surgical time under general anaesthesia would be a useful addition to existing risk models in plastic surgery.
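The duration-standardization step can be sketched in a few lines of Python. The code below simulates a cohort (the study used a multi-institutional registry, not reproduced here), z-scores each case's duration against its procedure-specific mean and standard deviation, and fits a logistic model; the simulated effect size is chosen near the reported odds ratio of 1.772 per unit and all data are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "procedure": rng.choice(["breast_recon", "abdominoplasty"], n),
        "duration_min": rng.normal(180.0, 45.0, n),
    })
    # Z-score each case against its procedure-specific mean and SD.
    grp = df.groupby("procedure")["duration_min"]
    df["z"] = (df["duration_min"] - grp.transform("mean")) / grp.transform("std")
    # Simulated outcome: longer-than-typical cases carry more VTE risk.
    p = 1.0 / (1.0 + np.exp(-(-6.0 + 0.57 * df["z"])))
    df["vte"] = rng.binomial(1, p)
    fit = sm.Logit(df["vte"], sm.add_constant(df[["z"]])).fit(disp=0)
    print("odds ratio per unit z:", np.exp(fit.params["z"]).round(3))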
Equity in Medicaid Reimbursement for Otolaryngologists.
Conduff, Joseph H; Coelho, Daniel H
2017-12-01
Objective To study state Medicaid reimbursement rates for inpatient and outpatient otolaryngology services and to compare with federal Medicare benchmarks. Study Design State and federal database query. Setting Not applicable. Methods Based on Medicare claims data, 26 of the most common Current Procedural Terminology codes reimbursed to otolaryngologists were selected and the payments recorded. These were further divided into outpatient and operative services. Medicaid payment schemes were queried for the same services in 49 states and Washington, DC. The difference in Medicaid and Medicare payment in dollars and percentage was determined and the reimbursement per relative value unit calculated. Medicaid reimbursement differences (by dollar amount and by percentage) were qualified as a shortfall or excess as compared with the Medicare benchmark. Results Marked differences in Medicaid and Medicare reimbursement exist for all services provided by otolaryngologists, most commonly as a substantial shortfall. The Medicaid shortfall varied in amount among states, and great variability in reimbursement exists within and between operative and outpatient services. Operative services were more likely than outpatient services to have a greater Medicaid shortfall. Shortfalls and excesses were not consistent among procedures or states. Conclusions The variation in Medicaid payment models reflects marked differences in the value of the same work provided by otolaryngologists-in many cases, far less than federal benchmarks. These results question the fairness of the Medicaid reimbursement scheme in otolaryngology, with potential serious implications on access to care for this underserved patient population.
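The comparisons described (dollar and percentage shortfall against the Medicare benchmark, and reimbursement per relative value unit) reduce to simple arithmetic, sketched below. The CPT codes are real otolaryngology codes, but all payment and RVU figures are invented for illustration.

    import pandas as pd

    # Hypothetical payments (USD) for two CPT codes in one state.
    fees = pd.DataFrame({
        "cpt": ["31231", "69436"],
        "medicare": [230.0, 310.0],
        "medicaid": [140.0, 260.0],
        "rvu": [6.4, 8.9],
    })
    fees["shortfall_usd"] = fees["medicare"] - fees["medicaid"]
    fees["shortfall_pct"] = 100.0 * fees["shortfall_usd"] / fees["medicare"]
    fees["medicaid_per_rvu"] = fees["medicaid"] / fees["rvu"]
    print(fees)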
Investigating dye performance and crosstalk in fluorescence enabled bioimaging using a model system
Arppe, Riikka; Carro-Temboury, Miguel R.; Hempel, Casper; Vosch, Tom
2017-01-01
Detailed imaging of biological structures, often smaller than the diffraction limit, is possible in fluorescence microscopy due to the molecular size and photophysical properties of fluorescent probes. Advances in hardware and the multiplicity of providers of high-end bioimaging make comparing images between studies and between research groups very difficult. Therefore, we suggest a model system to benchmark instrumentation, methods and staining procedures. The system we introduce is based on doped zeolites in stained polyvinyl alcohol (PVA) films: a highly accessible model system which has the properties needed to act as a benchmark in bioimaging experiments. Rather than comparing molecular probes and imaging methods in complicated biological systems, we demonstrate that the model system can emulate this complexity and can be used to probe the effect of concentration, brightness, and cross-talk of fluorophores on the detected fluorescence signal. The described model system comprises lanthanide(III)-doped Linde Type A zeolites dispersed in a PVA film stained with fluorophores. We tested three fluorophores: F18, MitoTracker Red and ATTO647N. This model system allowed comparing the performance of the fluorophores under experimental conditions. Importantly, we report considerable cross-talk of the dyes when exchanging excitation and emission settings. Additionally, bleaching was quantified. The proposed model makes it possible to test and benchmark staining procedures before these dyes are applied to more complex biological systems. PMID:29176775
How come scientists uncritically adopt and embody Thomson's bibliographic impact factor?
Porta, Miquel; Alvarez-Dardet, Carlos
2008-05-01
The bibliographic impact factor (BIF) of Thomson Scientific is sometimes not a valid scientometric indicator, for a number of reasons. One major reason is the strong influence of the number of "source items" or "articles" for each journal that the company chooses each year as the BIF's denominator. The irresistible fascination with (and picturesque uses of) a construct as scientifically weak as the BIF is a simple reminder that scientists are embedded in and embody culture.
Science and Technology Text Mining: Electrochemical Power
2003-07-14
[Garbled report extract; recoverable index terms: X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy, electrochemical measurements, thermogravimetric analysis, SEM; Capacitors; Energy Production; Power Production; Energy Conversion; Energy Storage; Citation Analysis; Scientometrics; Military Requirements.]
Scientometrics of zoonoses transmitted by the giant African snail Achatina fulica Bowdich, 1822.
Pavanelli, Gilberto Cezar; Yamaguchi, Mirian Ueda; Calaça, Elaine Alves; Oda, Fabrício Hiroiuki
2017-04-13
The dissemination of the giant African snail Achatina fulica in several countries has triggered a great number of studies on the mollusk, including those on zoonoses related to human health. The current research is a scientific survey of articles published in four databases, namely PubMed, Bireme, Scielo and Lilacs. Results indicate that Brazil has a prominent position in international scientific production on this subject, with focus on Angiostrongylus cantonensis occurrences. PMID:28423090
NASA Software Engineering Benchmarking Effort
NASA Technical Reports Server (NTRS)
Godfrey, Sally; Rarick, Heather
2012-01-01
Benchmarking was very interesting and provided a wealth of information: (1) we saw potential solutions to some of our "top 10" issues; (2) we gained an assessment of where NASA stands in relation to other aerospace/defense groups. We formed new contacts and potential collaborations: (1) several organizations sent us examples of their templates and processes; (2) many of the organizations were interested in future collaboration, such as sharing of training, metrics, Capability Maturity Model Integration (CMMI) appraisers, instructors, etc. We also received feedback from some of our contractors/partners: (1) desires to participate in our training and to provide feedback on procedures; (2) a welcomed opportunity to provide feedback on working with NASA.
Scientometric characterization of Medwave's scientific production 2010-2014.
Gallardo Sánchez, Yurieth; Gallardo Arzuaga, Ruber Luis; Fonseca Arias, Madelin; Pérez Atencio, María Esther
2016-09-15
The use of bibliometric indicators for the evaluation of science allows an analysis of scientific production from both a quantitative and a qualitative point of view. The aim was to characterize the scientific production of Medwave during the period 2010 to 2014 in terms of visibility and productivity. A bibliometric study was carried out. The variables analysed were obtained with the Publish or Perish program working on the Google Scholar database. The number of articles published was related to the number of authors involved in each research work. The articles cited, number of citations, authors and year were reported. Indicators were obtained by entering the name of the journal and its International Standard Serial Number (ISSN) in the navigation box of Publish or Perish. There were 481 articles published with 220 citations, at a rate of more than 36 citations per year and 20 citations per author per year. An h-index of 5 and a g-index of 6 were achieved. There was an average of two authors per article. Only five articles had more citations than the total they provided. The scientometric indicators found place the journal in a favorable position relative to other medical journals of the region, in terms of visibility and productivity. There was a low rate of cooperation, since articles with individual authors prevailed. A low number of articles contributed to the productivity of the journal despite having a significant number of citations.
[PhD theses on gerontological topics in Russia 1995-2012: scientometric analysis].
Smol'kin, A A; Makarova, E A
2014-01-01
The paper presents a scientometric analysis of PhD theses on gerontological topics in the Russian humanities (excluding economics) for the period from 1995 to 2012. During this period, 253 PhD theses (238 "candidate" and 15 "doctoral" dissertations) were defended in Russia. Almost half of them were defended during the boom years (2005-2006 and 2009-2010). The number of theses defended in the 2000s increased significantly compared to the second half of the 1990s; however, the share of gerontological theses among all theses defended in the Russian humanities hardly changed and remained small (less than 0.3%). The leading discipline in the study of aging (within the humanities) is sociology, accounting for more than a third of all defended theses. Though the theses were defended in 48 cities, more than half of them were defended in three cities: Moscow, St. Petersburg and Saratov. Thematic analysis showed that the leading position was occupied by two topics: "the elderly and the state" (42%) and "(re)socialization/adaptation of the elderly" (25%). Some 14% of the works are devoted to intergenerational relations and the social status of the elderly. Other topics (the old person's personality, self-perceptions of aging, violence and crime against the elderly, loneliness, discrimination, etc.) are represented by only a few studies.
Schöffel, N; Kirchdörfer, M; Brüggmann, D; Bundschuh, M; Ohlendorf, D; Groneberg, D A; Bendels, M H K
2016-01-01
Sarcoidosis continues to be an underestimated disease that can cause severe morbidity and mortality. There has, however, been an increasing awareness of the disease, as shown by the growing number of publications since the 1990s. The large number of available publications makes it challenging for a single scientist to provide an overview of the topic. To quantify the global research activity in this field, a scientometric investigation was conducted. All publications on sarcoidosis indexed in the Web of Science for the period 1900-2008 were identified and their bibliometric data obtained. According to the NewQIS protocol, different visualisation techniques and scientometric methods were applied. A total of 14,190 published items were evaluated. The U.S. takes a leading position in terms of the overall number of publications and collaborations. The most prolific institutions and authors are of U.S. origin. Only a relatively small number of international collaborations were identified. The most intensive network is between the "University of Colorado" and the "National Jewish Medical Research Center". "Semenzato, G" has the highest citation rate of all authors. The most productive cooperative author is "du Bois, RM". Scientific interest in sarcoidosis is growing steadily, and the influence of international cooperation on scientific progress in this area is of increasing importance. © Georg Thieme Verlag KG Stuttgart · New York.
The Journal Impact Factor: Moving Toward an Alternative and Combined Scientometric Approach
Gasparyan, Armen Yuri; Nurmashev, Bekaidar; Yessirkepov, Marlen; Udovik, Elena E; Baryshnikov, Aleksandr A; Kitas, George D
2017-01-01
The Journal Impact Factor (JIF) is a single citation metric, which is widely employed for ranking journals and choosing target journals, but is also misused as the proxy of the quality of individual articles and academic achievements of authors. This article analyzes Scopus-based publication activity on the JIF and overviews some of the numerous misuses of the JIF, global initiatives to overcome the ‘obsession’ with impact factors, and emerging strategies to revise the concept of the scholarly impact. The growing number of articles on the JIF, most of which are in English, reflects interest of experts in journal editing and scientometrics toward its uses, misuses, and options to overcome related problems. Solely displaying values of the JIFs on the journal websites is criticized by experts as these average metrics do not reflect skewness of citation distribution of individual articles. Emerging strategies suggest to complement the JIFs with citation plots and alternative metrics, reflecting uses of individual articles in terms of downloads and distribution of related information through social media and networking platforms. It is also proposed to revise the original formula of the JIF calculation and embrace the concept of the impact and importance of individual articles. The latter is largely dependent on ethical soundness of the journal instructions, proper editing and structuring of articles, efforts to promote related information through social media, and endorsements of professional societies. PMID:28049225
[Scientometrics and bibliometrics of biomedical engineering periodicals and papers].
Zhao, Ping; Xu, Ping; Li, Bingyan; Wang, Zhengrong
2003-09-01
This investigation was made to reveal the current status, research trends and research level of biomedical engineering in mainland China by means of scientometrics, and to assess the quality of four domestic publications by bibliometrics. We identified all articles of the four related publications by searching Chinese and foreign databases from 1997 to 2001. All articles collected or cited by these databases were retrieved and statistically analysed to determine the relevant distributions, including databases, years, authors, institutions, subject headings and subheadings. The sources of research funding and the related articles were analysed as well. The results showed that two journals were cited simultaneously by two foreign databases and five Chinese databases. The output of the Journal of Biomedical Engineering was the highest: the number of its original papers cited by EI and CA, and the total number of its papers supported by funds, were higher than those of the other journals, but the number and annual percentage of biomedical articles cited by EI decreased overall. Domestic core authors and institutions had emerged in the field of biomedical engineering. Their research topics were mainly concentrated on ten subject headings, which included biocompatible materials, computer-assisted signal processing, electrocardiography, computer-assisted image processing, biomechanics, algorithms, electroencephalography, automatic data processing, mechanical stress, hemodynamics, mathematical computing, microcomputers, theoretical models, etc. The main subheadings were concentrated on instrumentation, physiopathology, diagnosis, therapy, ultrasonography, physiology, analysis, surgery, pathology, methods, etc.
Setting Achievement Targets for School Children.
ERIC Educational Resources Information Center
Thanassoulis, Emmanuel
1999-01-01
Develops an approach for setting performance targets for schoolchildren, using data-envelopment analysis to identify benchmark pupils who achieve the best observed performance (allowing for contextual factors). These pupils' achievement forms the basis of targets estimated. The procedure also identifies appropriate role models for weaker students'…
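Such data-envelopment analysis reduces to one linear program per pupil. The sketch below implements an output-oriented CCR model with scipy, in which the returned expansion factor phi scales a pupil's outputs up to the benchmark frontier; the formulation is a generic textbook one, not necessarily the exact model of the paper, and the data are hypothetical.

    import numpy as np
    from scipy.optimize import linprog

    def dea_output_target(X, Y, k):
        # Output-oriented CCR efficiency for unit k. X: (n, m) inputs
        # (e.g. contextual factors), Y: (n, s) outputs (attainment scores).
        # Returns phi >= 1; the output targets are phi * Y[k].
        n = X.shape[0]
        c = np.zeros(n + 1)
        c[0] = -1.0                                  # maximise phi
        A_ub, b_ub = [], []
        for r in range(Y.shape[1]):                  # phi*y_k <= Y' lambda
            A_ub.append(np.concatenate(([Y[k, r]], -Y[:, r])))
            b_ub.append(0.0)
        for i in range(X.shape[1]):                  # X' lambda <= x_k
            A_ub.append(np.concatenate(([0.0], X[:, i])))
            b_ub.append(X[k, i])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        return res.x[0]

    # Hypothetical pupils: one contextual input, two attainment outputs.
    X = np.array([[1.0], [1.0], [1.0], [1.0]])
    Y = np.array([[60, 55], [75, 70], [50, 80], [40, 45]])
    for k in range(4):
        print(f"pupil {k}: target expansion factor = {dea_output_target(X, Y, k):.2f}")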
Fuzzy Structures Analysis of Aircraft Panels in NASTRAN
NASA Technical Reports Server (NTRS)
Sparrow, Victor W.; Buehrle, Ralph D.
2001-01-01
This paper concerns an application of the fuzzy structures analysis (FSA) procedures of Soize to prototypical aerospace panels in MSC/NASTRAN, a large commercial finite element program. A brief introduction to the FSA procedures is first provided. The implementation of the FSA methods is then disclosed, and the method is validated by comparison to published results for the forced vibrations of a fuzzy beam. The results of the new implementation show excellent agreement to the benchmark results. The ongoing effort at NASA Langley and Penn State to apply these fuzzy structures analysis procedures to real aircraft panels is then described.
Best practices from WisDOT mega and ARRA projects--request for information : benchmarks and metrics.
DOT National Transportation Integrated Search
2012-03-01
Successful highway construction is measured by cost, time, safety, and quality. One further measure of success is the quantity of Requests for Information (RFIs) submitted and their impact. An RFI is a formal written procedure initiated by the contra...
The Computerization of the National Library in Paris.
ERIC Educational Resources Information Center
Lerin, Christian; Bernard, Annick
1986-01-01
Describes the organization and automation plan of the Bibliotheque Nationale (Paris, France) that was begun in 1981. Highlights include the method of moving toward computerization; technical choices; the choosing procedure (pre-qualification, bench-mark test); short term and pilot operations; and preparation for the implementation of the…
NASA Astrophysics Data System (ADS)
Zeng, Carl J.; Qi, Eric P.; Li, Simon S.; Stanley, H. Eugene; Ye, Fred Y.
2017-12-01
A publication that reports a breakthrough discovery in a particular scientific field is referred to as a "black swan", and the most highly-cited papers previously published in the same field "white swans". Important scientific progress occurs when "white swans" meet a "black swan", and the citation patterns of the "white swans" change. This metaphor combines scientific discoveries and scientometric data and suggests that breakthrough scientific discoveries are either "black swans" or "grey-black swans".
Hierarchically Parallelized Constrained Nonlinear Solvers with Automated Substructuring
NASA Technical Reports Server (NTRS)
Padovan, Joe; Kwang, Abel
1994-01-01
This paper develops a parallelizable multilevel multiple constrained nonlinear equation solver. The substructuring process is automated to yield appropriately balanced partitioning of each succeeding level. Due to the generality of the procedure, sequential as well as partially and fully parallel environments can be handled. This includes both single and multiprocessor assignment per individual partition. Several benchmark examples are presented. These illustrate the robustness of the procedure as well as its capability to yield significant reductions in memory utilization and calculational effort due both to updating and inversion.
Kalkan, Erol; Kwong, Neal S.
2010-01-01
The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate the seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground-motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case for the central United States), or when high-intensity records are needed (as is the case for San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records, such that the scaled records provide accurate and efficient estimates of 'true' median structural responses. The adjective 'accurate' refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective 'efficient' refers to the record-to-record variability of responses. Herein, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing 'ordinary standard' bridges typical of reinforced-concrete bridge construction in California: the single-bent overpass, the multi-span bridge, the curved bridge, and the skew bridge. As compared to benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs), accompanied by significantly reduced record-to-record variability of the responses. Thus, the MPS procedure is a useful tool for scaling ground motions as input to nonlinear RHAs of 'ordinary standard' bridges.
ERIC Educational Resources Information Center
Tan, Liang See; Koh, Elizabeth; Lee, Shu Shing; Ponnusamy, Letchmi Devi; Tan, Keith Chiu Kian
2017-01-01
Singapore's strong performance in international benchmarking studies--Trends in International Mathematics and Science Study (TIMSS) and Programme for International Student Assessment (PISA)--poses a conundrum to researchers who view Singapore's pedagogy as characterized by the teaching of facts and procedures, and lacking in constructivist…
Benchmarking the inelastic neutron scattering soil carbon method
USDA-ARS?s Scientific Manuscript database
The herein described inelastic neutron scattering (INS) method of measuring soil carbon was based on a new procedure for extracting the net carbon signal (NCS) from the measured gamma spectra and determination of the average carbon weight percent (AvgCw%) in the upper soil layer (~8 cm). The NCS ext...
Identifying Peer Institutions Using Cluster Analysis
ERIC Educational Resources Information Center
Boronico, Jess; Choksi, Shail S.
2012-01-01
The New York Institute of Technology's (NYIT) School of Management (SOM) wishes to develop a list of peer institutions for the purpose of benchmarking and monitoring/improving performance against other business schools. The procedure utilizes relevant criteria for the purpose of establishing this peer group by way of a cluster analysis. The…
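A minimal version of such a peer-grouping procedure can be sketched with k-means clustering on standardized criteria. The institutional profiles below are hypothetical, and the criteria and cluster count in the actual study may differ.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    # Hypothetical school profiles: enrolment, faculty size, research output.
    schools = np.array([
        [3000, 90, 40], [3200, 95, 45], [800, 30, 5],
        [900, 28, 8], [5000, 150, 120], [5200, 160, 110],
    ])
    X = StandardScaler().fit_transform(schools)   # put criteria on a common scale
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    target = labels[0]                            # cluster containing the focal school (row 0)
    peers = [i for i, l in enumerate(labels) if l == target and i != 0]
    print("peer institutions (row indices):", peers)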
Supporting Documentation Used in the Derivation of Selected Freshwater Tier 2 ESBs
Compilation of toxicity data used to derive secondary chronic values (SCVs) and tier 2 equilibrium partitioning sediment benchmarks (ESBs) for a selection of nonionic organic chemicals. The values are used in the following U.S. EPA document: U.S. EPA. 2008. Procedures for th...
EPA Methods 1622 and 1623 are the benchmarks for detection of Cryptosporidium spp. oocysts in water. 5-7 These methods consist of filtration, elution, purification by immunomagnetic separation (IMS), and microscopic analysis after staining with a fluorescein isothiocyanate conju...
NASA Astrophysics Data System (ADS)
Kokkoris, M.; Dede, S.; Kantre, K.; Lagoyannis, A.; Ntemou, E.; Paneta, V.; Preketes-Sigalas, K.; Provatas, G.; Vlastou, R.; Bogdanović-Radović, I.; Siketić, Z.; Obajdin, N.
2017-08-01
The evaluated proton differential cross sections suitable for the Elastic Backscattering Spectroscopy (EBS) analysis of natSi and 16O, as obtained from SigmaCalc 2.0, have been benchmarked over a wide energy and angular range at two different accelerator laboratories, namely at N.C.S.R. 'Demokritos', Athens, Greece and at Ruđer Bošković Institute (RBI), Zagreb, Croatia, using a variety of high-purity thick targets of known stoichiometry. The results are presented in graphical and tabular forms, while the observed discrepancies, as well as, the limits in accuracy of the benchmarking procedure, along with target related effects, are thoroughly discussed and analysed. In the case of oxygen the agreement between simulated and experimental spectra was generally good, while for silicon serious discrepancies were observed above Ep,lab = 2.5 MeV, suggesting that a further tuning of the appropriate nuclear model parameters in the evaluated differential cross-section datasets is required.
NASA Astrophysics Data System (ADS)
Alloui, Mebarka; Belaidi, Salah; Othmani, Hasna; Jaidane, Nejm-Eddine; Hochlaf, Majdi
2018-03-01
We performed benchmark studies on the molecular geometry, electronic properties and vibrational analysis of imidazole using semi-empirical, density functional theory and post-Hartree-Fock methods. These studies validated the use of AM1 for the treatment of larger systems. We then examined the structural, physical and chemical relationships for a series of imidazole derivatives acting as angiotensin II AT1 receptor blockers using AM1. QSAR studies were carried out for these imidazole derivatives using a combination of various physicochemical descriptors. A multiple linear regression procedure was used to model the relationships between molecular descriptors and the activity of the imidazole derivatives. The results validate the derived QSAR model.
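A multiple linear regression QSAR fit is straightforward to sketch. The code below regresses a simulated activity on three hypothetical physicochemical descriptors (logP, molar refractivity, dipole moment); the descriptors, coefficients and data are invented for illustration and are not those of the study.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(7)
    n = 24
    # Hypothetical descriptors for a series of imidazole derivatives:
    # logP, molar refractivity, dipole moment.
    X = np.column_stack([rng.normal(2.5, 0.8, n),
                         rng.normal(70.0, 10.0, n),
                         rng.normal(4.0, 1.0, n)])
    # Simulated activity (e.g. pIC50) with a known linear dependence plus noise.
    y = 1.2 * X[:, 0] + 0.05 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.2, n)
    model = LinearRegression().fit(X, y)
    print("coefficients:", model.coef_.round(3))
    print("r^2:", r2_score(y, model.predict(X)).round(3))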
Ligorio, Gabriele; Bergamini, Elena; Pasciuto, Ilaria; Vannozzi, Giuseppe; Cappozzo, Aurelio; Sabatini, Angelo Maria
2016-01-01
Information from complementary and redundant sensors is often combined within sensor fusion algorithms to obtain a single accurate observation of the system at hand. However, measurements from each sensor are characterized by uncertainties. When multiple data are fused, it is often unclear how all these uncertainties interact and influence the overall performance of the sensor fusion algorithm. To address this issue, a benchmarking procedure is presented, where simulated and real data are combined in different scenarios in order to quantify how each sensor's uncertainties influence the accuracy of the final result. The proposed procedure was applied to the estimation of the pelvis orientation using a waist-worn magnetic-inertial measurement unit. Ground-truth data were obtained from a stereophotogrammetric system and used to obtain simulated data. Two Kalman-based sensor fusion algorithms were submitted to the proposed benchmarking procedure. For the considered application, gyroscope uncertainties proved to be the main error source in orientation estimation accuracy for both tested algorithms. Moreover, although different performances were obtained using simulated data, these differences became negligible when real data were considered. The outcome of this evaluation may be useful both to improve the design of new sensor fusion methods and to drive the algorithm tuning process. PMID:26821027
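The noise-injection idea behind such a benchmarking procedure can be illustrated on a toy one-dimensional problem: sweep one sensor's noise level while holding the other fixed and record the resulting orientation error. The complementary filter below is a simple stand-in for the two Kalman-based algorithms evaluated in the paper, and all signals are simulated.

    import numpy as np

    def run_trial(gyro_noise_sd, incl_noise_sd, dt=0.01, seconds=60, alpha=0.98):
        # Fuse a noisy rate-gyro with a noisy inclination measurement using a
        # complementary filter; return the RMS orientation error (rad).
        rng = np.random.default_rng(1)
        t = np.arange(0.0, seconds, dt)
        truth = 0.4 * np.sin(2 * np.pi * 0.5 * t)          # true angle
        rate = np.gradient(truth, dt) + rng.normal(0, gyro_noise_sd, t.size)
        incl = truth + rng.normal(0, incl_noise_sd, t.size)
        est = np.empty_like(truth)
        est[0] = incl[0]
        for k in range(1, t.size):
            est[k] = alpha * (est[k-1] + rate[k] * dt) + (1 - alpha) * incl[k]
        return float(np.sqrt(np.mean((est - truth) ** 2)))

    # Sweep one sensor's uncertainty while holding the other fixed.
    for sd in (0.01, 0.05, 0.2):
        print(f"gyro noise {sd:.2f} rad/s -> RMS error {run_trial(sd, 0.05):.4f} rad")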
Combining Rosetta with molecular dynamics (MD): A benchmark of the MD-based ensemble protein design.
Ludwiczak, Jan; Jarmula, Adam; Dunin-Horkawicz, Stanislaw
2018-07-01
Computational protein design is a set of procedures for computing amino acid sequences that will fold into a specified structure. Rosetta Design, a commonly used software package for protein design, allows for the effective identification of sequences compatible with a given backbone structure, while molecular dynamics (MD) simulations can thoroughly sample near-native conformations. We benchmarked a procedure in which Rosetta design is started on MD-derived structural ensembles and showed that such a combined approach generates 20-30% more diverse sequences than currently available methods, with only a slight increase in computation time. Importantly, the increase in diversity is achieved without a loss in the quality of the designed sequences, assessed by their resemblance to natural sequences. We demonstrate that the MD-based procedure is also applicable to de novo design tasks started from backbone structures without any sequence information. In addition, we implemented a protocol that can be used to assess the stability of designed models and to select the best candidates for experimental validation. In sum, our results demonstrate that MD ensemble-based flexible backbone design can be a viable method for protein design, especially for tasks that require a large pool of diverse sequences. Copyright © 2018 Elsevier Inc. All rights reserved.
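One way to quantify the reported gain in sequence diversity is mean pairwise sequence identity over a design pool, sketched below on invented eight-residue designs; the metric is a generic choice, not necessarily the one used in the paper.

    from itertools import combinations

    def mean_pairwise_identity(seqs):
        # Average pairwise sequence identity of a design pool; lower
        # identity means a more diverse set of designed sequences.
        pairs = list(combinations(seqs, 2))
        ident = [sum(a == b for a, b in zip(s, t)) / len(s) for s, t in pairs]
        return sum(ident) / len(ident)

    # Hypothetical designs from fixed-backbone vs MD-ensemble protocols.
    fixed_backbone = ["MKTAYIAK", "MKTAYIAR", "MKTAYLAK"]
    md_ensemble = ["MKTAYIAK", "MRSAYLGK", "MKQAFIAR"]
    print(f"fixed backbone identity: {mean_pairwise_identity(fixed_backbone):.2f}")
    print(f"MD ensemble identity:    {mean_pairwise_identity(md_ensemble):.2f}")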
Brucker, Sara Y; Schumacher, Claudia; Sohn, Christoph; Rezai, Mahdi; Bamberg, Michael; Wallwiener, Diethelm
2008-01-01
Background The main study objectives were: to establish a nationwide voluntary collaborative network of breast centres with independent data analysis; to define suitable quality indicators (QIs) for benchmarking the quality of breast cancer (BC) care; to demonstrate existing differences in BC care quality; and to show that BC care quality improved with benchmarking from 2003 to 2007. Methods BC centres participated voluntarily in a scientific benchmarking procedure. A generic XML-based data set was developed and used for data collection. Nine guideline-based quality targets serving as rate-based QIs were initially defined, reviewed annually and modified or expanded accordingly. QI changes over time were analysed descriptively. Results During 2003–2007, respective increases in participating breast centres and postoperatively confirmed BCs were from 59 to 220 and from 5,994 to 31,656 (> 60% of new BCs/year in Germany). Starting from 9 process QIs, 12 QIs were developed by 2007 as surrogates for long-term outcome. Results for most QIs increased. From 2003 to 2007, the most notable increases seen were for preoperative histological confirmation of diagnosis (58% (in 2003) to 88% (in 2007)), appropriate endocrine therapy in hormone receptor-positive patients (27 to 93%), appropriate radiotherapy after breast-conserving therapy (20 to 79%) and appropriate radiotherapy after mastectomy (8 to 65%). Conclusion Nationwide external benchmarking of BC care is feasible and successful. The benchmarking system described allows both comparisons among participating institutions as well as the tracking of changes in average quality of care over time for the network as a whole. Marked QI increases indicate improved quality of BC care. PMID:19055735
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to known analytical solutions (in contrast to validation, which compares against experimental data). The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification establishes whether the software solves the mathematical model correctly. Code verification is especially important if the software is used to model high-consequence systems that cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process, with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations of Oberkampf and Trucano (2007), who describe four essential elements of high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, the implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References: Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
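The core of such a QA test is a comparison of numerical and analytical solutions plus a check of the observed order of accuracy. A minimal sketch in Python, assuming a generic grid-refinement pair of runs (error values are invented; this is not PFLOTRAN's actual test harness):

```python
import numpy as np

def l2_error(numerical, analytical, dx):
    """Discrete L2 norm of the difference between two solutions."""
    return np.sqrt(np.sum((numerical - analytical) ** 2) * dx)

def observed_order(err_coarse, err_fine, refinement=2.0):
    """Observed order of accuracy from errors on two successively refined grids."""
    return np.log(err_coarse / err_fine) / np.log(refinement)

# Hypothetical pass criterion: observed order close to the scheme's design order.
err_h, err_h2 = 1.6e-3, 4.1e-4        # invented errors from two benchmark runs
p = observed_order(err_h, err_h2)
assert abs(p - 2.0) < 0.2, f"verification failed: observed order {p:.2f}"
```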
De Bondt, Timo; Mulkens, Tom; Zanca, Federica; Pyfferoen, Lotte; Casselman, Jan W; Parizel, Paul M
2017-02-01
To benchmark regional standard practice for paediatric cranial CT procedures in terms of radiation dose and acquisition parameters. Paediatric cranial CT data were retrospectively collected during a 1-year period in 3 different hospitals of the same country. A dose tracking system was used to automatically gather information. Dose (CTDI and DLP), scan length, number of retakes and demographic data were stratified by age and clinical indication; appropriate use of child-specific protocols was assessed. In total, 296 paediatric cranial CT procedures were collected. Although the median dose of each hospital was below national and international diagnostic reference levels (DRLs) for all age categories, statistically significant (p < 0.001) dose differences among hospitals were observed. The hospital with the lowest dose levels showed the smallest dose variability and used age-stratified protocols for standardizing paediatric head exams. Erroneous selection of adult protocols for children still occurred, mostly in the oldest age group. Even though all hospitals complied with national and international DRLs, dose tracking and benchmarking showed that further dose optimization and standardization are possible by using age-stratified protocols for paediatric cranial CT. Moreover, having a dose tracking system revealed that adult protocols are still applied for paediatric CT, a practice that must be avoided. • Significant differences were observed in the delivered dose between age groups and hospitals. • Using age-adapted scanning protocols gives a nearly linear dose increase with age. • Sharing dose data can be a trigger for hospitals to reduce dose levels.
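A minimal sketch of the kind of DRL comparison described above, assuming a flat table of dose records; hospital names, age bands, dose values and the DRLs themselves are invented for illustration.

```python
import pandas as pd

# Hypothetical dose records; hospitals, age bands and values are invented.
exams = pd.DataFrame({
    "hospital":  ["H1", "H1", "H2", "H2", "H3", "H3"],
    "age_group": ["0-1", "5-10", "0-1", "5-10", "0-1", "5-10"],
    "ctdi_vol":  [18.0, 26.0, 24.0, 35.0, 21.0, 44.0],   # mGy
})
drl = {"0-1": 25.0, "5-10": 40.0}   # assumed age-stratified DRLs (illustrative)

medians = exams.groupby(["hospital", "age_group"])["ctdi_vol"].median()
for (hosp, age), m in medians.items():
    status = "OK" if m <= drl[age] else "above DRL"
    print(f"{hosp}, age {age}: median CTDI_vol {m:.1f} mGy ({status})")
```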
NASA Astrophysics Data System (ADS)
Mayr, G. J.; Kneringer, P.; Dietz, S. J.; Zeileis, A.
2016-12-01
Low visibility or a low cloud ceiling reduces the capacity of airports by requiring special low visibility procedures (LVP) for incoming/departing aircraft. Probabilistic forecasts of when such procedures will become necessary help to mitigate delays and economic losses. We compare the performance of probabilistic nowcasts produced with two statistical methods: ordered logistic regression, and decision trees and random forests. These models harness historical and current meteorological measurements in the vicinity of the airport together with past LVP states, and incorporate diurnal and seasonal climatological information via generalized additive models (GAM). The methods are applied at Vienna International Airport (Austria). The performance is benchmarked against climatology, persistence and human forecasters.
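The two simplest reference forecasts named above, climatology and persistence, can be scored in a few lines. A toy sketch using the Brier score on an invented LVP state series (not the authors' data or verification setup):

```python
import numpy as np

# Toy sequence of LVP states (1 = low-visibility procedures in force);
# values are invented, not Vienna airport observations.
lvp = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
obs = lvp[1:]                                   # what actually happened next

# Two reference forecasts that statistical nowcasts must beat:
climatology = np.full(obs.size, lvp.mean())     # long-run LVP frequency
persistence = lvp[:-1].astype(float)            # "next state = current state"

def brier(p, o):
    """Brier score: mean squared error of probability forecasts (lower is better)."""
    return np.mean((p - o) ** 2)

print(f"climatology: {brier(climatology, obs):.3f}")
print(f"persistence: {brier(persistence, obs):.3f}")
```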
Mining author relationship in scholarly networks based on tripartite citation analysis
Wang, Xiaohan; Yang, Siluo
2017-01-01
Taking scholars in Scientometrics as examples, we develop five author relationship networks, namely, co-authorship, author co-citation (AC), author bibliographic coupling (ABC), author direct citation (ADC), and author keyword coupling (AKC). The time frame of the data sets is divided into two periods: before 2011 (T1) and after 2011 (T2). Through quadratic assignment procedure analysis, we found that some authors have ABC or AC relationships (i.e., a potential communication relationship, PCR) but no actual collaborations or direct citations (i.e., an actual communication relationship, ACR) between them. In addition, we noticed that PCR and AKC are highly correlated, and that old PCRs and new ACRs are correlated and consistent. These facts indicate that PCRs tend to produce academic exchanges based on similar themes, and that ABC has greater advantages in predicting potential relations. Based on tripartite citation analysis, comprising AC, ABC, and ADC, we also present an author-relation mining process. This process can be used to detect deep and potential author relationships. We analyse the prediction capacity by comparing the T1 and T2 periods, which demonstrates that relation mining can be complementary in identifying authors working on similar themes and in discovering more potential collaborations and academic communities. PMID:29117198
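For readers unfamiliar with the two "potential communication" measures, the toy sketch below computes author bibliographic coupling (shared references) and author co-citation (shared citing papers) from invented per-author sets; a real analysis would build these from full citation records.

```python
from itertools import combinations

# Toy data: per-author reference sets and sets of papers citing each author.
refs = {"A": {"p1", "p2", "p3"}, "B": {"p2", "p3", "p4"}, "C": {"p9"}}
cited_by = {"A": {"x1", "x2"}, "B": {"x2", "x3"}, "C": {"x9"}}

# Author bibliographic coupling (ABC): number of shared references.
abc = {(u, v): len(refs[u] & refs[v]) for u, v in combinations(refs, 2)}
# Author co-citation (AC): number of papers citing both authors.
ac = {(u, v): len(cited_by[u] & cited_by[v]) for u, v in combinations(cited_by, 2)}

print(abc)  # {('A', 'B'): 2, ('A', 'C'): 0, ('B', 'C'): 0}
print(ac)   # {('A', 'B'): 1, ('A', 'C'): 0, ('B', 'C'): 0}
```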
Benchmarking an operational procedure for rapid flood mapping and risk assessment in Europe
NASA Astrophysics Data System (ADS)
Dottori, Francesco; Salamon, Peter; Kalas, Milan; Bianchi, Alessandra; Feyen, Luc
2016-04-01
The development of real-time methods for rapid flood mapping and risk assessment is crucial to improve emergency response and mitigate flood impacts. This work describes the benchmarking of an operational procedure for rapid flood risk assessment based on the flood predictions issued by the European Flood Awareness System (EFAS). The daily forecasts produced for the major European river networks are translated into event-based flood hazard maps using a large map catalogue derived from high-resolution hydrodynamic simulations based on the hydro-meteorological dataset of EFAS. Flood hazard maps are then combined with exposure and vulnerability information, and the impacts of the forecasted flood events are evaluated in near real-time in terms of flood-prone areas, potential economic damage, and affected population, infrastructure and cities. Extensive testing of the operational procedure was carried out using the catastrophic floods of May 2014 in Bosnia-Herzegovina, Croatia and Serbia. The reliability of the flood mapping methodology is tested against satellite-derived flood footprints, while ground-based estimates of economic damage and affected population are compared against modelled estimates. We evaluated the skill of flood hazard and risk estimations derived from EFAS flood forecasts with different lead times and combinations. The assessment includes a comparison of several alternative approaches to producing and presenting the information content, in order to meet the requests of EFAS users. The tests provided good results and showed the potential of the developed real-time operational procedure to help emergency response and management.
[A critical perspective on the global research activity in the field of bladder cancer].
Schöffel, N; Domnitz, F; Brüggmann, D; Klingelhöfer, D; Bendels, M H K; Groneberg, D A
2016-11-01
Bladder cancer (BC) is one of the most common forms of cancer worldwide. This underestimated disease can cause severe morbidity and mortality. Increasing awareness is reflected in the growing number of publications since the 1990s. Hence, it is challenging for a scientist to obtain an overview of the topic. To quantify the global research activity in this field, a scientometric investigation was conducted. Using the Web of Science database, the bibliometric data of publications on BC were acquired for the period 1900-2007. According to the NewQIS protocol, different visualization techniques and scientometric methods were applied. A total of 19,651 publications were evaluated. The USA takes the leading position in terms of the overall number of publications, institutions, and collaborations. International collaboration on BC has changed considerably in quantity during the past 20 years. The largest number of articles and the highest number of citations regarding BC are found in the Journal of Urology; it is thus considered the most prolific journal. Furthermore, the productivity (i.e., publication numbers) and scientific impact (i.e., citation rates) of authors vary greatly. The field of BC research continues to progress, and the influence of international cooperation on scientific progress is of increasing importance. New evaluation factors/tools have to be established for a more reliable evaluation of scientific work.
Mahmudi, Zoleikha; Tahamtan, Iman; Sedghi, Shahram; Roudbari, Masoud
2015-01-01
We conducted a comprehensive bibliometrics analysis to calculate the H, G, M, A and R indicators for all Iranian biomedical research centers (IBRCs) from the output of ISI Web of Science (WoS) and Scopus between 1991 and 2010. We compared the research performance of the research centers according to these indicators. This was a cross-sectional and descriptive-analytical study, conducted on 104 Iranian biomedical research centers between August and September 2011. We collected our data through Scopus and WoS. Pearson correlation coefficient between the scientometrics indicators was calculated using SPSS, version 16. The mean values of all indicators were higher in Scopus than in WoS. Drug Applied Research Center of Tabriz University of Medical Sciences had the highest number of publications in both WoS and Scopus databases. This research center along with Royan Institute received the highest number of citations in both Scopus and WoS, respectively. The highest correlation was seen between G and R (.998) in WoS and between G and R (.990) in Scopus. Furthermore, the highest overlap of the 10 top IBRCs was between G and H in WoS (100%) and between G-R (90%) and H-R (90%) in Scopus. Research centers affiliated to the top ranked Iranian medical universities obtained a better position with respect to the studied scientometrics indicators. All aforementioned indicators are important for ranking bibliometrics studies as they refer to different attributes of scientific output and citation aspects.
Obesity Researches Over the Past 24 years: A Scientometrics Study in Middle East Countries.
Djalalinia, Shirin; Peykari, Niloofar; Qorbani, Mostafa; Moghaddam, Sahar Saeedi; Larijani, Bagher; Farzadfar, Farshad
2015-01-01
Researchers, practitioners, and policy-makers call for updated, valid evidence to monitor, prevent, and control the alarming trends of obesity. We quantified the trends in obesity/overweight research output of Middle East countries. We systematically searched the Scopus database, as the single source for multidisciplinary citation reports with the widest coverage in the health and biomedical disciplines, for all related obesity/overweight publications from 1990 to 2013. This scientometric analysis assessed the trends in scientific products, citations, and collaborative papers in Middle East countries. We also provide information on top institutions, journals, and collaborative research centers in the field of obesity/overweight. Over the 24-year period, the number of obesity/overweight publications and related citations in Middle East countries showed an increasing trend. Globally, during 1990-2013, 415,126 papers were published, of which 3.56% were affiliated to Middle East countries. Within the region, Iran (26.27%) held the third position after Turkey (47.94%) and Israel (35.25%). Israel, Turkey, and Iran were the leading countries in the citation analysis. The country most collaborative with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Despite the ascending trends in research output, more efforts are required to promote collaborative partnerships. The results could be useful for better health policy and more planned studies in this field. These findings could also be used for future complementary analyses.
Evaluation of cardiac surgery mortality rates: 30-day mortality or longer follow-up?
Siregar, Sabrina; Groenwold, Rolf H H; de Mol, Bas A J M; Speekenbrink, Ron G H; Versteegh, Michel I M; Brandon Bravo Bruinsma, George J; Bots, Michiel L; van der Graaf, Yolanda; van Herwerden, Lex A
2013-11-01
The aim of our study was to investigate early mortality after cardiac surgery and to determine the most adequate follow-up period for the evaluation of mortality rates. Information on all adult cardiac surgery procedures in 10 of 16 cardiothoracic centres in the Netherlands from 2007 until 2010 was extracted from the database of the Netherlands Association for Cardio-Thoracic Surgery (n = 33,094). Survival up to 1 year after surgery was obtained from the national death registry. Survival analysis was performed using Kaplan-Meier and Cox regression analysis. Benchmarking was performed using logistic regression with mortality rates at different time points as dependent variables, the logistic EuroSCORE as covariate and a random intercept per centre. In-hospital mortality was 2.94% (n = 972), 30-day mortality 3.02% (n = 998), operative mortality 3.57% (n = 1181), 60-day mortality 3.84% (n = 1271), 6-month mortality 5.16% (n = 1707) and 1-year mortality 6.20% (n = 2052). The survival curves showed a steep initial decline followed by stabilization after ~60-120 days, depending on the intervention performed, e.g. 60 days for isolated coronary artery bypass grafting (CABG) and 120 days for combined CABG and valve surgery. Benchmark results were affected by the choice of the follow-up period: four hospitals changed outlier status when the follow-up was increased from 30 days to 1 year. In the isolated CABG subgroup, benchmark results were unaffected: no outliers were found using either 30-day or 1-year follow-up. The course of early mortality after cardiac surgery differs across interventions and continues up to ~120 days. Thirty-day mortality reflects only a part of early mortality after cardiac surgery and should only be used for benchmarking of isolated CABG procedures. The follow-up should be prolonged to capture early mortality for all types of interventions.
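A minimal sketch of how crude mortality rates shift with the follow-up cutoff, using an invented vector of days-to-death (NaN meaning alive at one year); this is not the authors' random-intercept benchmarking model.

```python
import numpy as np

# Hypothetical days from surgery to death (NaN = alive at one year).
days = np.array([5, 40, 90, 200, np.nan, np.nan, np.nan, np.nan])

def mortality_at(cutoff):
    """Crude mortality at a follow-up cutoff; NaN compares False, i.e. survivor."""
    return np.sum(days <= cutoff) / days.size

for cutoff in (30, 60, 120, 365):
    print(f"{cutoff:>3}-day mortality: {mortality_at(cutoff):.1%}")
```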
Practice Benchmarking in the Age of Targeted Auditing
Langdale, Ryan P.; Holland, Ben F.
2012-01-01
The frequency and sophistication of health care reimbursement auditing has progressed rapidly in recent years, leaving many oncologists wondering whether their private practices would survive a full-scale Office of the Inspector General (OIG) investigation. The Medicare Part B claims database provides a rich source of information for physicians seeking to understand how their billing practices measure up to their peers, both locally and nationally. This database was dissected by a team of cancer specialists to uncover important benchmarks related to targeted auditing. All critical Medicare charges, payments, denials, and service ratios in this article were derived from the full 2010 Medicare Part B claims database. Relevant claims were limited by using Medicare provider specialty codes 83 (hematology/oncology) and 90 (medical oncology), with an emphasis on claims filed from the physician office place of service (11). All charges, denials, and payments were summarized at the Current Procedural Terminology code level to drive practice benchmarking standards. A careful analysis of this data set, combined with the published audit priorities of the OIG, produced germane benchmarks from which medical oncologists can monitor, measure and improve on common areas of billing fraud, waste or abuse in their practices. Part II of this series and analysis will focus on information pertinent to radiation oncologists. PMID:23598847
Lance, Blake W.; Smith, Barton L.
2016-06-23
Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as as-built test section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper. As a result, the latter two are available for download and the other details are included in this work.
78 FR 2713 - Update to NEPA Implementing Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-14
.../Research/FinalFRA_HSR_Strat_Plan.pdf. Some of the proposed CEs were chosen from the list of categorical... list of comparative benchmarks or similar CEs currently employed by other Federal agencies. After a... "such as" with "examples may include but are not limited to" for all of the CEs. The purpose of the...
Space proton transport in one dimension
NASA Technical Reports Server (NTRS)
Lamkin, S. L.; Khandelwal, G. S.; Shinn, J. L.; Wilson, J. W.
1994-01-01
An approximate evaluation procedure is derived for a second-order theory of coupled nucleon transport in one dimension. An analytical solution with a simplified interaction model is used to determine quadrature parameters to minimize truncation error. Effects of the improved method on transport solutions with the BRYNTRN data base are evaluated. Comparisons with Monte Carlo benchmarks are given. Using different shield materials, the computational procedure is used to study the physics of space protons. A transition effect occurs in tissue near the shield interface and is most important in shields of high atomic number.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Donald L.; Hilohi, C. Michael; Spelic, David C.
2012-10-15
Purpose: To determine patient radiation doses from interventional cardiology procedures in the U.S. and to suggest possible initial values for U.S. benchmarks for patient radiation dose from selected interventional cardiology procedures [fluoroscopically guided diagnostic cardiac catheterization and percutaneous coronary intervention (PCI)]. Methods: Patient radiation dose metrics were derived from analysis of data from the 2008 to 2009 Nationwide Evaluation of X-ray Trends (NEXT) survey of cardiac catheterization. This analysis used deidentified data and did not require review by an IRB. Data from 171 facilities in 30 states were analyzed. The distributions (percentiles) of radiation dose metrics were determined for diagnostic cardiac catheterizations, PCI, and combined diagnostic and PCI procedures. Confidence intervals for these dose distributions were determined using bootstrap resampling. Results: Percentile distributions (advisory data sets) and possible preliminary U.S. reference levels (based on the 75th percentile of the dose distributions) are provided for cumulative air kerma at the reference point (K_a,r), cumulative air kerma-area product (P_KA), fluoroscopy time, and number of cine runs. Dose distributions are sufficiently detailed to permit dose audits as described in National Council on Radiation Protection and Measurements Report No. 168. Fluoroscopy times are consistent with those observed in European studies, but P_KA is higher in the U.S. Conclusions: Sufficient data exist to suggest possible initial benchmarks for patient radiation dose for certain interventional cardiology procedures in the U.S. Our data suggest that patient radiation dose in these procedures is not optimized in U.S. practice.
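A minimal sketch of the bootstrap step described in the Methods, estimating a 75th-percentile reference level and its confidence interval from an invented dose sample.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in P_KA sample (Gy*cm^2); lognormal values invented for illustration.
pka = rng.lognormal(mean=4.0, sigma=0.8, size=500)

# Bootstrap the 75th percentile, the usual basis for a reference level.
boot = [np.percentile(rng.choice(pka, size=pka.size, replace=True), 75)
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"75th percentile {np.percentile(pka, 75):.0f} Gy*cm^2 "
      f"(95% bootstrap CI {lo:.0f}-{hi:.0f})")
```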
Kalkan, E.; Kwong, N.
2012-01-01
The earthquake engineering profession is increasingly utilizing nonlinear response history analyses (RHA) to evaluate seismic performance of existing structures and proposed designs of new structures. One of the main ingredients of nonlinear RHA is a set of ground motion records representing the expected hazard environment for the structure. When recorded motions do not exist (as is the case in the central United States) or when high-intensity records are needed (as is the case in San Francisco and Los Angeles), ground motions from other tectonically similar regions need to be selected and scaled. The modal-pushover-based scaling (MPS) procedure was recently developed to determine scale factors for a small number of records such that the scaled records provide accurate and efficient estimates of “true” median structural responses. The adjective “accurate” refers to the discrepancy between the benchmark responses and those computed from the MPS procedure. The adjective “efficient” refers to the record-to-record variability of responses. In this paper, the accuracy and efficiency of the MPS procedure are evaluated by applying it to four types of existing Ordinary Standard bridges typical of reinforced concrete bridge construction in California. These bridges are the single-bent overpass, multi-span bridge, curved bridge, and skew bridge. As compared with benchmark analyses of unscaled records using a larger catalog of ground motions, it is demonstrated that the MPS procedure provided an accurate estimate of the engineering demand parameters (EDPs) accompanied by significantly reduced record-to-record variability of the EDPs. Thus, it is a useful tool for scaling ground motions as input to nonlinear RHAs of Ordinary Standard bridges.
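The paper's "accuracy" and "efficiency" criteria reduce to comparing the median and dispersion of EDPs from scaled records against benchmark values. A minimal sketch with invented drift ratios (not the study's data):

```python
import numpy as np

# Invented peak drift ratios (EDPs) from nonlinear RHA of one bridge model.
edp_benchmark = np.array([0.011, 0.014, 0.009, 0.016, 0.012, 0.013, 0.010])
edp_mps       = np.array([0.012, 0.013, 0.011, 0.014, 0.012, 0.013, 0.012])

def median_and_dispersion(edp):
    """Median estimate (geometric mean) and lognormal dispersion of EDPs."""
    logs = np.log(edp)
    return np.exp(logs.mean()), logs.std(ddof=1)

m_b, d_b = median_and_dispersion(edp_benchmark)
m_s, d_s = median_and_dispersion(edp_mps)
print(f"accuracy:   median ratio {m_s / m_b:.2f} (1.0 = unbiased)")
print(f"efficiency: dispersion {d_s:.2f} vs benchmark {d_b:.2f}")
```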
Orlova, A M
2015-01-01
Elements of a scientometric survey were applied to analyse the character, structure, and subject matter of articles related to toxicological (forensic) chemistry that had been published in the journal
Schubert, András
2015-11-15
Case studies and case reports form an important and ever-growing part of the scientific and scholarly literature. The paper deals with the share and citation rate of these publication types in different fields of research. In general, evidence seems to support the opinion that an excessive number of such publications may negatively influence the impact factor of a journal. In the literature of scientometrics, case studies (at least papers with the term "case study" in their titles) have a moderate share, but their citation rate is practically equal to that of other publication types.
Pharmaceutical research in the Kingdom of Saudi Arabia: A scientometric analysis during 2001–2010
Alhaider, Ibrahim; Mueen Ahmed, K.K.; Gupta, B.M.
2013-01-01
This study examines the performance of Saudi Arabia in pharmaceutical science research using quantitative and qualitative measures. It analyses the productivity, global publication share and rank of the top 15 countries. The authors study Saudi Arabia's publication output, growth and citation quality, its international collaborative publication share and most important collaborating partners, the contribution and citation impact of its top 15 organizations and authors, the productivity patterns of its top publishing journals, and the characteristics of its highly cited papers. PMID:26106268
Benchmarking Outcomes in the Critically Injured Burn Patient
Klein, Matthew B.; Goverman, Jeremy; Hayden, Douglas L.; Fagan, Shawn P.; McDonald-Smith, Grace P.; Alexander, Andrew K.; Gamelli, Richard L.; Gibran, Nicole S.; Finnerty, Celeste C.; Jeschke, Marc G.; Arnoldo, Brett; Wispelwey, Bram; Mindrinos, Michael N.; Xiao, Wenzhong; Honari, Shari E.; Mason, Philip H.; Schoenfeld, David A.; Herndon, David N.; Tompkins, Ronald G.
2014-01-01
Objective To determine and compare outcomes with accepted benchmarks in burn care at six academic burn centers. Background Since the 1960s, U.S. morbidity and mortality rates have declined tremendously for burn patients, likely related to improvements in surgical and critical care treatment. We describe the baseline patient characteristics and well-defined outcomes for major burn injuries. Methods We followed 300 adults and 241 children from 2003 to 2009 through hospitalization, using standard operating procedures developed at study onset. We created an extensive database on patient and injury characteristics, anatomic and physiological derangement, clinical treatment, and outcomes. These data were compared with existing benchmarks in burn care. Results Study patients were critically injured, as demonstrated by mean %TBSA (41.2±18.3 for adults and 57.8±18.2 for children) and the presence of inhalation injury in 38% of the adults and 54.8% of the children. Mortality in adults was 14.1% for those less than 55 years old and 38.5% for those aged ≥55 years. Mortality in patients less than 17 years old was 7.9%. Overall, the multiple organ failure rate was 27%. When controlling for age and %TBSA, presence of inhalation injury was not significant. Conclusions This study provides the current benchmark for major burn patients. Mortality rates, notwithstanding significant %TBSA and presence of inhalation injury, have declined significantly compared with previous benchmarks. Modern surgical and intensive medical management has improved to the point where patients less than 55 years old with severe burn injuries and inhalation injury can be expected to survive these devastating conditions. PMID:24722222
Laparoscopic recurrent inguinal hernia repair during the learning curve: it can be done?
Bracale, Umberto; Sciuto, Antonio; Andreuccetti, Jacopo; Merola, Giovanni; Pecchia, Leandro; Melillo, Paolo; Pignata, Giusto
2017-01-01
Trans-abdominal preperitoneal patch (TAPP) repair for recurrent hernia (RH) is a technically demanding procedure that should be performed only by surgeons with extensive experience in the laparoscopic approach. The purpose of this study was to evaluate the surgical safety and efficacy of TAPP for RH performed in a tutoring program by surgeons in practice (SP). All TAPP repairs for RH performed by the same surgical team were included in the study. We evaluated the results of three SP during their learning curve in a tutoring program and compared them with those of a highly experienced laparoscopic surgeon (benchmark). A total of 530 TAPP repairs were performed, 83 of them for RH: 43 by the benchmark surgeon and 40 by the SP. When the outcomes of the benchmark surgeon were compared with those of the SP, no significant difference was observed in morbidity or recurrence, while operative time was significantly longer for the SP. No intraoperative complications occurred. International guidelines urge that TAPP repair for RH be performed only by surgeons with extensive experience in the laparoscopic approach. The results of the present study demonstrate that TAPP for RH can also be performed by surgeons in training during a learning program. We believe that an adequate tutoring program can lead a surgeon in practice to perform more complex hernia procedures without jeopardizing patient safety throughout the learning curve period. Keywords: laparoscopy, learning curve, recurrent hernia.
Internet-based monitoring and benchmarking in ambulatory surgery centers.
Bovbjerg, V E; Olchanski, V; Zimberg, S E; Green, J S; Rossiter, L F
2000-08-01
Each year the number of surgical procedures performed on an outpatient basis increases, yet relatively little is known about assessing and improving quality of care in ambulatory surgery. Conventional methods for evaluating outcomes, which are based on assessment of inpatient services, are inadequate in the rapidly changing, geographically dispersed field of ambulatory surgery. Internet-based systems for improving outcomes and establishing benchmarks may be feasible and timely. Eleven freestanding ambulatory surgery centers (ASCs) reported process and outcome data for 3,966 outpatient surgical procedures to an outcomes monitoring system (OMS), during a demonstration period from April 1997 to April 1999. ASCs downloaded software and protocol manuals from the OMS Web site. Centers securely submitted clinical information on perioperative process and outcome measures and postoperative patient telephone interviews. Feedback to centers ranged from current and historical rates of surgical and postsurgical complications to patient satisfaction and the adequacy of postsurgical pain relief. ASCs were able to successfully implement the data collection protocols and transmit data to the OMS. Data security efforts were successful in preventing the transmission of patient identifiers. Feedback reports to ASCs were used to institute changes in ASC staffing, patient care, and patient education, as well as for accreditation and marketing. The demonstration also pointed out shortcomings in the OMS, such as the need to simplify hardware and software installation as well as data collection and transfer methods, which have been addressed in subsequent OMS versions. Internet-based benchmarking for geographically dispersed outpatient health care facilities, such as ASCs, is feasible and likely to play a major role in this effort.
Construct Validity of Fresh Frozen Human Cadaver as a Training Model in Minimal Access Surgery
Macafee, David; Pranesh, Nagarajan; Horgan, Alan F.
2012-01-01
Background: The construct validity of fresh human cadaver as a training tool has not been established previously. The aims of this study were to investigate the construct validity of fresh frozen human cadaver as a method of training in minimal access surgery and to determine whether novices can be rapidly trained using this model to a safe level of performance. Methods: Junior surgical trainees who were novices in laparoscopic surgery (<3 laparoscopic procedures performed) performed 10 repetitions of a set of structured laparoscopic tasks on fresh frozen cadavers. Expert laparoscopists (>100 laparoscopic procedures) performed 3 repetitions of identical tasks. Performances were scored using the validated, objective Global Operative Assessment of Laparoscopic Skills scale. Scores for 3 consecutive repetitions were compared between experts and novices to determine construct validity. Furthermore, to determine whether the novices reached a safe level, a trimmed mean of the experts' scores was used to define a benchmark. The Mann-Whitney U test was used for the construct validity analysis and a 1-sample t test to compare the performance of the novice group with the benchmark safe score. Results: Ten novices and 2 experts were recruited. Four out of 5 tasks (nondominant-to-dominant hand transfer; simulated appendicectomy; intracorporeal and extracorporeal knot tying) showed construct validity. Novices' scores became comparable to benchmark scores between the eighth and tenth repetitions. Conclusion: Minimal access surgical training using fresh frozen human cadavers appears to have construct validity. The laparoscopic skills of novices can be accelerated to a safe level within 8 to 10 repetitions. PMID:23318058
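A minimal sketch of the statistical comparison described above: a trimmed mean of expert scores defines the benchmark, and a one-sample t-test compares novice scores against it. Scores are invented; the trimming fraction here is an illustrative assumption.

```python
import numpy as np
from scipy import stats

# Invented GOALS-style scores; not the study's data.
expert_scores = np.array([38, 41, 44, 39, 42, 40])
novice_rep8   = np.array([35, 37, 40, 36, 39, 38, 41, 36, 37, 38])

# Benchmark: trimmed mean of expert scores (20% cut from each tail here).
benchmark = stats.trim_mean(expert_scores, proportiontocut=0.2)

# One-sample t-test of novice scores at a given repetition vs the benchmark.
t, p = stats.ttest_1samp(novice_rep8, popmean=benchmark)
print(f"benchmark {benchmark:.1f}, novice mean {novice_rep8.mean():.1f}, p = {p:.3f}")
```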
Brainstorming: weighted voting prediction of inhibitors for protein targets.
Plewczynski, Dariusz
2011-09-01
The "Brainstorming" approach presented in this paper is a weighted voting method that can improve the quality of predictions generated by several machine learning (ML) methods. First, an ensemble of heterogeneous ML algorithms is trained on available experimental data, then all solutions are gathered and a consensus is built between them. The final prediction is performed using a voting procedure, whereby the vote of each method is weighted according to a quality coefficient calculated using multivariable linear regression (MLR). The MLR optimization procedure is very fast, therefore no additional computational cost is introduced by using this jury approach. Here, brainstorming is applied to selecting actives from large collections of compounds relating to five diverse biological targets of medicinal interest, namely HIV-reverse transcriptase, cyclooxygenase-2, dihydrofolate reductase, estrogen receptor, and thrombin. The MDL Drug Data Report (MDDR) database was used for selecting known inhibitors for these protein targets, and experimental data was then used to train a set of machine learning methods. The benchmark dataset (available at http://bio.icm.edu.pl/~darman/chemoinfo/benchmark.tar.gz ) can be used for further testing of various clustering and machine learning methods when predicting the biological activity of compounds. Depending on the protein target, the overall recall value is raised by at least 20% in comparison to any single machine learning method (including ensemble methods like random forest) and unweighted simple majority voting procedures.
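A minimal sketch of the weighted-voting idea, assuming each base method outputs a score per compound and the weights come from an ordinary least-squares fit on labelled data (a stand-in for the paper's MLR step; all numbers are invented).

```python
import numpy as np

# Invented validation-set scores from three base ML methods (columns)
# for four compounds (rows), plus known activity labels.
preds = np.array([[0.9, 0.6, 0.8],
                  [0.2, 0.4, 0.1],
                  [0.7, 0.8, 0.6],
                  [0.1, 0.3, 0.2]])
y = np.array([1, 0, 1, 0])

# Weight each method via least-squares linear regression (the MLR step);
# an intercept column is prepended.
X = np.column_stack([np.ones(len(y)), preds])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def consensus(scores):
    """Weighted-vote consensus score for one new compound."""
    return w[0] + scores @ w[1:]

print(f"{consensus(np.array([0.8, 0.7, 0.9])):.2f}")  # near 1 => predicted active
```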
NASA Astrophysics Data System (ADS)
Moriarty, Patrick; Sanz Rodrigo, Javier; Gancarski, Pawel; Chuchfield, Matthew; Naughton, Jonathan W.; Hansen, Kurt S.; Machefaux, Ewan; Maguire, Eoghan; Castellani, Francesco; Terzi, Ludovico; Breton, Simon-Philippe; Ueda, Yuko
2014-06-01
Researchers within the International Energy Agency (IEA) Task 31: Wakebench have created a framework for the evaluation of wind farm flow models operating at the microscale level. The framework consists of a model evaluation protocol integrated with a web-based portal for model benchmarking (www.windbench.net). This paper provides an overview of the building-block validation approach applied to wind farm wake models, including best practices for the benchmarking and data processing procedures for validation datasets from wind farm SCADA and meteorological databases. A hierarchy of test cases has been proposed for wake model evaluation, from similarity theory of the axisymmetric wake and idealized infinite wind farm, to single-wake wind tunnel (UMN-EPFL) and field experiments (Sexbierum), to wind farm arrays in offshore (Horns Rev, Lillgrund) and complex terrain conditions (San Gregorio). A summary of results from the axisymmetric wake, Sexbierum, Horns Rev and Lillgrund benchmarks are used to discuss the state-of-the-art of wake model validation and highlight the most relevant issues for future development.
Brüggmann, Dörthe; Kollascheck, Jana; Quarcoo, David; Bendels, Michael H; Klingelhöfer, Doris; Louwen, Frank; Jaque, Jenny M; Groneberg, David A
2017-01-01
Objective: About 2% of all pregnancies are complicated by implantation of the zygote outside the uterine cavity, termed ectopic pregnancy. Whereas a multitude of guidelines exists and related research is constantly growing, no thorough assessment of the global research architecture has been performed yet. Hence, we aim to assess the associated scientific activities in relation to geographical and chronological developments, existing research networks and socioeconomic parameters. Design: Retrospective, descriptive study. Setting: On the basis of the NewQIS platform, scientometric methods were combined with novel visualising techniques such as density-equalising mapping to assess the scientific output on ectopic pregnancy. Using the Web of Science, we identified all related entries from 1900 to 2012. Results: 8040 publications were analysed. The USA and the UK dominated the field with regard to overall research activity (2612 and 723 publications), overall citation numbers and country-specific H-indices (USA: 80, UK: 42). Comparison with the economic power of the most productive countries demonstrated that Israel invested more resources in ectopic pregnancy-related research than other nations (853.41 ectopic pregnancy-specific publications per 1000 billion US$ gross domestic product (GDP)), followed by the UK (269.97). Relation to the GDP per capita index revealed 49.3 ectopic pregnancy-specific publications per US$1000 GDP per capita for the USA, in contrast to 17.31 for the UK. Semiqualitative indices such as country-specific citation rates ranked Switzerland first (24.7 citations per ectopic pregnancy-specific publication), followed by the Scandinavian countries Finland and Sweden. Low-income countries did not exhibit significant research activities. Conclusions: This is the first in-depth analysis of global ectopic pregnancy research since 1900. It offers unique insights into the global scientific landscape. Besides the USA and the UK, Scandinavian countries and Switzerland can also be regarded as leading nations with regard to their relative socioeconomic input. PMID:29025848
Brüggmann, Dörthe; Köster, Corinna; Klingelhöfer, Doris; Bauer, Jan; Ohlendorf, Daniela; Bundschuh, Matthias; Groneberg, David A
2017-07-26
Worldwide, the respiratory syncytial virus (RSV) represents the predominant viral agent causing bronchiolitis and pneumonia in children. To conduct research and tackle existing healthcare disparities, RSV-related research activities around the globe need to be described. Hence, we assessed the associated scientific output (represented by research articles) by geographical, chronological and socioeconomic criteria and analysed the authors publishing in the field by gender. Also, the 15 most cited articles and the most prolific journals were identified for RSV research. Retrospective, descriptive study. The NewQIS (New Quality and Quantity Indices in Science) platform was employed to identify RSV-related articles published in the Web of Science until 2013. We performed a numerical analysis of all articles and examined citation-based aspects (eg, citation rates); results were visualised by density-equalising mapping tools. We identified 4600 RSV-related articles. The USA led the field; US authors published 2139 articles (46.5% of all identified articles), which have been cited 83 000 times. When output was related to socioeconomic benchmarks such as gross domestic product or Research and Development expenditures, Guinea-Bissau, The Gambia and Chile ranked in the leading positions. A total of 614 articles on RSV (13.34% of all articles) were attributed to scientific collaborations, primarily established between high-income countries. The gender analysis indicated that male scientists dominated in all countries except Brazil. The majority of RSV-related research articles originated from high-income countries, whereas developing nations showed only minimal publication productivity and were barely part of any collaborative networks. Hence, research capacity in these nations should be increased in order to assist in addressing inequities in resource allocation and the clinical burden of RSV in these countries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
An automated benchmarking platform for MHC class II binding prediction methods.
Andreatta, Massimo; Trolle, Thomas; Yan, Zhen; Greenbaum, Jason A; Peters, Bjoern; Nielsen, Morten
2018-05-01
Computational methods for the prediction of peptide-MHC binding have become an integral and essential component for candidate selection in experimental T cell epitope discovery studies. The sheer amount of published prediction methods-and often discordant reports on their performance-poses a considerable quandary to the experimentalist who needs to choose the best tool for their research. With the goal to provide an unbiased, transparent evaluation of the state-of-the-art in the field, we created an automated platform to benchmark peptide-MHC class II binding prediction tools. The platform evaluates the absolute and relative predictive performance of all participating tools on data newly entered into the Immune Epitope Database (IEDB) before they are made public, thereby providing a frequent, unbiased assessment of available prediction tools. The benchmark runs on a weekly basis, is fully automated, and displays up-to-date results on a publicly accessible website. The initial benchmark described here included six commonly used prediction servers, but other tools are encouraged to join with a simple sign-up procedure. Performance evaluation on 59 data sets composed of over 10 000 binding affinity measurements suggested that NetMHCIIpan is currently the most accurate tool, followed by NN-align and the IEDB consensus method. Weekly reports on the participating methods can be found online at: http://tools.iedb.org/auto_bench/mhcii/weekly/. mniel@bioinformatics.dtu.dk. Supplementary data are available at Bioinformatics online.
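Conceptually, the weekly benchmark boils down to scoring every participating server on the same newly released measurements and ranking them. A toy sketch using AUC via scikit-learn (server names and numbers invented; the platform's actual report also uses other metrics):

```python
from sklearn.metrics import roc_auc_score

# Invented weekly data: measured binders (1) / non-binders (0) and the
# scores returned by two hypothetical participating servers.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
server_scores = {
    "server_a": [0.9, 0.2, 0.8, 0.7, 0.3, 0.1, 0.6, 0.4],
    "server_b": [0.8, 0.3, 0.7, 0.6, 0.4, 0.2, 0.5, 0.5],
}

# Rank the tools by AUC on the new data set, as a weekly report would.
for auc, name in sorted(((roc_auc_score(y_true, s), n)
                         for n, s in server_scores.items()), reverse=True):
    print(f"{name}: AUC = {auc:.3f}")
```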
Source-term development for a contaminant plume for use by multimedia risk assessment models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.
1999-12-01
Multimedia modelers from the U.S. Environmental Protection Agency (EPA) and the U.S. Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: DOE's Multimedia Environmental Pollutant Assessment System (MEPAS), EPA's MMSOILS, EPA's PRESTO, and DOE's RESidual RADioactivity (RESRAD). These models represent typical analytically, semi-analytically, and empirically based tools that are utilized in human risk and endangerment assessments at installations containing radioactive and/or hazardous contaminants. Although the benchmarking exercise traditionally emphasizes the application and comparison of these models, the establishment of a Conceptual Site Model (CSM) should be viewed with equal importance. This paper reviews an approach for developing a CSM of an existing, real-world Sr-90 plume at DOE's Hanford installation in Richland, Washington, for use in a multimedia-based benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. In an unconventional move for analytically based modeling, the benchmarking exercise will begin with the plume as the source of contamination. The source and release mechanism are developed and described within the context of performing a preliminary risk assessment utilizing these analytical models. By beginning with the plume as the source term, this paper reviews a typical process and procedure an analyst would follow in developing a CSM for use in a preliminary assessment using this class of analytical tool.
Diabetes research in Middle East countries; a scientometrics study from 1990 to 2012.
Peykari, Niloofar; Djalalinia, Shirin; Kasaeian, Amir; Naderimagham, Shohreh; Hasannia, Tahereh; Larijani, Bagher; Farzadfar, Farshad
2015-03-01
The burden of diabetes is a serious warning calling for an urgent action plan across the world. Knowledge production in this context can provide evidence for more efficient interventions. To that end, we quantified the trend of diabetes research output of Middle East countries, focusing on scientific publication numbers, citations, and international collaboration. This scientometric study was performed as a systematic analysis of three international databases, ISI, PubMed, and Scopus, from 1990 to 2012. International collaboration of Middle East countries and citations were analyzed based on Scopus. Diabetes publications in Iran were specifically assessed, and frequently used terms were mapped with the VOSviewer software. Over the 23-year period, the number of diabetes publications and related citations in Middle East countries showed an increasing trend. The numbers of articles on diabetes in ISI, PubMed, and Scopus were 13,994, 11,336, and 20,707, respectively. Turkey, Israel, Iran, Saudi Arabia, and Egypt occupied the top five positions. In addition, Israel, Turkey, and Iran were the leading countries in the citation analysis. The country most collaborative with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Iran stands in third position in all databases and produced 12.7% of diabetes publications within the region. The most frequently used terms in Iranian diabetes articles were "effect," "woman," and "metabolic syndrome." The ascending trend of diabetes research output in Middle East countries is appreciated, but strategic planning to maintain this trend and more collaboration between researchers are needed for regional health promotion.
Mustard gas exposure in Iran-Iraq war - A scientometric study.
Nokhodian, Zary; ZareFarashbandi, Firoozeh; Shoaei, Parisa
2015-01-01
The Iranian victims of the sulfur mustard attacks are now more than 20 years post-exposure and form a valuable cohort for studying the chronic effects of exposure to sulfur mustard. Articles on sulfur mustard exposure in the Iran-Iraq war were reviewed using three international databases: Scopus, Medline, and ISI. The objectives of the study were to measure the distribution of publications by author, year, and subject area, and to assess highly cited articles. We searched the three databases, Scopus, Medline, and the Institute for Scientific Information (ISI), for articles related to mustard gas exposure in the Iran-Iraq war published between 1988 and 2012. The results were analyzed using scientometric methods. During the 24 years under examination, about 90 papers were published in the field of mustard gas in the Iran-Iraq war. The original article was the most used document type, forming 51.4% of all publications. The number of articles devoted to mustard gas and Iran-Iraq war research increased more than 10-fold, from 1 in 1988 to 11 in 2011. Most of the published articles (45.7%) comprised clinical and paraclinical investigations of sulfur mustard in Iranian victims. The most productive author was Ghanei, who occupied the first rank in the number of publications with 20 papers. Most of the researchers were affiliated with Baqiyatallah University of Medical Sciences (research center of chemical injuries and dermatology department) in Iran. This article highlights the quantitative share of Iran in articles on sulfur mustard and lays the groundwork for further research on various aspects of related problems.
Carbajal-de-la-Fuente, Ana Laura; Yadón, Zaida E.
2013-01-01
The Special Programme for Research and Training in Tropical Diseases (TDR) is an independent global programme of scientific collaboration cosponsored by the United Nations Children's Fund, the United Nations Development Program, the World Bank, and the World Health Organization. TDR's strategy is based on stewardship for research on infectious diseases of poverty, empowerment of endemic countries, research on neglected priority needs, and the promotion of scientific collaboration influencing global efforts to combat major tropical diseases. In 2001, in view of the achievements obtained in the reduction of transmission of Chagas disease through the Southern Cone Initiative and the improvement in Chagas disease control activities in some countries of the Andean and Central American Initiatives, TDR transferred the Chagas Disease Implementation Research Programme (CIRP) to the Communicable Diseases Unit of the Pan American Health Organization (CD/PAHO). This paper presents a scientometric evaluation of the 73 projects from 18 Latin American and European countries that were funded by CIRP/PAHO/TDR between 1997 and 2007. We analyzed the final reports of all funded projects and the scientific publications, technical reports, and human resource training activities derived from them. Results on the number of projects funded, countries and institutions involved, gender analysis, number of papers published in indexed scientific journals, main topics funded, patents registered, and triatomine species studied are presented and discussed. The results indicate that the CIRP/PAHO/TDR initiative contributed significantly, over the 1997–2007 period, to Chagas disease knowledge as well as to individual and institutional capacity building. PMID:24244761
Evaluating the current state of the art of Huntington disease research: a scientometric analysis
Barboza, L.A.; Ghisi, N.C.
2018-01-01
Huntington disease (HD) is an incurable neurodegenerative disorder caused by a dominant mutation on chromosome 4. We present a scientometric analysis of the scientific work devoted to better understanding HD. A quantitative study was performed to examine the current state of the art and to identify current knowledge, research trends, and research gaps regarding this disorder. We searched for articles published up to September 2016 in the ISI Web of Science™ (http://apps.webofknowledge.com/), using the keyword "Huntington disease". Of the initial 14,036 articles obtained, 7,732 were eligible for inclusion according to their relevance. Data were classified by language, country of publication, year, and area of concentration. The leading country in number of published HD studies is the United States, accounting for nearly 30% of all publications, followed by England and Germany, with 10% and 7% of all publications, respectively. Regarding language, 98% of publications were written in English. The first publication found on HD dates from 1974, and a surge of publications can be seen from 1996 onward. In relation to knowledge areas, most publications were in the fields of neuroscience and neurology, as expected for a neurodegenerative disorder. Publications in psychiatry, genetics, and molecular biology also predominated. PMID:29340519
Scientometric analyses of studies on the role of innate variation in athletic performance.
Lombardo, Michael P; Emiah, Shadie
2014-01-01
Historical events have produced an ideologically charged atmosphere in the USA surrounding the potential influences of innate variation on athletic performance. We tested the hypothesis that scientific studies of the role of innate variation in athletic performance were less likely to have authors with USA addresses than addresses elsewhere because of this cultural milieu. Using scientometric data collected from 290 scientific papers published in peer-reviewed journals from 2000-2012, we compared the proportions of authors with USA addresses with those that listed addresses elsewhere that studied the relationships between athletic performance and (a) prenatal exposure to androgens, as indicated by the ratio between digits 2 and 4, and (b) the genotypes for angiotensin converting enzyme, α-actinin-3, and myostatin; traits often associated with athletic performance. Authors with USA addresses were disproportionately underrepresented on papers about the role of innate variation in athletic performance. We searched NIH and NSF databases for grant proposals solicited or funded from 2000-2012 to determine if the proportion of authors that listed USA addresses was associated with funding patterns. NIH did not solicit grant proposals designed to examine these factors in the context of athletic performance and neither NIH nor NSF funded grants designed to study these topics. We think the combined effects of a lack of government funding and the avoidance of studying controversial or non-fundable topics by USA based scientists are responsible for the observation that authors with USA addresses were underrepresented on scientific papers examining the relationships between athletic performance and innate variation.
NASA Astrophysics Data System (ADS)
Jiang, J.; Kaloti, A. P.; Levinson, H. R.; Nguyen, N.; Puckett, E. G.; Lokavarapu, H. V.
2016-12-01
We present the results of three standard benchmarks for the new active tracer particle algorithm in ASPECT. The three benchmarks are SolKz, SolCx, and SolVI (also known as the 'inclusion benchmark'), first proposed by Duretz, May, Gerya, and Tackley (G Cubed, 2011) and in subsequent work by Thielmann, May, and Kaus (Pure and Applied Geophysics, 2014). Each of the three benchmarks compares the accuracy of the numerical solution to a steady (time-independent) solution of the incompressible Stokes equations with a known exact solution. These benchmarks are specifically designed to test the accuracy and effectiveness of the numerical method when the viscosity varies by up to six orders of magnitude. ASPECT has been shown to converge to the exact solution of each of these benchmarks at the correct design rate when all of the flow variables, including the density and viscosity, are discretized on the underlying finite element grid (Kronbichler, Heister, and Bangerth, GJI, 2012). In our work we discretize the density and viscosity by initially placing the true values of the density and viscosity at the initial particle positions. At each time step, including the initialization step, the density and viscosity are interpolated from the particles onto the finite element grid. The resulting Stokes system is solved for the velocity and pressure, and the particle positions are advanced in time according to this new, numerical, velocity field. Note that this procedure effectively changes a steady solution of the Stokes equation (i.e., one that is independent of time) to a solution of the Stokes equations that is time dependent. Furthermore, the accuracy of the active tracer particle algorithm now also depends on the accuracy of the interpolation algorithm and of the numerical method used to advance the particle positions in time. Finally, we will present new interpolation algorithms designed to increase the overall accuracy of the active tracer algorithms in ASPECT, and interpolation algorithms designed to conserve properties, such as mass density, that are carried by the particles.
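For benchmarks of this kind, the headline number is the observed convergence rate of the error norm under grid refinement. A minimal sketch, with invented error values standing in for the SolCx/SolKz/SolVI results:

```python
import numpy as np

# Invented L2 velocity errors from a refinement series of a SolCx-type run
# with particle-interpolated viscosity; not actual ASPECT output.
h   = np.array([1/16, 1/32, 1/64, 1/128])     # mesh sizes
err = np.array([2.1e-4, 5.4e-5, 1.4e-5, 3.6e-6])

# Fit log(err) = p*log(h) + c; the slope p is the observed convergence rate.
p, _ = np.polyfit(np.log(h), np.log(err), 1)
print(f"observed convergence rate: {p:.2f}")  # compare against the design rate
```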
Wirtz, Veronika J; Santa-Ana-Tellez, Yared; Trout, Clinton H; Kaplan, Warren A
2012-12-01
Public sector price analyses of antiretroviral (ARV) medicines can provide relevant information to detect ARV procurement procedures that do not obtain competitive market prices. Price benchmarks provide a useful tool for programme managers and policy makers to support such planning and policy measures. The aim of the study was to develop regional and global price benchmarks which can be used to analyse public-sector price variability of ARVs in low- and middle-income countries, using the procurement prices of Latin America and the Caribbean (LAC) countries in 2008 as an example. We used the Global Price Reporting Mechanism (GPRM) database, provided by the World Health Organization (WHO), covering ARV procurements in 13 LAC countries, to analyse the procurement prices of four first-line and three second-line ARV combinations in 2008. First, a cross-sectional analysis was conducted to compare ARV combination prices. Second, four different price 'benchmarks' were created, and we estimated the additional number of patients who could have been treated in each country if the ARV combinations studied were purchased at the various reference ('benchmark') prices. Large price variations exist for first- and second-line ARV combinations between countries in the LAC region. Most countries in the LAC region could be treating between 1.17 and 3.8 times more patients if procurement prices were closer to the lowest regional generic price. For all second-line combinations, a price closer to the lowest regional innovator prices or to the global median transaction price for lower-middle-income countries would also result in treating up to nearly five times more patients. Rational allocation of financial resources, supported by price benchmarking and careful planning by policy makers and programme managers, can assist a country in negotiating lower ARV procurement prices and should form part of a sustainable procurement policy.
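The patient-coverage arithmetic behind these benchmarks is simple: at a fixed budget, the number of treatable patients scales with the ratio of the price paid to the benchmark price. A sketch with hypothetical prices:

```python
# Hedged illustration of the benchmarking arithmetic: with a fixed budget,
# treatable patients scale inversely with the price paid. All prices are
# hypothetical placeholders, not GPRM data.
procurement = {                      # country -> price paid per patient-year (USD)
    "Country A": 420.0,
    "Country B": 265.0,
    "Country C": 610.0,
}
benchmark_price = 210.0              # e.g., lowest regional generic price

for country, paid in procurement.items():
    extra_factor = paid / benchmark_price
    print(f"{country}: could treat {extra_factor:.2f}x more patients "
          f"at the benchmark price")
```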
Pseudo-updated constrained solution algorithm for nonlinear heat conduction
NASA Technical Reports Server (NTRS)
Tovichakchaikul, S.; Padovan, J.
1983-01-01
This paper develops efficiency and stability improvements in the incremental successive substitution (ISS) procedure commonly used to generate the solution to nonlinear heat conduction problems. This is achieved by employing the pseudo-update scheme of Broyden, Fletcher, Goldfarb and Shanno in conjunction with the constrained version of the ISS. The resulting algorithm retains the formulational simplicity associated with ISS schemes while incorporating the enhanced convergence properties of slope driven procedures as well as the stability of constrained approaches. To illustrate the enhanced operating characteristics of the new scheme, the results of several benchmark comparisons are presented.
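For readers unfamiliar with the two ingredients, the sketch below contrasts plain successive substitution (lagging the conductivity) with a Broyden-type quasi-Newton solve on a small 1D nonlinear conduction problem. It illustrates the general idea only, not the paper's constrained, BFGS-updated algorithm:

```python
# Sketch, assuming k(T) = 1 + 0.5*T on a 1D rod with T(0)=0, T(1)=1:
# successive substitution freezes k at the previous iterate and solves the
# resulting linear system; a quasi-Newton (Broyden) solve drives the full
# nonlinear residual to zero.
import numpy as np
from scipy.optimize import broyden1

n = 51                                    # grid points, T(0) = 0, T(1) = 1
k = lambda T: 1.0 + 0.5 * T               # temperature-dependent conductivity

def solve_lagged(T):
    """One successive-substitution step: freeze k(T), solve the linear system."""
    kf = k(0.5 * (T[:-1] + T[1:]))        # face conductivities from lagged T
    A = np.zeros((n - 2, n - 2))
    b = np.zeros(n - 2)
    for i in range(n - 2):
        A[i, i] = -(kf[i] + kf[i + 1])
        if i > 0:
            A[i, i - 1] = kf[i]
        if i < n - 3:
            A[i, i + 1] = kf[i + 1]
    b[-1] = -kf[-1] * 1.0                 # right boundary T(1) = 1
    Tn = T.copy()
    Tn[1:-1] = np.linalg.solve(A, b)
    return Tn

T = np.linspace(0.0, 1.0, n)
for it in range(100):                     # plain incremental successive substitution
    Tn = solve_lagged(T)
    if np.max(np.abs(Tn - T)) < 1e-10:
        break
    T = Tn
print("successive substitution iterations:", it + 1)

def residual(Ti):                         # interior residual for the quasi-Newton solve
    Tf = np.concatenate(([0.0], Ti, [1.0]))
    kf = k(0.5 * (Tf[:-1] + Tf[1:]))
    return np.diff(kf * np.diff(Tf))

Tb = broyden1(residual, np.linspace(0.0, 1.0, n)[1:-1], f_tol=1e-9)
print("max |ISS - Broyden| =", np.max(np.abs(T[1:-1] - Tb)))
```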
Human Benchmarking of Expert Systems. Literature Review
1990-01-01
effectiveness of the development procedures used in order to predict whether the application of similar approaches will likely have effective and...they used in their learning and problem solving. We will describe these approaches later. Reasoning. Reasoning usually includes inference. Because to ... in the software engineering process. For example, existing approaches to software evaluation in the military are based on a model of conventional
Benchmarking GNU Radio Kernels and Multi-Processor Scheduling
2013-01-14
AMD E350 APU, comparable to Atom • ARM Cortex A8 running on a Gumstix Overo on an Ettus USRP E110. The general testing procedure consists of • Build...Intel Atom, and the AMD E350 APU. 3.2 Multi-Processor Scheduling. Figure 1: GFLOPs per second through an FFT array on an Intel i7. Example output from
Using business intelligence to manage supply costs.
Bunata, Ernest
2013-08-01
Business intelligence tools can help materials managers and managers in the operating room and procedural areas track purchasing costs more precisely and determine the root causes of cost increases. Data can be shared with physicians to increase their awareness of the cost of physician preference items. Proper use of business intelligence goes beyond price benchmarking to manage price performance over time.
Xiao, Fengjun; Li, Chengzhi; Sun, Jiangman; Zhang, Lianjie
2017-01-01
To study the rapid growth of research on organic photovoltaic (OPV) technology, development trends in the relevant research are analyzed with CiteSpace, a software tool for text mining and visualization of the scientific literature. With this analytical method, the outputs and cooperation of authors, the hot research topics, the vital references, and the development trend of OPV are identified and visualized. Unlike traditional review articles written by OPV experts, this work provides a new way of quantitatively visualizing how OPV technology research has developed over the past decade.
Study of Scientific Production of Community Medicines' Department Indexed in ISI Citation Databases.
Khademloo, Mohammad; Khaseh, Ali Akbar; Siamian, Hasan; Aligolbandi, Kobra; Latifi, Mahsoomeh; Yaminfirooz, Mousa
2016-10-01
In scientometrics, the main criteria for determining the scientific position and ranking of research centers, particularly universities, are the rate of scientific production and innovation and the degree of participation in global scientific development. Medical science is among the fields most strongly tied to science and technology and most influential on the improvement of health. In this research, using scientometric and citation analysis, we studied the scientific output of the field of community medicine, measured as the number of articles published and indexed in the ISI database from 2000 to 2010. This is a scientometric study using survey and citation-analysis methods. The study sample included all articles in the ISI database from 2000 to 2010. Data were collected through the advanced search interface of the ISI database, and the ISI analysis software and descriptive statistics were used for data analysis. Results showed that among the five top universities producing documents, Tehran University of Medical Sciences held the first rank in scientific production with 88 documents (22.22%). With 36 published documents (9.09%), M. Askarian was the most active author in community medicine in the international arena. In collaboration with other authors, Iranian departments of community medicine had the greatest participation with English scholars, with 27 jointly published articles. The scientific output was at its lowest between 2000 and 2004, while 2009 accounted for the largest share of production. The Iranian Journal of Public Health and the Saudi Medical Journal, with 16 articles each, had the highest participation rates in publishing community medicine research. By document type, 340 (85.86%) of the outputs were published in article format. Among these outputs, the article entitled "Iron loading and erythrophagocytosis increase ferroportin 1 (FPN1) expression in J774 macrophages"(1), with 81 citations, ranked first among cited articles. Occupational health, with 70 articles, and general medicine, with 69 articles, were the most active research areas in community medicine. The data show strong growth of scientific production. Tehran University of Medical Sciences ranked first in publishing community medicine articles, collaboration was greatest with English authors, and most authors presented their work in article format.
Waveform distortion by 2-step modeling ground vibration from trains
NASA Astrophysics Data System (ADS)
Wang, F.; Chen, W.; Zhang, J.; Li, F.; Liu, H.; Chen, X.; Pan, Y.; Li, G.; Xiao, F.
2017-10-01
The 2-step procedure is widely used in numerical research on ground vibrations from trains. The ground is inconsistently represented, by a simplified model in the first step and by a refined model in the second step, which may lead to distortions in the simulation results. In order to reveal this modeling error, time histories of ground-borne vibrations were computed with the 2-step procedure and then compared with the results from a benchmark procedure applied to the whole system. All parameters involved were intentionally set equal for the two methods, ensuring that any differences in the results originated from the inconsistency of the ground model. Excited by wheel loads at low speeds such as 60 km/h and low frequencies below 8 Hz, the computed responses of the subgrade were quite close to the benchmarks. However, notable distortions were found in all loading cases at higher frequencies. Moreover, significant underestimation of intensity occurred when the load frequency equaled 16 Hz, not only at the subgrade but also at points 10 m and 20 m away from the track. When the load speed was increased to 350 km/h, all computed waveforms were distorted, including the responses to loads at very low frequencies. The modeling error found here suggests that the ground models in the 2 steps should be calibrated with respect to the frequency bands to be investigated, and that the train speed should be taken into account at the same time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowhurst, James A, E-mail: jimcrowhurst@hotmail.com; School of Medicine, University of Queensland, St. Lucia, Brisbane, Queensland; Whitby, Mark
Radiation dose to patients undergoing invasive coronary angiography (ICA) is relatively high. Guidelines suggest that a local benchmark or diagnostic reference level (DRL) be established for these procedures. This study sought to create a DRL for ICA procedures in Queensland public hospitals. Data were collected for all cardiac catheter laboratories in Queensland public hospitals, covering diagnostic coronary angiography (CA) and single-vessel percutaneous intervention (PCI) procedures. Dose area product (P_KA), skin surface entrance dose (K_AR), fluoroscopy time (FT), and patient height and weight were collected for 3 months. The DRL was set from the 75th percentile of the P_KA. 2590 patients were included in the CA group, where the median FT was 3.5 min (inter-quartile range 2.3-6.1), median K_AR = 581 mGy (374-876), median P_KA = 3908 µGy·m² (2489-5865), and DRL = 5865 µGy·m². 947 patients were included in the PCI group, where the median FT was 11.2 min (7.7-17.4), median K_AR = 1501 mGy (928-2224), median P_KA = 8736 µGy·m² (5449-12,900), and DRL = 12,900 µGy·m². This study established a benchmark for radiation dose for diagnostic and interventional coronary angiography in Queensland public facilities.
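The DRL convention used in the study is just the 75th percentile of the dose distribution; a sketch with hypothetical P_KA values:

```python
# The DRL convention described above: the 75th percentile of the dose-area
# product distribution. Values below are hypothetical placeholders, not the
# Queensland data.
import numpy as np

p_ka = np.array([2100., 3900., 5200., 2800., 6100., 3500., 4400., 7300.])
drl = np.percentile(p_ka, 75)
print(f"local DRL (75th percentile of P_KA): {drl:.0f}")
```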
ERIC Educational Resources Information Center
Ng, Daniel; Supaporn, Potibut
A study investigated the trend of current U.S. television commercial informativeness by comparing the results with Alan Resnik and Bruce Stern's previous benchmark study conducted in 1977. A systematic random sampling procedure was used to select viewing dates and times of commercials from the three national networks. Ultimately, a total of 550…
ERIC Educational Resources Information Center
Camp, Carole Ann, Ed.
This booklet, one of six in the Living Things Science series, presents activities about cells which address basic "Benchmarks" suggested by the American Association for the Advancement of Science for the Living Environment for grades 3-5. Contents include background information, vocabulary (in English and Spanish), materials, procedures,…
Demand Forecasting: An Evaluation of DODs Accuracy Metric and Navys Procedures
2016-06-01
Keywords: inventory management improvement plan, mean absolute scaled error, lead-time adjusted squared error, forecast accuracy, benchmarking, naïve method. Front-matter acronym fragments: JASA (Journal of the American Statistical Association); LASE (Lead-time Adjusted Squared Error); LCI (Life Cycle Indicator); MA (Moving Average); MAE (Mean Absolute Error); MSE (Mean Squared Error); NAVSUP (Naval Supply Systems Command); NDAA (National Defense Authorization Act); NIIN (National Individual Identification Number).
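The record names the mean absolute scaled error (MASE), which scales forecast errors by the in-sample MAE of the one-step naïve method, so values below 1 beat the naïve benchmark. A sketch with invented series:

```python
# Mean absolute scaled error (MASE): forecast errors are scaled by the
# in-sample MAE of the one-step naive method, so MASE < 1 beats the naive
# benchmark. Data are hypothetical.
import numpy as np

def mase(actual, forecast, train):
    naive_mae = np.mean(np.abs(np.diff(train)))   # one-step naive benchmark
    return np.mean(np.abs(actual - forecast)) / naive_mae

train = np.array([112., 118., 132., 129., 121., 135., 148., 148.])
actual = np.array([136., 119., 104.])
forecast = np.array([140., 125., 110.])
print(f"MASE = {mase(actual, forecast, train):.3f}")
```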
LU Factorization with Partial Pivoting for a Multi-CPU, Multi-GPU Shared Memory System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurzak, Jakub; Luszczek, Piotr; Faverge, Mathieu
2012-03-01
LU factorization with partial pivoting is a canonical numerical procedure and the main component of the High Performance LINPACK benchmark. This article presents an implementation of the algorithm for a hybrid shared-memory system with standard CPU cores and GPU accelerators. Performance in excess of one TeraFLOPS is achieved using four AMD Magny-Cours CPUs and four NVIDIA Fermi GPUs.
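For reference, the serial textbook form of the factorization reads as below; the article's contribution is running this same algorithm efficiently across CPUs and GPUs:

```python
# Textbook LU factorization with partial pivoting (the kernel at the heart
# of the HPL benchmark), written serially for clarity.
import numpy as np

def lu_partial_pivot(A):
    """Return P, L, U with P @ A = L @ U."""
    A = A.astype(float).copy()
    n = A.shape[0]
    piv = np.arange(n)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))   # pivot row: largest |entry|
        if p != k:
            A[[k, p]] = A[[p, k]]             # swap full rows
            piv[[k, p]] = piv[[p, k]]
        A[k+1:, k] /= A[k, k]                 # multipliers stored in place
        A[k+1:, k+1:] -= np.outer(A[k+1:, k], A[k, k+1:])
    L = np.tril(A, -1) + np.eye(n)
    U = np.triu(A)
    P = np.eye(n)[piv]
    return P, L, U

A = np.random.default_rng(1).random((5, 5))
P, L, U = lu_partial_pivot(A)
print("factorization error:", np.max(np.abs(P @ A - L @ U)))
```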
An Online Tool for Global Benchmarking of Risk-Adjusted Surgical Outcomes.
Spence, Richard T; Chang, David C; Chu, Kathryn; Panieri, Eugenio; Mueller, Jessica L; Hutter, Matthew M
2017-01-01
Increasing evidence demonstrates significant variation in adverse outcomes following surgery between countries. To better quantify these variations, we hypothesized that freely available online risk calculators can be used as a tool to generate global benchmarking of risk-adjusted surgical outcomes. This is a prospective cohort study conducted at an academic teaching hospital in South Africa (GSH). Consecutive adult patients undergoing major general or vascular surgery who met the ACS-NSQIP inclusion criteria over a 3-month period were included. Data variables required by the ACS risk calculator were prospectively collected, and patients were followed for 30 days post-surgery for the occurrence of endpoints. Risk-adjusted outcomes benchmarked against the ACS-NSQIP consortium were generated by calculating observed-to-expected (O/E) ratios for ten outcome measures of interest. A total of 373 major general and vascular surgery procedures met the inclusion criteria. The GSH operative cohort differed significantly from the 2012 ACS-NSQIP database. The risk-adjusted O/E ratios were significant for any complication, O/E 1.91 (95% CI 1.57-2.31); surgical site infections, O/E 4.76 (95% CI 3.71-6.01); renal failure, O/E 3.29 (95% CI 1.50-6.24); death, O/E 3.43 (95% CI 2.19-5.11); and total length of stay (LOS), O/E 3.43 (95% CI 2.19-5.11). Freely available online risk calculators can be utilized as tools for global benchmarking of risk-adjusted surgical outcomes.
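The benchmarking arithmetic is an observed-to-expected ratio: expected events are the sum of per-patient predicted risks from the calculator. A sketch with hypothetical risks, using the common log-scale approximate CI (an assumption here; the paper does not state its CI method):

```python
# Observed-to-expected (O/E) benchmarking sketch. Expected events come from
# summing per-patient predicted risks (hypothetical below); the approximate
# 95% CI uses the usual log-SMR standard error 1/sqrt(O).
import numpy as np

predicted_risk = np.array([0.02, 0.10, 0.05, 0.30, 0.08, 0.15])  # per patient
observed_events = 2

expected = predicted_risk.sum()
oe = observed_events / expected
se_log = 1.0 / np.sqrt(observed_events)
lo, hi = oe * np.exp(-1.96 * se_log), oe * np.exp(1.96 * se_log)
print(f"O/E = {oe:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```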
Parameter regimes for a single sequential quantum repeater
NASA Astrophysics Data System (ADS)
Rozpędek, F.; Goodenough, K.; Ribeiro, J.; Kalb, N.; Caprara Vivoli, V.; Reiserer, A.; Hanson, R.; Wehner, S.; Elkouss, D.
2018-07-01
Quantum key distribution allows for the generation of a secret key between distant parties connected by a quantum channel such as optical fibre or free space. Unfortunately, the rate of generation of a secret key by direct transmission is fundamentally limited by the distance. This limit can be overcome by the implementation of so-called quantum repeaters. Here, we assess the performance of a specific but very natural setup called a single sequential repeater for quantum key distribution. We offer a fine-grained assessment of the repeater by introducing a series of benchmarks. The benchmarks, which should be surpassed to claim a working repeater, are based on finite-energy considerations, thermal noise and the losses in the setup. In order to boost the performance of the studied repeaters we introduce two methods. The first one corresponds to the concept of a cut-off, which reduces the effect of decoherence during the storage of a quantum state by introducing a maximum storage time. Secondly, we supplement the standard classical post-processing with an advantage distillation procedure. Using these methods, we find realistic parameters for which it is possible to achieve rates greater than each of the benchmarks, guiding the way towards implementing quantum repeaters.
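As a point of reference for what "surpassing a benchmark" means in this setting, one widely used repeaterless baseline is the secret-key capacity of the pure-loss channel, -log2(1-η); the paper's own benchmarks refine such baselines with finite-energy and thermal-noise considerations. A sketch:

```python
# Hedged sketch: the PLOB bound -log2(1 - eta) is a standard repeaterless
# benchmark for secret-key rate over a lossy channel. A repeater claim
# requires beating such a curve at the relevant distance.
import math

def channel_transmissivity(length_km, loss_db_per_km=0.2):
    # Standard telecom-fibre loss model; 0.2 dB/km is a typical assumption.
    return 10 ** (-loss_db_per_km * length_km / 10)

def plob_bound(eta):
    return -math.log2(1 - eta)          # secret bits per channel use

for L in (50, 100, 150, 200):
    eta = channel_transmissivity(L)
    print(f"{L:>3} km: eta = {eta:.2e}, PLOB bound = {plob_bound(eta):.2e}")
```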
Dodge-Khatami, Ali; Chancellor, William Z; Gupta, Bhawna; Seals, Samantha R; Ebeid, Makram R; Batlivala, Sarosh P; Taylor, Mary B; Salazar, Jorge D
2015-07-01
Results of surgical management of hypoplastic left heart syndrome (HLHS) and related anomalies are often compared to published benchmark data which reflect the use of a variety of surgical and hybrid protocols. We report encouraging results achieved in an emerging program, despite a learning curve at all care levels. Rather than relying on a single preferred protocol, surgical management was based on matching surgical strategy to individual patient factors. From 2010 to 2014, a total of 47 consecutive patients with HLHS or related anomalies with ductal-dependent systemic circulation underwent initial surgical palliation, including 30 Norwood stage I, 8 hybrid stage I, and 9 salvage-to-Norwood procedures. True hybrid procedures entailed bilateral pulmonary artery banding and ductal stenting. In the salvage-to-Norwood strategy, ductal stenting was withheld in favor of continued prostaglandin infusion in anticipation of a deferred Norwood procedure. Cardiac comorbidities (obstructed pulmonary venous return, poor ventricular function, and atrioventricular valve regurgitation) and noncardiac comorbidities influenced the choice of treatment strategies and were analyzed as potential risk factors for extracorporeal membrane oxygenation (ECMO) support or in-hospital mortality. Overall hospital survival was 81% (Norwood 83.3%, hybrid 88%, "salvage" 67%; P = .4942). Extracorporeal membrane oxygenation support was used for eight (17%) patients with two survivors. For cases with obstructed pulmonary venous return (n = 10, 21%), management choices favored a hybrid or salvage strategy (P = .0026). Aortic atresia (n = 22, 47%) was treated by a Norwood or salvage-to-Norwood. No cardiac, noncardiac, or genetic comorbidities were identified as independent risk factors for ECMO or discharge mortality in a multivariable analysis. Our emerging program achieved outcomes that compare favorably to published benchmark data with respect to hospital survival. These results reflect rigorous interdisciplinary teamwork and a flexible approach to surgical palliation based on matching surgical strategy to patient factors. With major associated cardiac/noncardiac comorbidity and antegrade coronary flow, a true hybrid with ductal stenting was our preferred strategy. For high-risk situations such as aortic atresia with obstructed pulmonary venous return, the salvage hybrid-bridge-to-Norwood strategy may help achieve survival albeit with increased resource utilization. © The Author(s) 2015.
Benchmarking of software tools for optical proximity correction
NASA Astrophysics Data System (ADS)
Jungmann, Angelika; Thiele, Joerg; Friedrich, Christoph M.; Pforr, Rainer; Maurer, Wilhelm
1998-06-01
The point when optical proximity correction (OPC) will become a routine procedure for every design is not far away. For such daily use, the requirements for an OPC tool go far beyond the principal functionality of OPC, which has been proven by a number of approaches and is well documented in the literature. In this paper we first discuss the requirements for a productive OPC tool. Against these requirements, a benchmarking exercise was performed with three different OPC tools available on the market (OPRX from TVT, OPTISSIMO from aiss and PROTEUS from TMA). Each of these tools uses a different approach to perform the correction (rules, simulation or model). To assess the accuracy of the correction, a test chip was fabricated which contains corrections done by each software tool. The advantages and weaknesses of the several solutions are discussed.
Galileo probe forebody thermal protection - Benchmark heating environment calculations
NASA Technical Reports Server (NTRS)
Balakrishnan, A.; Nicolet, W. E.
1981-01-01
Solutions are presented for the aerothermal heating environment of the forebody heatshield of the candidate Galileo probe. Entry into both the nominal and cool-heavy model atmospheres was considered. Solutions were obtained for the candidate heavy probe with a weight of 310 kg and for a lighter probe with a weight of 290 kg. In the flowfield analysis, a finite difference procedure was employed to obtain benchmark predictions of pressure, radiative and convective heating rates, and the steady-state wall blowing rates. Calculated heating rates for entry into the cool-heavy model atmosphere were about 60 percent higher than those predicted for entry into the nominal atmosphere. The total mass lost for entry into the cool-heavy model atmosphere was about 146 kg, and the mass lost for entry into the nominal model atmosphere was about 101 kg.
Assessment of the MPACT Resonance Data Generation Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L.
Currently, heterogeneous models are used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing CENTRM (Continuous Energy Transport Model) slowing-down calculations with MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used to estimate problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and the resulting resonance data have been generated and validated by benchmark calculations, no assessment has been performed to review whether the resonance data are properly generated by the procedure and properly utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.
Evaluation of dynamical models: dissipative synchronization and other techniques.
Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B
2006-12-01
Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.
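The synchronization idea can be illustrated on one of the named benchmarks, the Hénon map: couple a candidate model to the measured series and monitor the one-step error, which stays small only if the model captures the dynamics. A minimal sketch (not the authors' procedure):

```python
# Minimal sketch of synchronization-based validation on the Henon map: the
# candidate model is driven toward the measured series by feedback coupling;
# a small steady one-step error indicates the model reproduces the data's
# dynamics. Standard map parameters a=1.4, b=0.3 generate the "data".
import numpy as np

def henon_step(x, y, a, b):
    return 1.0 - a * x * x + y, b * x

N = 2000
xd, yd = 0.1, 0.0
data = np.empty(N)
for i in range(N):
    xd, yd = henon_step(xd, yd, 1.4, 0.3)
    data[i] = xd

def sync_error(a_model, k=0.7, transient=500):
    """Mean one-step prediction error while coupled with strength k."""
    x, y = 0.0, 0.0
    errs = []
    for i in range(N):
        x, y = henon_step(x, y, a_model, 0.3)
        if i >= transient:
            errs.append(abs(x - data[i]))  # error measured before coupling
        x += k * (data[i] - x)             # dissipative coupling to the data
    return np.mean(errs)

print("true model  :", sync_error(1.40))   # small error: dynamics match
print("biased model:", sync_error(1.30))   # larger error: dynamics differ
```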
Sino-Canadian collaborations in stem cell research: a scientometric analysis.
Ali-Khan, Sarah E; Ray, Monali; McMahon, Dominique S; Thorsteinsdóttir, Halla
2013-01-01
International collaboration (IC) is essential for the advance of stem cell research, a field characterized by marked asymmetries in knowledge and capacity between nations. China is emerging as a global leader in the stem cell field. However, knowledge on the extent and characteristics of IC in stem cell science, particularly China's collaboration with developed economies, is lacking. We provide a scientometric analysis of the China-Canada collaboration in stem cell research, placing this in the context of other leading producers in the field. We analyze stem cell research published from 2006 to 2010 from the Scopus database, using co-authored papers as a proxy for collaboration. We examine IC levels, collaboration preferences, scientific impact, the collaborating institutions in China and Canada, areas of mutual interest, and funding sources. Our analysis shows rapid global expansion of the field, with a 48% increase in papers from 2006 to 2010. China now ranks second globally after the United States. China has the lowest IC rate of the countries examined, while Canada has one of the highest. China-Canada collaboration is rising steadily, more than doubling during 2006-2010. China-Canada collaboration enhances impact compared to papers authored solely by China-based researchers. This difference remained significant even when comparing only papers published in English. While China is increasingly courted in IC by developed countries as a partner in stem cell research, it is clear that it has reached its status in the field largely through domestic publications. Nevertheless, IC enhances the impact of stem cell research in China, and in the field in general. This study establishes an objective baseline for comparison with future studies, setting the stage for in-depth exploration of the dynamics and genesis of IC in stem cell research.
Djalalinia, Shirin; Peykari, Niloofar; Eftekhari, Monir Baradaran; Sobhani, Zahra; Laali, Reza; Qorbani, Omid Ali; Akhondzadeh, Shahin; Malekzadeh, Reza; Ebadifar, Asghar
2017-01-01
Researchers, practitioners, and policymakers call for updated, valid evidence to monitor, prevent, and control alarming trends in health problems. To respond to these needs, health research spans a vast multidisciplinary scientific field. We quantified the national trends of health research outputs and their contribution to total science production. We systematically searched the Scopus database, which has the broadest coverage of the health and biomedicine disciplines among multidisciplinary citation databases, for all publications and for health-related publications from 2000 to 2014. These scientometric analyses covered trends in the main indices of scientific products, citations, and collaborative papers. We also provide information on top institutions, journals, and collaborative research centers in the fields of health research. In Iran, over a 15-year period, 237,056 scientific papers were published, of which 81,867 (34.53%) were assigned to health-related fields. Pearson's Chi-square test showed significant time trends in published papers and their citations. Tehran University of Medical Sciences was responsible for a 21.87% share of knowledge production. The second and third ranks, with 11.15% and 7.28%, belonged to Azad University and Shahid Beheshti University of Medical Sciences, respectively. Across all fields, Iran had the most collaborative papers with the USA (4.17%), the UK (2.41%), and Canada (0.02%). Health-related papers followed a similar pattern of collaboration, with 4.75%, 2.77%, and 1.93% of papers, respectively. Despite the ascending trends in health research outputs, more effort is required to promote collaborative outputs, which create synergy of resources and practical use of results. These analyses could also be useful for better planning and management of studies in these fields.
Evolution of primary care databases in UK: a scientometric analysis of research output.
Vezyridis, Paraskevas; Timmons, Stephen
2016-10-11
To identify publication and citation trends, most productive institutions and countries, top journals, most cited articles and authorship networks from articles that used and analysed data from primary care databases (CPRD, THIN, QResearch) of pseudonymised electronic health records (EHRs) in the UK. Descriptive statistics and scientometric tools were used to analyse a Scopus data set of 1891 articles. Open access software was used to extract networks from the data set (Table2Net), to visualise and analyse coauthorship networks of scholars and countries (Gephi), and to produce density maps (VOSviewer) of research topic co-occurrence and journal cocitation. Research output increased overall at a yearly rate of 18.65%. While medicine is the main field of research, studies in more specialised areas include biochemistry and pharmacology. Researchers from UK, USA and Spanish institutions have published the most papers. Most of the journals that publish this type of research, and most cited papers, come from the UK and USA. The number of authors per article varied between 3 and 6. Keyword analyses show that smoking, diabetes, cardiovascular diseases and mental illnesses, as well as medication that can treat such conditions, such as non-steroidal anti-inflammatory agents, insulin and antidepressants, constitute the main topics of research. Coauthorship network analyses show that lead scientists, directors or founders of these databases are, to various degrees, at the centre of clusters in this scientific community. There is a considerable increase of publications in primary care research from EHRs. The UK has been well placed at the centre of an expanding global scientific community, facilitating international collaborations and bringing together international expertise in medicine, biochemical and pharmaceutical research. Published by the BMJ Publishing Group Limited.
Zarei, Mozhdeh; Bagheri-Saweh, Mohammad Iraj; Rasolabadi, Masoud; Vakili, Ronak; Seidi, Jamal; Kalhor, Marya Maryam; Etaee, Farshid; Gharib, Alireza
2017-02-01
As a common type of malignancy, breast cancer is one of the major causes of death in women globally. The purpose of the current study was to analyze Iran's research performance on breast cancer in the context of national and international studies, as reflected in the publications indexed in the Scopus database during 1991-2015. In this scientometric study, data were retrieved from the Scopus citation database. The search combined the keywords "breast cancer OR breast malignancy OR breast tumor OR mammary ductal carcinoma" in the title, abstract and keyword fields with Iran in the affiliation field, using the tab specified for searching documents. The time span analyzed was 1991 to 2015 inclusive, and the results were analyzed with the analysis software of Scopus. Iran's publication output on breast cancer indexed in Scopus during 1991-2015 consists of 2,399 papers, an average of 95.96 papers per year, with an h-index of 48. These Iranian cancer research articles received 15,574 citations during 1991-2015, an average of 6.49 citations per paper. Iran ranked 27th among the top 30 nations, with a worldwide share of 0.67%. The 20 top publishing journals published 744 (31%) of the Iranian research articles on breast cancer; among them were 15 Iranian journals. The number of Iranian research papers on breast cancer, and the number of citations to them, is increasing. Although the quantity and quality of papers are increasing, given the prevalence of breast cancer in Iran and the ineffectiveness of screening programs in the early detection of cases, more effort should be made, and Iranian policy makers should consider more investment in breast cancer research.
Diabetes research in Middle East countries; a scientometrics study from 1990 to 2012
Peykari, Niloofar; Djalalinia, Shirin; Kasaeian, Amir; Naderimagham, Shohreh; Hasannia, Tahereh; Larijani, Bagher; Farzadfar, Farshad
2015-01-01
Background: The burden of diabetes is a serious warning calling for urgent action plans across the world. Knowledge production in this context could provide evidence for more efficient interventions. To that end, we quantified the trend of diabetes research outputs of Middle East countries, focusing on scientific publication numbers, citations, and international collaboration. Materials and Methods: This scientometric study was performed through systematic analysis of three international databases, ISI, PubMed, and Scopus, from 1990 to 2012. International collaboration of Middle East countries and citations were analyzed based on Scopus. Diabetes publications in Iran were specifically assessed, and frequently used terms were mapped with VOSviewer software. Results: Over the 23-year period, the number of diabetes publications and related citations in Middle East countries showed an increasing trend. The numbers of articles on diabetes in ISI, PubMed, and Scopus were, respectively, 13,994, 11,336, and 20,707. Turkey, Israel, Iran, Saudi Arabia, and Egypt occupied the top five positions. In addition, Israel, Turkey, and Iran were the leading countries in the citation analysis. The country collaborating most with Middle East countries was the USA, and within the region the most collaborative country was Saudi Arabia. Iran stood in third position in all databases and produced 12.7% of diabetes publications within the region. In Iranian diabetes research articles, the most frequently used terms were "effect," "woman," and "metabolic syndrome." Conclusion: The ascending trend of diabetes research outputs in Middle East countries is appreciated, but strategic planning to maintain this trend is encouraged, and more collaboration between researchers is needed for regional health promotion. PMID:26109972
Multiply scaled constrained nonlinear equation solvers. [for nonlinear heat conduction problems
NASA Technical Reports Server (NTRS)
Padovan, Joe; Krishna, Lala
1986-01-01
To improve the numerical stability of nonlinear equation solvers, a partitioned multiply scaled constraint scheme is developed. This scheme enables hierarchical levels of control for nonlinear equation solvers. To complement the procedure, partitioned convergence checks are established along with self-adaptive partitioning schemes. Overall, such procedures greatly enhance the numerical stability of the original solvers. To demonstrate and motivate the development of the scheme, the problem of nonlinear heat conduction is considered. In this context the main emphasis is given to successive substitution-type schemes. To verify the improved numerical characteristics associated with partitioned multiply scaled solvers, results are presented for several benchmark examples.
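A toy illustration of the partitioned idea, not the paper's scheme: split the unknowns into blocks, give each its own relaxation scale and convergence check, and damp only the partitions that converge slowly:

```python
# Illustrative sketch (assumptions: a generic contractive fixed-point map
# standing in for a partitioned nonlinear system; the damping rule is a
# simple placeholder, not the paper's multiply scaled constraint scheme).
import numpy as np

def g(x):
    # Two partitions with very different stiffness.
    y = np.empty_like(x)
    y[:5] = 0.2 * np.cos(x[:5])            # "easy" partition
    y[5:] = 0.95 * np.sin(x[5:]) + 0.5     # "stiff" partition
    return y

x = np.zeros(10)
omega = np.array([1.0, 1.0])               # per-partition relaxation scales
blocks = [slice(0, 5), slice(5, 10)]
for it in range(200):
    xn = g(x)
    change = np.array([np.max(np.abs((xn - x)[b])) for b in blocks])
    for j, b in enumerate(blocks):         # partitioned update with scaling
        x[b] += omega[j] * (xn[b] - x[b])
    omega = np.where(change > 0.4, 0.5, omega)   # damp slow partitions
    if change.max() < 1e-12:               # partitioned convergence check
        break
print("converged in", it + 1, "iterations; final scales:", omega)
```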
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1983-01-01
This paper will develop a new solution strategy which can handle elastic-plastic-creep problems in an inherently stable manner. This is achieved by introducing a new constrained time stepping algorithm which enables the solution of creep-initiated pre/postbuckling behavior where indefinite tangent stiffnesses are encountered. Due to the generality of the scheme, both monotone and cyclic loading histories can be handled. The presentation will give a thorough overview of current solution schemes and their shortcomings and of the development of constrained time stepping algorithms, as well as illustrate the results of several numerical experiments which benchmark the new procedure.
Finite element analysis of wrinkling membranes
NASA Technical Reports Server (NTRS)
Miller, R. K.; Hedgepeth, J. M.; Weingarten, V. I.; Das, P.; Kahyai, S.
1984-01-01
The development of a nonlinear numerical algorithm for the analysis of stresses and displacements in partly wrinkled flat membranes, and its implementation on the SAP VII finite-element code are described. A comparison of numerical results with exact solutions of two benchmark problems reveals excellent agreement, with good convergence of the required iterative procedure. An exact solution of a problem involving axisymmetric deformations of a partly wrinkled shallow curved membrane is also reported.
NASA Astrophysics Data System (ADS)
Brinkerhoff, D. J.; Johnson, J. V.
2013-07-01
We introduce a novel, higher order, finite element ice sheet model called VarGlaS (Variational Glacier Simulator), which is built on the finite element framework FEniCS. Contrary to standard procedure in ice sheet modelling, VarGlaS formulates ice sheet motion as the minimization of an energy functional, conferring advantages such as a consistent platform for making numerical approximations, a coherent relationship between motion and heat generation, and implicit boundary treatment. VarGlaS also solves the equations of enthalpy rather than temperature, avoiding the solution of a contact problem. Rather than include a lengthy model spin-up procedure, VarGlaS possesses an automated framework for model inversion. These capabilities are brought to bear on several benchmark problems in ice sheet modelling, as well as a 500 yr simulation of the Greenland ice sheet at high resolution. VarGlaS performs well in benchmarking experiments and, given a constant climate and a 100 yr relaxation period, predicts a mass evolution of the Greenland ice sheet that matches present-day observations of mass loss. VarGlaS predicts a thinning in the interior and thickening of the margins of the ice sheet.
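The "motion as energy minimization" formulation can be sketched in legacy FEniCS, the framework VarGlaS is built on, using a simple Poisson-type energy in place of the ice sheet functional:

```python
# A minimal legacy-FEniCS sketch of the variational idea: define a scalar
# energy functional J(u), take its Gateaux derivative, and solve dJ = 0.
# J here is a simple Poisson-type energy, not VarGlaS's ice sheet functional.
from fenics import *

mesh = UnitSquareMesh(32, 32)
V = FunctionSpace(mesh, "CG", 1)
u = Function(V)
v = TestFunction(V)
f = Constant(1.0)

J = (0.5 * dot(grad(u), grad(u)) - f * u) * dx   # energy functional
F = derivative(J, u, v)                           # first variation dJ(u; v)
bc = DirichletBC(V, Constant(0.0), "on_boundary")
solve(F == 0, u, bc)                              # minimizer satisfies dJ = 0
print("min/max of u:", u.vector().min(), u.vector().max())
```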
Security in Intelligent Transport Systems for Smart Cities: From Theory to Practice.
Javed, Muhammad Awais; Ben Hamida, Elyes; Znaidi, Wassim
2016-06-15
Connecting vehicles securely and reliably is pivotal to the implementation of next generation ITS applications of smart cities. With continuously growing security threats, vehicles could be exposed to a number of service attacks that could put their safety at stake. To address this concern, both US and European ITS standards have selected Elliptic Curve Cryptography (ECC) algorithms to secure vehicular communications. However, there is still a lack of benchmarking studies on existing security standards in real-world settings. In this paper, we first analyze the security architecture of the ETSI ITS standard. We then implement the ECC based digital signature and encryption procedures using an experimental test-bed and conduct an extensive benchmark study to assess their performance which depends on factors such as payload size, processor speed and security levels. Using network simulation models, we further evaluate the impact of standard compliant security procedures in dense and realistic smart cities scenarios. Obtained results suggest that existing security solutions directly impact the achieved quality of service (QoS) and safety awareness of vehicular applications, in terms of increased packet inter-arrival delays, packet and cryptographic losses, and reduced safety awareness in safety applications. Finally, we summarize the insights gained from the simulation results and discuss open research challenges for efficient working of security in ITS applications of smart cities.
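The ECC sign/verify micro-benchmarks described can be reproduced in miniature with the Python cryptography package (our choice here, not necessarily the study's test-bed):

```python
# Sketch of the kind of micro-benchmark described above: ECDSA (NIST P-256)
# sign/verify timings across payload sizes. Absolute numbers depend on the
# processor and security level, as the study notes.
import os, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
pub = key.public_key()

for size in (64, 256, 1024, 4096):      # payload sizes in bytes
    payload = os.urandom(size)
    t0 = time.perf_counter()
    for _ in range(100):
        sig = key.sign(payload, ec.ECDSA(hashes.SHA256()))
        pub.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
    dt = (time.perf_counter() - t0) / 100
    print(f"{size:>5} B payload: {dt * 1e3:.2f} ms per sign+verify")
```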
PROCEDURES FOR THE DERIVATION OF EQUILIBRIUM ...
This equilibrium partitioning sediment benchmark (ESB) document describes procedures to derive concentrations for 32 nonionic organic chemicals in sediment which are protective of the presence of freshwater and marine benthic organisms. The equilibrium partitioning (EqP) approach was chosen because it accounts for the varying biological availability of chemicals in different sediments and allows for the incorporation of the appropriate biological effects concentration. This provides for the derivation of benchmarks that are causally linked to the specific chemical, applicable across sediments, and appropriately protective of benthic organisms. EqP can be used to calculate ESBs for any toxicity endpoint for which there are water-only toxicity data; it is not limited to any single effect endpoint. For the purposes of this document, ESBs for 32 nonionic organic chemicals, including several low molecular weight aliphatic and aromatic compounds, pesticides, and phthalates, were derived using Final Chronic Values (FCV) from Water Quality Criteria (WQC) or Secondary Chronic Values (SCV) derived from existing toxicological data using the Great Lakes Water Quality Initiative (GLI) or narcosis theory approaches. These values are intended to be the concentration of each chemical in water that is protective of the presence of aquatic life. For nonionic organic chemicals demonstrating a narcotic mode of action, ESBs derived using the GLI approach specifically for fres
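The core EqP arithmetic is a partitioning calculation: an organic-carbon-normalized benchmark K_oc × FCV, scaled by the sediment's organic-carbon fraction for a dry-weight value. A sketch with hypothetical numbers:

```python
# The core EqP arithmetic: ESB_oc = K_oc * FCV, and the dry-weight sediment
# value scales with the organic-carbon fraction. All numbers below are
# hypothetical placeholders, not values from the document.
log_koc = 4.2          # log10 organic carbon-water partition coefficient (L/kg-oc)
fcv_ug_per_L = 1.5     # Final Chronic Value in water (ug/L)
f_oc = 0.02            # sediment organic-carbon fraction (2%)

esb_oc = (10 ** log_koc) * fcv_ug_per_L        # ug chemical / kg organic carbon
esb_dry = f_oc * esb_oc                        # ug chemical / kg dry sediment
print(f"ESB_oc  = {esb_oc:,.0f} ug/kg-oc")
print(f"ESB_dry = {esb_dry:,.0f} ug/kg dry wt")
```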
Referees Often Miss Obvious Errors in Computer and Electronic Publications
NASA Astrophysics Data System (ADS)
de Gloucester, Paul Colin
2013-05-01
Misconduct is extensive and damaging. So-called science is prevalent. Articles resulting from so-called science are often cited in other publications. This can have damaging consequences for society and for science. The present work includes a scientometric study of 350 articles (published by the Association for Computing Machinery; Elsevier; The Institute of Electrical and Electronics Engineers, Inc.; John Wiley; Springer; Taylor & Francis; and World Scientific Publishing Co.). A lower bound of 85.4% of the articles are found to be incongruous. Authors cite inherently self-contradictory articles more than valid articles. Incorrect informational cascades ruin the literature's signal-to-noise ratio even for uncomplicated cases.
Georges, Patrick
2017-01-01
This paper proposes a statistical analysis that captures similarities and differences between classical music composers with the eventual aim to understand why particular composers 'sound' different even if their 'lineages' (influences network) are similar or why they 'sound' alike if their 'lineages' are different. In order to do this we use statistical methods and measures of association or similarity (based on presence/absence of traits such as specific 'ecological' characteristics and personal musical influences) that have been developed in biosystematics, scientometrics, and bibliographic coupling. This paper also represents a first step towards a more ambitious goal of developing an evolutionary model of Western classical music.
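Presence/absence association measures of the kind described include the Jaccard index over trait sets; a sketch with invented composer traits:

```python
# Jaccard similarity over presence/absence trait sets (influences,
# 'ecological' characteristics). The composer trait sets are invented
# placeholders, not data from the paper.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

traits = {
    "Composer X": {"influence:Beethoven", "era:romantic", "country:DE"},
    "Composer Y": {"influence:Beethoven", "era:romantic", "country:AT"},
    "Composer Z": {"influence:Debussy", "era:modern", "country:FR"},
}
names = list(traits)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(f"{a} ~ {b}: {jaccard(traits[a], traits[b]):.2f}")
```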
Scientometric Study of Doctoral Theses of the Physical Research Laboratory
NASA Astrophysics Data System (ADS)
Anilkumar, N.
2010-10-01
This paper presents the results of a study of bibliographies compiled from theses submitted in the period 2001-2005. The bibliographies have been studied to find out how research carried out at PRL is being used by the doctoral students. Resources are categorized by type of resource — book, journal article, proceedings, doctoral thesis, etc., to understand the usage of content procured by the library. The period of the study, 2001-2005, has been chosen because technology is changing so fast and so are the formats of scholarly communications. For the sake of convenience, only the "e-journals period" is considered for the sample.
Orlova, A M
2016-01-01
The author presents the results of an analysis of the publications concerning toxicological (forensic) chemistry issues published in the journal "Sudebno-meditsinskaya ekspertiza" during the period from 2004 to 2013, assessed using scientometric methods. Special emphasis is laid on the publications devoted to the development and improvement of approaches to the investigation of narcotic and psychotropic drugs as well as other toxic substances. Specific features of such investigations are described.
Clinical audit of leg ulceration prevalence in a community area: a case study of good practice.
Hindley, Jenny
2014-09-01
This article presents the findings of an audit of venous leg ulceration prevalence in a community area as a framework for discussing the concept and importance of audit as a tool to inform practice and as a means to benchmark care against national or international standards. It is hoped that the audit discussed will demonstrate how such procedures can be implemented in practice by those who have not yet undertaken them, as well as highlighting the extra benefits of this type of qualitative data collection, which can often unexpectedly inform practice and influence change. Audit can be used to measure, monitor and disseminate evidence-based practice across community localities, facilitating the identification of learning needs and the instigation of clinical change, thereby prioritising patient needs by ensuring safety through the benchmarking of clinical practice.
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
NASA Technical Reports Server (NTRS)
2002-01-01
The NASA/Navy Benchmarking Exchange (NNBE) was undertaken to identify practices and procedures and to share lessons learned in the Navy's submarine and NASA's human space flight programs. The NNBE focus is on safety and mission assurance policies, processes, accountability, and control measures. This report is an interim summary of activity conducted through October 2002, and it coincides with completion of the first phase of a two-phase fact-finding effort.In August 2002, a team was formed, co-chaired by senior representatives from the NASA Office of Safety and Mission Assurance and the NAVSEA 92Q Submarine Safety and Quality Assurance Division. The team closely examined the two elements of submarine safety (SUBSAFE) certification: (1) new design/construction (initial certification) and (2) maintenance and modernization (sustaining certification), with a focus on: (1) Management and Organization, (2) Safety Requirements (technical and administrative), (3) Implementation Processes, (4) Compliance Verification Processes, and (5) Certification Processes.
Liu, Bin; Wu, Hao; Zhang, Deyuan; Wang, Xiaolong; Chou, Kuo-Chen
2017-02-21
To expedite the pace of genome/proteome analysis, we have developed a Python package called Pse-Analysis. The package can automatically complete the following five procedures: (1) sample feature extraction, (2) optimal parameter selection, (3) model training, (4) cross validation, and (5) evaluating prediction quality. All a user needs to do is input a benchmark dataset along with the query biological sequences concerned. Based on the benchmark dataset, Pse-Analysis will automatically construct an ideal predictor and then yield the predicted results for the submitted query samples. All of the aforementioned tedious jobs are done automatically by the computer. Moreover, multiprocessing was adopted to enhance computational speed by about six-fold. The Pse-Analysis Python package is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/Pse-Analysis/, and can be directly run on Windows, Linux, and Unix.
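The five-step pipeline can be sketched generically with scikit-learn (Pse-Analysis's actual API is not shown in the abstract, so this is an illustration of the workflow, not the package):

```python
# The five-step workflow listed above, sketched with scikit-learn rather than
# Pse-Analysis itself: feature extraction, parameter selection, training,
# cross validation, evaluation. Sequences and labels are toy placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

seqs = ["ACGTACGT", "TTGGAACC", "ACGTTTGG", "GGCCAATT"]   # toy sequences
labels = [1, 0, 1, 0]

# (1) feature extraction: k-mer (3-mer) counts
X = CountVectorizer(analyzer="char", ngram_range=(3, 3)).fit_transform(seqs)
# (2) optimal parameter selection + (3) model training
search = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=2).fit(X, labels)
# (4) cross validation
scores = cross_val_score(search.best_estimator_, X, labels, cv=2)
# (5) evaluating prediction quality
print("best C:", search.best_params_["C"], "CV accuracy:", scores.mean())
```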
Importance of inlet boundary conditions for numerical simulation of combustor flows
NASA Technical Reports Server (NTRS)
Sturgess, G. J.; Syed, S. A.; Mcmanus, K. R.
1983-01-01
Fluid dynamic computer codes for the mathematical simulation of problems in gas turbine engine combustion systems are required as design and diagnostic tools. To eventually achieve a performance standard with these codes of more than qualitative accuracy it is desirable to use benchmark experiments for validation studies. Typical of the fluid dynamic computer codes being developed for combustor simulations is the TEACH (Teaching Elliptic Axisymmetric Characteristics Heuristically) solution procedure. It is difficult to find suitable experiments which satisfy the present definition of benchmark quality. For the majority of the available experiments there is a lack of information concerning the boundary conditions. A standard TEACH-type numerical technique is applied to a number of test-case experiments. It is found that numerical simulations of gas turbine combustor-relevant flows can be sensitive to the plane at which the calculations start and the spatial distributions of inlet quantities for swirling flows.
Source-term development for a contaminant plume for use by multimedia risk assessment models
NASA Astrophysics Data System (ADS)
Whelan, Gene; McDonald, John P.; Taira, Randal Y.; Gnanapragasam, Emmanuel K.; Yu, Charley; Lew, Christine S.; Mills, William B.
2000-02-01
Multimedia modelers from the US Environmental Protection Agency (EPA) and US Department of Energy (DOE) are collaborating to conduct a comprehensive and quantitative benchmarking analysis of four intermedia models: MEPAS, MMSOILS, PRESTO, and RESRAD. These models represent typical analytically based tools that are used in human-risk and endangerment assessments at installations containing radioactive and hazardous contaminants. The objective is to demonstrate an approach for developing an adequate source term by simplifying an existing, real-world, 90Sr plume at DOE's Hanford installation in Richland, WA, for use in a multimedia benchmarking exercise between MEPAS, MMSOILS, PRESTO, and RESRAD. Source characteristics and a release mechanism are developed and described; also described is a typical process and procedure that an analyst would follow in developing a source term for using this class of analytical tool in a preliminary assessment.
A comparative study of upwind and MacCormack schemes for CAA benchmark problems
NASA Technical Reports Server (NTRS)
Viswanathan, K.; Sankar, L. N.
1995-01-01
In this study, upwind schemes and MacCormack schemes are evaluated as to their suitability for aeroacoustic applications. The governing equations are cast in a curvilinear coordinate system and discretized using finite volume concepts. A flux splitting procedure is used for the upwind schemes, where the signals crossing the cell faces are grouped into two categories: signals that bring information from outside into the cell, and signals that leave the cell. These signals may be computed in several ways, with the desired spatial and temporal accuracy achieved by choosing appropriate interpolating polynomials. The classical MacCormack schemes employed here are fourth order accurate in time and space. Results for categories 1, 4, and 6 of the workshop's benchmark problems are presented. Comparisons are also made with the exact solutions, where available. The main conclusions of this study are finally presented.
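The signal-grouping idea behind flux splitting can be shown in a few lines. This is a minimal sketch for 1D linear advection, not the TEACH procedure or the workshop solvers; the grid size, CFL number, and periodic boundaries are arbitrary choices.

```python
# Minimal sketch of flux splitting for 1D linear advection: signals entering
# a cell and signals leaving it get one-sided differences in the matching direction.
import numpy as np

def upwind_step(u, a, dx, dt):
    """One step of u_t + a*u_x = 0 with periodic boundaries."""
    ap, am = max(a, 0.0), min(a, 0.0)        # split the wave speed by sign
    dudx_back = (u - np.roll(u, 1)) / dx     # backward difference carries a > 0 signals
    dudx_fwd = (np.roll(u, -1) - u) / dx     # forward difference carries a < 0 signals
    return u - dt * (ap * dudx_back + am * dudx_fwd)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)          # initial pulse
dx = x[1] - x[0]
for _ in range(100):
    u = upwind_step(u, a=1.0, dx=dx, dt=0.4 * dx)   # CFL number 0.4, assumed
```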
Basin-scale estimates of oceanic primary production by remote sensing - The North Atlantic
NASA Technical Reports Server (NTRS)
Platt, Trevor; Caverhill, Carla; Sathyendranath, Shubha
1991-01-01
The monthly averaged CZCS data for 1979 are used to estimate annual primary production at ocean basin scales in the North Atlantic. The principal supplementary data used were 873 vertical profiles of chlorophyll and 248 sets of parameters derived from photosynthesis-light experiments. Four different procedures were tested for calculation of primary production. The spectral model with nonuniform biomass was considered as the benchmark for comparison against the other three models. The less complete models gave results that differed by as much as 50 percent from the benchmark. Vertically uniform models tended to underestimate primary production by about 20 percent compared to the nonuniform models. At horizontal scale, the differences between spectral and nonspectral models were negligible. The linear correlation between biomass and estimated production was poor outside the tropics, suggesting caution against the indiscriminate use of biomass as a proxy variable for primary production.
Arnell, Magnus; Astals, Sergi; Åmand, Linda; Batstone, Damien J; Jensen, Paul D; Jeppsson, Ulf
2016-07-01
Anaerobic co-digestion is an emerging practice at wastewater treatment plants (WWTPs) to improve the energy balance and integrate waste management. Modelling of co-digestion in a plant-wide WWTP model is a powerful tool to assess the impact of co-substrate selection and dose strategy on digester performance and plant-wide effects. A feasible procedure to characterise and fractionate co-substrate COD for the Benchmark Simulation Model No. 2 (BSM2) was developed. This procedure is also applicable for the Anaerobic Digestion Model No. 1 (ADM1). Long-chain fatty acid inhibition was included in the ADM1 model to allow for realistic modelling of lipid-rich co-substrates. Sensitivity analysis revealed that, apart from the biodegradable fraction of COD, protein and lipid fractions are the most important fractions for methane production and digester stability, with at least two major failure modes identified through principal component analysis (PCA). The model and procedure were tested against bio-methane potential (BMP) tests on three substrates, rich in carbohydrates, proteins and lipids respectively, with good predictive capability in all three cases. This model was then applied to a plant-wide simulation study which confirmed the positive effects of co-digestion on methane production and total operational cost. Simulations also revealed the importance of limiting the protein load to the anaerobic digester to avoid ammonia inhibition in the digester and overloading of the nitrogen removal processes in the water train. In contrast, the digester can treat relatively high loads of lipid-rich substrates without prolonged disturbances. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nouraei, S A R; Hudovsky, A; Frampton, A E; Mufti, U; White, N B; Wathen, C G; Sandhu, G S; Darzi, A
2015-06-01
Clinical coding is the translation of clinical activity into a coded language. Coded data drive hospital reimbursement and are used for audit and research, and benchmarking and outcomes management purposes. We undertook a 2-center audit of coding accuracy across surgery. Clinician-auditor multidisciplinary teams reviewed the coding of 30,127 patients and assessed accuracy at primary and secondary diagnosis and procedure levels, morbidity level, complications assignment, and financial variance. Postaudit data of a randomly selected sample of 400 cases were reaudited by an independent team. At least 1 coding change occurred in 15,402 patients (51%). There were 3911 (13%) and 3620 (12%) changes to primary diagnoses and procedures, respectively. In 5183 (17%) patients, the Health Resource Grouping changed, resulting in income variance of £3,974,544 (+6.2%). The morbidity level changed in 2116 (7%) patients (P < 0.001). The number of assigned complications rose from 2597 (8.6%) to 2979 (9.9%) (P < 0.001). Reaudit resulted in further primary diagnosis and procedure changes in 8.7% and 4.8% of patients, respectively. The coded data are a key engine for knowledge-driven health care provision. They are used, increasingly at individual surgeon level, to benchmark performance. Surgical clinical coding is prone to subjectivity, variability, and error (SVE). Having a specialty-by-specialty understanding of the nature and clinical significance of informatics variability and adopting strategies to reduce it, are necessary to allow accurate assumptions and informed decisions to be made concerning the scope and clinical applicability of administrative data in surgical outcomes improvement.
Benchmarking road safety performance: Identifying a meaningful reference (best-in-class).
Chen, Faan; Wu, Jiaorong; Chen, Xiaohong; Wang, Jianjun; Wang, Di
2016-01-01
For road safety improvement, comparing and benchmarking performance are widely advocated as the emerging and preferred approaches. However, there is currently no universally agreed upon approach for the process of road safety benchmarking, and performing the practice successfully is by no means easy. This is especially true for its two core activities: (1) developing a set of road safety performance indicators (SPIs) and combining them into a composite index; and (2) identifying a meaningful reference (best-in-class), one which has already achieved outstanding road safety practices. To this end, a scientific technique that can combine the multi-dimensional SPIs into an overall index, and subsequently identify the 'best-in-class', is urgently required. In this paper, the Entropy-embedded RSR (rank-sum ratio), an innovative, scientific and systematic methodology, is investigated with the aim of conducting the above two core tasks in one integrative and concise procedure, more specifically in a 'one-stop' way. Using a combination of results from other methods (e.g. the SUNflower approach) and other measures (e.g. Human Development Index) as a relevant reference, a given set of European countries are robustly ranked and grouped into several classes based on the composite Road Safety Index. Within each class the 'best-in-class' is then identified. By benchmarking road safety performance, the results serve to promote best practice, encourage the adoption of successful road safety strategies and measures and, more importantly, inspire the kind of political leadership needed to create a road transport system that maximizes safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
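As a rough illustration of the two core tasks, the following sketch computes entropy weights for a toy SPI matrix and combines them into a rank-sum-ratio composite; the indicator values and normalization details are invented, not taken from the paper.

```python
# Hedged sketch: entropy weighting of SPIs plus a rank-sum-ratio (RSR) composite.
import numpy as np

def entropy_weights(X):
    """X: countries x indicators, larger = better; returns entropy-based weights."""
    P = X / X.sum(axis=0)                                  # share per indicator
    P = np.clip(P, 1e-12, None)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))      # entropy of each SPI
    d = 1.0 - e                                            # diversification degree
    return d / d.sum()

def rank_sum_ratio(X, w):
    """Weighted sum of within-indicator ranks, scaled into (0, 1]."""
    ranks = X.argsort(axis=0).argsort(axis=0) + 1          # rank 1 = worst
    return (ranks * w).sum(axis=1) / len(X)

X = np.array([[0.9, 0.7, 0.8],   # rows: countries, columns: SPIs (toy values)
              [0.6, 0.9, 0.5],
              [0.8, 0.6, 0.9]])
w = entropy_weights(X)
rsr = rank_sum_ratio(X, w)
best_in_class = int(np.argmax(rsr))                        # the 'best-in-class' row
```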
Wallwiener, Markus; Brucker, Sara Y; Wallwiener, Diethelm
2012-06-01
This review summarizes the rationale for the creation of breast centres and discusses the studies conducted in Germany to obtain proof of principle for a voluntary, external benchmarking programme and proof of concept for third-party dual certification of breast centres and their mandatory quality management systems to the German Cancer Society (DKG) and German Society of Senology (DGS) Requirements of Breast Centres and ISO 9001 or similar. In addition, we report the most recent data on benchmarking and certification of breast centres in Germany. Review and summary of pertinent publications. Literature searches to identify additional relevant studies. Updates from the DKG/DGS programmes. Improvements in surrogate parameters as represented by structural and process quality indicators suggest that outcome quality is improving. The voluntary benchmarking programme has gained wide acceptance among DKG/DGS-certified breast centres. This is evidenced by early results from one of the largest studies in multidisciplinary cancer services research, initiated by the DKG and DGS to implement certified breast centres. The goal of establishing a nationwide network of certified breast centres in Germany can be considered largely achieved. Nonetheless the network still needs to be improved, and there is potential for optimization along the chain of care from mammography screening, interventional diagnosis and treatment through to follow-up. Specialization, guideline-concordant procedures as well as certification and recertification of breast centres remain essential to achieve further improvements in quality of breast cancer care and to stabilize and enhance the nationwide provision of high-quality breast cancer care.
ORBDA: An openEHR benchmark dataset for performance assessment of electronic health record servers.
Teodoro, Douglas; Sundvall, Erik; João Junior, Mario; Ruch, Patrick; Miranda Freire, Sergio
2018-01-01
The openEHR specifications are designed to support implementation of flexible and interoperable Electronic Health Record (EHR) systems. Despite the increasing number of solutions based on the openEHR specifications, it is difficult to find publicly available healthcare datasets in the openEHR format that can be used to test, compare and validate different data persistence mechanisms for openEHR. To foster research on openEHR servers, we present the openEHR Benchmark Dataset, ORBDA, a very large healthcare benchmark dataset encoded using the openEHR formalism. To construct ORBDA, we extracted and cleaned a de-identified dataset from the Brazilian National Healthcare System (SUS) containing hospitalisation and high complexity procedures information and formalised it using a set of openEHR archetypes and templates. Then, we implemented a tool to enrich the raw relational data and convert it into the openEHR model using the openEHR Java reference model library. The ORBDA dataset is available in composition, versioned composition and EHR openEHR representations in XML and JSON formats. In total, the dataset contains more than 150 million composition records. We describe the dataset and provide means to access it. Additionally, we demonstrate the usage of ORBDA for evaluating inserting throughput and query latency performances of some NoSQL database management systems. We believe that ORBDA is a valuable asset for assessing storage models for openEHR-based information systems during the software engineering process. It may also be a suitable component in future standardised benchmarking of available openEHR storage platforms.
A Bayesian Multinomial Probit Model for the Analysis of Panel Choice Data.
Fong, Duncan K H; Kim, Sunghoon; Chen, Zhe; DeSarbo, Wayne S
2016-03-01
A new Bayesian multinomial probit model is proposed for the analysis of panel choice data. Using a parameter expansion technique, we are able to devise a Markov Chain Monte Carlo algorithm to compute our Bayesian estimates efficiently. We also show that the proposed procedure enables the estimation of individual level coefficients for the single-period multinomial probit model even when the available prior information is vague. We apply our new procedure to consumer purchase data and reanalyze a well-known scanner panel dataset that reveals new substantive insights. In addition, we delineate a number of advantageous features of our proposed procedure over several benchmark models. Finally, through a simulation analysis employing a fractional factorial design, we demonstrate that the results from our proposed model are quite robust with respect to differing factors across various conditions.
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
Background The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent, benchmarking. Methods All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project “Quality Improvement in Postoperative Pain Management” (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of any results, and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. Results From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p<0.05). Among 49 anonymized hospitals, our clinic stayed on first rank in terms of lowest maximum pain and patient satisfaction over the period. Conclusion Results were already acceptable at the beginning of benchmarking a standardized pain management concept. But regular benchmarking, implementation of feedback mechanisms, and staff education made the pain management concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management. PMID:28031727
Benditz, Achim; Greimel, Felix; Auer, Patrick; Zeman, Florian; Göttermann, Antje; Grifka, Joachim; Meissner, Winfried; von Kunow, Frederik
2016-01-01
The number of total hip replacement surgeries has steadily increased over recent years. Reduction in postoperative pain increases patient satisfaction and enables better mobilization. Thus, pain management needs to be continuously improved. Problems are often caused not only by medical issues but also by organization and hospital structure. The present study shows how the quality of pain management can be increased by implementing a standardized pain concept and simple, consistent benchmarking. All patients included in the study had undergone total hip arthroplasty (THA). Outcome parameters were analyzed 24 hours after surgery by means of the questionnaires from the German-wide project "Quality Improvement in Postoperative Pain Management" (QUIPS). A pain nurse interviewed patients and continuously assessed outcome quality parameters. A multidisciplinary team of anesthetists, orthopedic surgeons, and nurses implemented a regular procedure of data analysis and internal benchmarking. The health care team was informed of all results and suggested improvements. Every staff member involved in pain management participated in educational lessons, and a special pain nurse was trained in each ward. From 2014 to 2015, 367 patients were included. The mean maximal pain score 24 hours after surgery was 4.0 (±3.0) on an 11-point numeric rating scale, and patient satisfaction was 9.0 (±1.2). Over time, the maximum pain score decreased (mean 3.0, ±2.0), whereas patient satisfaction significantly increased (mean 9.8, ±0.4; p < 0.05). Among 49 anonymized hospitals, our clinic remained in first place in terms of lowest maximum pain and patient satisfaction over the period. Results were already acceptable when benchmarking of the standardized pain management concept began, but regular benchmarking, implementation of feedback mechanisms, and staff education made the concept even more successful. Multidisciplinary teamwork and flexibility in adapting processes seem to be highly important for successful pain management.
NASA Astrophysics Data System (ADS)
Gong, K.; Fritsch, D.
2018-05-01
Nowadays, multiple-view stereo satellite imagery has become a valuable data source for digital surface model generation and 3D reconstruction. In 2016, a well-organized, publicly available multiple-view stereo benchmark for commercial satellite imagery was released by the Johns Hopkins University Applied Physics Laboratory, USA. This benchmark motivated us to explore methods that can generate accurate digital surface models from a large number of high-resolution satellite images. In this paper, we propose a pipeline for processing the benchmark data into digital surface models. As a pre-processing step, we filter all possible image pairs according to incidence angle and capture date. With the selected image pairs, the relative bias-compensated model is applied for relative orientation. After epipolar image pair generation, dense image matching and triangulation, the 3D point clouds and DSMs are acquired. The DSMs are aligned to a quasi-ground plane by the relative bias-compensated model. We apply a median filter to generate the fused point cloud and DSM. By comparison with the reference LiDAR DSM, the accuracy, completeness and robustness are evaluated. The results show that the point cloud reconstructs surfaces with small structures and that the fused DSM generated by our pipeline is accurate and robust.
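The fusion step is straightforward to sketch. Assuming the pairwise DSMs are stacked in a NumPy array already aligned to a common quasi-ground plane (the data layout is my assumption, not the paper's), a per-pixel nan-median gives the fused surface, and a simple completeness measure mimics the evaluation against a reference LiDAR DSM.

```python
# Sketch of median fusion of pairwise DSMs; arrays below are toy placeholders.
import numpy as np

def fuse_dsms(dsm_stack):
    """dsm_stack: (n_pairs, H, W); NaN where a pair produced no height."""
    return np.nanmedian(dsm_stack, axis=0)    # per-pixel median is outlier-robust

def completeness(fused, reference, tol=1.0):
    """Fraction of commonly valid pixels within `tol` meters of the reference."""
    valid = ~np.isnan(fused) & ~np.isnan(reference)
    return float(np.mean(np.abs(fused[valid] - reference[valid]) <= tol))

stack = 100.0 + np.random.rand(5, 64, 64)     # toy stack of 5 pairwise DSMs
stack[0, :10, :10] = np.nan                   # simulate a matching hole in one pair
fused = fuse_dsms(stack)
reference = 100.5 * np.ones((64, 64))         # stand-in for the LiDAR reference
print(completeness(fused, reference))
```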
"Best practice" in inflammatory bowel disease: an international survey and audit.
Van Der Eijk, Ingrid; Verheggen, Frank W.; Russel, Maurice G.; Buckley, Martin; Katsanos, Kostas; Munkholm, Pia; Engdahl, Ingemar; Politi, Patrizia; Odes, Selwyn; Fossen, Jan; Stockbrügger, Reinhold W.
2004-04-01
Background: An observational study was conducted at eight university and four district hospitals in eight countries collaborating in clinical and epidemiological research in inflammatory bowel disease (IBD) to compare European health care facilities and to define current "best practice" with regard to IBD. Methods: The approach used in this multi-national survey was unique. Existing quality norms, developed for total hospital care by a specialized organization, were restricted to IBD-specific care and adapted to the frame of reference of the study group. In each center, these norms were surveyed by means of questionnaires and professional audits in all participating centers. The collected data were reported to the center, compared to data from other hospitals, and used to benchmark. Group consensus was reached with regard to defining current "best practice". Results: The observations in each center involved patient-oriented processes, technical and patient safety, and quality of the medical standard. Several findings could be directly implemented to improve IBD care in another hospital (benchmarks). These included a confidential relationship between health care worker(s) and patients, and availability of patient data. Conclusions: The observed benchmarks, in combination with other subjectively chosen "positive" procedures, have been defined as current "best practice in IBD", representing practical guidelines towards better quality of care in IBD.
Teaching children the structure of science
NASA Astrophysics Data System (ADS)
Börner, Katy; Palmer, Fileve; Davis, Julie M.; Hardy, Elisha; Uzzo, Stephen M.; Hook, Bryan J.
2009-01-01
Maps of the world are common in classroom settings. They are used to teach the juxtaposition of natural and political functions, mineral resources, political, cultural and geographical boundaries; occurrences of processes such as tectonic drift; spreading of epidemics; and weather forecasts, among others. Recent work in scientometrics aims to create a map of science encompassing our collective scholarly knowledge. Maps of science can be used to see disciplinary boundaries; the origin of ideas, expertise, techniques, or tools; the birth, evolution, merging, splitting, and death of scientific disciplines; the spreading of ideas and technology; emerging research frontiers and bursts of activity; etc. Just like the first maps of our planet, the first maps of science are neither perfect nor correct. Today's science maps are predominantly generated based on English scholarly data: Techniques and procedures to achieve local and global accuracy of these maps are still being refined, and a visual language to communicate something as abstract and complex as science is still being developed. Yet, the maps are successfully used by institutions or individuals who can afford them to guide science policy decision making, economic decision making, or as visual interfaces to digital libraries. This paper presents the process and results of creating hands-on science maps for kids that teach children ages 4-14 about the structure of scientific disciplines. The maps were tested in both formal and informal science education environments. The results show that children can easily transfer their (world) map and concept map reading skills to utilize maps of science in interesting ways.
Interface Technology for Geometrically Nonlinear Analysis of Multiple Connected Subdomains
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
1997-01-01
Interface technology for geometrically nonlinear analysis is presented and demonstrated. This technology is based on an interface element which makes use of a hybrid variational formulation to provide for compatibility between independently modeled connected subdomains. The interface element developed herein extends previous work to include geometric nonlinearity and to use standard linear and nonlinear solution procedures. Several benchmark nonlinear applications of the interface technology are presented and aspects of the implementation are discussed.
Avoiding overstating the strength of forensic evidence: Shrunk likelihood ratios/Bayes factors.
Morrison, Geoffrey Stewart; Poh, Norman
2018-05-01
When strength of forensic evidence is quantified using sample data and statistical models, a concern may be raised as to whether the output of a model overestimates the strength of evidence. This is particularly the case when the amount of sample data is small, and hence sampling variability is high. This concern is related to concern about precision. This paper describes, explores, and tests three procedures which shrink the value of the likelihood ratio or Bayes factor toward the neutral value of one. The procedures are: (1) a Bayesian procedure with uninformative priors, (2) use of empirical lower and upper bounds (ELUB), and (3) a novel form of regularized logistic regression. As a benchmark, they are compared with linear discriminant analysis, and in some instances with non-regularized logistic regression. The behaviours of the procedures are explored using Monte Carlo simulated data, and tested on real data from comparisons of voice recordings, face images, and glass fragments. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
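Two of the shrinkage ideas can be sketched compactly: regularized logistic-regression calibration, where stronger regularization pulls the log-likelihood-ratio toward zero, and ELUB-style clamping. This is an illustrative reconstruction, not the authors' code; the simulated scores and bound values are arbitrary.

```python
# Sketch: L2-regularized score-to-LLR calibration plus ELUB clamping.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_llr(scores, same_source, C=1.0):
    """Smaller C = stronger L2 shrinkage, pulling LLRs toward 0 (LR toward 1)."""
    model = LogisticRegression(C=C)              # L2 penalty is sklearn's default
    model.fit(scores.reshape(-1, 1), same_source)
    def llr(s):
        # With equal priors (balanced data), posterior log-odds equal the LLR.
        return model.decision_function(np.atleast_1d(np.asarray(s, float)).reshape(-1, 1))
    return llr

def elub_clamp(lr, lower, upper):
    """Empirical lower/upper bounds keep LRs within what the sample supports."""
    return np.clip(lr, lower, upper)

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(1, 1, 50), rng.normal(-1, 1, 50)])
labels = np.array([1] * 50 + [0] * 50)
llr = calibrate_llr(scores, labels, C=0.1)       # heavier shrinkage than C=1.0
lrs = elub_clamp(np.exp(llr(scores)), 0.05, 20.0)
```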
Representing and comparing protein structures as paths in three-dimensional space
Zhi, Degui; Krishna, S Sri; Cao, Haibo; Pevzner, Pavel; Godzik, Adam
2006-01-01
Background Most existing formulations of protein structure comparison are based on detailed atomic level descriptions of protein structures and bypass potential insights that arise from a higher-level abstraction. Results We propose a structure comparison approach based on a simplified representation of proteins that describes a protein's three-dimensional path by local curvature along the generalized backbone of the polypeptide. We have implemented a dynamic programming procedure that aligns curvatures of proteins by optimizing a defined sum turning angle deviation measure. Conclusion Although our procedure does not directly optimize global structural similarity as measured by RMSD, our benchmarking results indicate that it recovers surprisingly well the structural similarity defined by structure classification databases and traditional structure alignment programs. In addition, our program can recognize similarities between structures with extensive conformation changes that are beyond the ability of traditional structure alignment programs. We demonstrate the application of the procedure in several structure-comparison contexts. An implementation of our procedure, CURVE, is available as a public webserver. PMID:17052359
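The core alignment step can be illustrated with a small dynamic program over turning angles. This sketch is not the CURVE implementation; the gap penalty and deviation measure are simplified stand-ins.

```python
# Toy dynamic-programming alignment of two turning-angle sequences.
import numpy as np

def align_curvatures(a, b, gap=0.5):
    """a, b: 1D arrays of local turning angles (radians); returns alignment cost."""
    n, m = len(a), len(b)
    D = np.empty((n + 1, m + 1))
    D[0, :] = np.arange(m + 1) * gap          # leading gaps
    D[:, 0] = np.arange(n + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + abs(a[i - 1] - b[j - 1])   # angle deviation
            D[i, j] = min(match, D[i - 1, j] + gap, D[i, j - 1] + gap)
    return D[n, m]

cost = align_curvatures(np.array([0.1, 0.5, -0.2]),
                        np.array([0.1, 0.4, -0.3, 0.0]))
print(cost)
```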
How many records should be used in ASCE/SEI-7 ground motion scaling procedure?
Reyes, Juan C.; Kalkan, Erol
2012-01-01
U.S. national building codes refer to the ASCE/SEI-7 provisions for selecting and scaling ground motions for use in nonlinear response history analysis of structures. Because the limiting values for the number of records in the ASCE/SEI-7 are based on engineering experience, this study examines the required number of records statistically, such that the scaled records provide accurate, efficient, and consistent estimates of "true" structural responses. Based on elastic–perfectly plastic and bilinear single-degree-of-freedom systems, the ASCE/SEI-7 scaling procedure is applied to 480 sets of ground motions; the number of records in these sets varies from three to ten. As compared to benchmark responses, it is demonstrated that the ASCE/SEI-7 scaling procedure is conservative if fewer than seven ground motions are employed. Utilizing seven or more randomly selected records provides a more accurate estimate of the responses. Selecting records based on their spectral shape and design spectral acceleration increases the accuracy and efficiency of the procedure.
NASA Astrophysics Data System (ADS)
Eto, S.; Nagai, S.; Tadokoro, K.
2011-12-01
Our group has developed a system for observing seafloor crustal deformation with a combination of acoustic ranging and kinematic GPS positioning techniques. One effective way to reduce the estimation error of submarine benchmark positions in our system is to model the variation of ocean acoustic velocity. Because our simple acquisition procedure for acoustic ranging data makes it difficult to estimate a 3-dimensional acoustic velocity structure including its temporal change, we estimated various 1-dimensional velocity-depth models under some constraints. We then applied the joint hypocenter determination method from seismology [Kissling et al., 1994] to the acoustic ranging data. We assume two constraints in the inversion procedure: (1) the acoustic velocity in the deeper part is fixed, because it is usually stable in both space and time; and (2) each inverted velocity model must decrease with depth. We found two remarkable spatio-temporal changes of acoustic velocity: (1) variations of travel-time residuals at the same points within a short time, and (2) larger differences between residuals at neighboring points for travel times from different benchmarks. The first result cannot be explained only by changes in atmospheric conditions, including heating by sunlight. To examine the residual variations mentioned as the second result, we performed forward modeling of acoustic ranging data with velocity models to which velocity anomalies were added. We calculated travel times by a pseudo-bending ray tracing method [Um and Thurber, 1987] to examine the effects of a velocity anomaly on the travel-time differences. Comparison between the observed residuals and the travel-time differences from forward modeling shows that velocity-anomaly bodies at shallower depths can produce these anomalous residuals, which may indicate moving water bodies. We need to apply an acoustic velocity structure model including velocity anomalies in acoustic ranging data analysis and/or to develop a new system with a large number of sea-surface stations to detect them, which may reduce the error of seafloor benchmark positions.
Approximate methods in gamma-ray skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faw, R.E.; Roseberry, M.L.; Shultis, J.K.
1985-11-01
Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.
A novel approach to identifying regulatory motifs in distantly related genomes
Van Hellemont, Ruth; Monsieurs, Pieter; Thijs, Gert; De Moor, Bart; Van de Peer, Yves; Marchal, Kathleen
2005-01-01
Although proven successful in the identification of regulatory motifs, phylogenetic footprinting methods still show some shortcomings. To assess these difficulties, most apparent when applying phylogenetic footprinting to distantly related organisms, we developed a two-step procedure that combines the advantages of sequence alignment and motif detection approaches. The results on well-studied benchmark datasets indicate that the presented method outperforms other methods when the sequences become either too long or too heterogeneous in size. PMID:16420672
MCNP modelling of scintillation-detector gamma-ray spectra from natural radionuclides.
Hendriks, P H G M; Maucec, M; de Meijer, R J
2002-09-01
Gamma-ray spectra of natural radionuclides are simulated for a BGO detector in a borehole geometry using the Monte Carlo code MCNP. All gamma-ray emissions of the decay of 40K and the series of 232Th and 238U are used to describe the source. A procedure is proposed which excludes the time-consuming electron tracking in less relevant areas of the geometry. The simulated gamma-ray spectra are benchmarked against laboratory data.
Time-Dependent Simulations of Incompressible Flow in a Turbopump Using Overset Grid Approach
NASA Technical Reports Server (NTRS)
Kiris, Cetin; Kwak, Dochan
2001-01-01
This viewgraph presentation provides information on mathematical modelling of the SSME (space shuttle main engine). The unsteady SSME-rig1 start-up procedure from the pump at rest has been initiated by using 34.3 million grid points. The computational model for the SSME-rig1 has been completed. Moving boundary capability is obtained by using DCF module in OVERFLOW-D. MPI (Message Passing Interface)/OpenMP hybrid parallel code has been benchmarked.
A new calibration code for the JET polarimeter.
Gelfusa, M; Murari, A; Gaudio, P; Boboc, A; Brombin, M; Orsitto, F P; Giovannozzi, E
2010-05-01
An equivalent model of the JET polarimeter is presented, which overcomes the drawbacks of previous versions of the fitting procedures used to provide calibrated results. First, the signal processing electronics were simulated to confirm that they still work within the original specifications. Then the effective optical path of both the vertical and lateral chords was implemented to produce the calibration curves. This principled approach to the model yields a single procedure that can be applied to any manual calibration and remains valid until the following one. The optical model of the chords is then applied to derive the plasma measurements. The results are in good agreement with the estimates of the most advanced full-wave propagation code available and have been benchmarked against other diagnostics. The devised procedure has proved to work properly also for the most recent campaigns and high-current experiments.
Validating the applicability of the GUM procedure
NASA Astrophysics Data System (ADS)
Cox, Maurice G.; Harris, Peter M.
2014-08-01
This paper is directed at practitioners seeking a degree of assurance in the quality of the results of an uncertainty evaluation when using the procedure in the Guide to the Expression of Uncertainty in Measurement (GUM) (JCGM 100:2008). Such assurance is required in adhering to general standards such as International Standard ISO/IEC 17025 or other sector-specific standards. We investigate the extent to which such assurance can be given. For many practical cases, a measurement result incorporating an evaluated uncertainty that is correct to one significant decimal digit would be acceptable. Any quantification of the numerical precision of an uncertainty statement is naturally relative to the adequacy of the measurement model and the knowledge used of the quantities in that model. For general univariate and multivariate measurement models, we emphasize the use of a Monte Carlo method, as recommended in GUM Supplements 1 and 2. One use of this method is as a benchmark in terms of which measurement results provided by the GUM can be assessed in any particular instance. We mainly consider measurement models that are linear in the input quantities, or have been linearized and the linearization process is deemed to be adequate. When the probability distributions for those quantities are independent, we indicate the use of other approaches such as convolution methods based on the fast Fourier transform and, particularly, Chebyshev polynomials as benchmarks.
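The benchmarking idea, comparing the GUM law of propagation of uncertainty against a Monte Carlo reference, is easy to demonstrate on a toy model; the product model and input uncertainties below are invented for illustration.

```python
# Toy comparison: GUM linearized propagation vs. a Monte Carlo benchmark.
import numpy as np

# Measurement model Y = X1 * X2 with independent inputs (toy numbers).
x1, u1 = 10.0, 0.2          # estimate and standard uncertainty of X1
x2, u2 = 5.0, 0.1

# GUM law of propagation (linearized): u(y)^2 = (x2*u1)^2 + (x1*u2)^2
y_gum = x1 * x2
u_gum = ((x2 * u1) ** 2 + (x1 * u2) ** 2) ** 0.5

# Monte Carlo benchmark (GUM Supplement 1 style): propagate full distributions.
rng = np.random.default_rng(1)
N = 1_000_000
y_mc = rng.normal(x1, u1, N) * rng.normal(x2, u2, N)
print(f"GUM: {y_gum:.3f} +/- {u_gum:.3f}")
print(f"MC : {y_mc.mean():.3f} +/- {y_mc.std(ddof=1):.3f}")
# Agreement of u(y) to one significant digit is the kind of assurance discussed above.
```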
Security in Intelligent Transport Systems for Smart Cities: From Theory to Practice
Javed, Muhammad Awais; Ben Hamida, Elyes; Znaidi, Wassim
2016-01-01
Connecting vehicles securely and reliably is pivotal to the implementation of next generation ITS applications of smart cities. With continuously growing security threats, vehicles could be exposed to a number of service attacks that could put their safety at stake. To address this concern, both US and European ITS standards have selected Elliptic Curve Cryptography (ECC) algorithms to secure vehicular communications. However, there is still a lack of benchmarking studies on existing security standards in real-world settings. In this paper, we first analyze the security architecture of the ETSI ITS standard. We then implement the ECC based digital signature and encryption procedures using an experimental test-bed and conduct an extensive benchmark study to assess their performance which depends on factors such as payload size, processor speed and security levels. Using network simulation models, we further evaluate the impact of standard compliant security procedures in dense and realistic smart cities scenarios. Obtained results suggest that existing security solutions directly impact the achieved quality of service (QoS) and safety awareness of vehicular applications, in terms of increased packet inter-arrival delays, packet and cryptographic losses, and reduced safety awareness in safety applications. Finally, we summarize the insights gained from the simulation results and discuss open research challenges for efficient working of security in ITS applications of smart cities. PMID:27314358
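A micro-benchmark of the kind the study performs can be sketched with the Python `cryptography` package, timing ECDSA sign and verify over NIST P-256 (a curve used in ITS security profiles); the payload size and iteration count are arbitrary assumptions.

```python
# Micro-benchmark sketch: ECDSA (P-256, SHA-256) sign/verify timing.
import time

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())
payload = b"x" * 300                    # roughly CAM/BSM-sized message, assumed
N = 1000

t0 = time.perf_counter()
sigs = [key.sign(payload, ec.ECDSA(hashes.SHA256())) for _ in range(N)]
t_sign = (time.perf_counter() - t0) / N

pub = key.public_key()
t0 = time.perf_counter()
for s in sigs:
    pub.verify(s, payload, ec.ECDSA(hashes.SHA256()))   # raises if invalid
t_verify = (time.perf_counter() - t0) / N
print(f"sign: {t_sign * 1e3:.2f} ms, verify: {t_verify * 1e3:.2f} ms")
```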
A hybrid heuristic for the multiple choice multidimensional knapsack problem
NASA Astrophysics Data System (ADS)
Mansi, Raïd; Alves, Cláudio; Valério de Carvalho, J. M.; Hanafi, Saïd
2013-08-01
In this article, a new solution approach for the multiple choice multidimensional knapsack problem is described. The problem is a variant of the multidimensional knapsack problem where items are divided into classes, and exactly one item per class has to be chosen. Both problems are NP-hard. However, the multiple choice multidimensional knapsack problem appears to be more difficult to solve in part because of its choice constraints. Many real applications lead to very large scale multiple choice multidimensional knapsack problems that can hardly be addressed using exact algorithms. A new hybrid heuristic is proposed that embeds several new procedures for this problem. The approach is based on the resolution of linear programming relaxations of the problem and reduced problems that are obtained by fixing some variables of the problem. The solutions of these problems are used to update the global lower and upper bounds for the optimal solution value. A new strategy for defining the reduced problems is explored, together with a new family of cuts and a reformulation procedure that is used at each iteration to improve the performance of the heuristic. An extensive set of computational experiments is reported for benchmark instances from the literature and for a large set of hard instances generated randomly. The results show that the approach outperforms other state-of-the-art methods described so far, providing the best known solution for a significant number of benchmark instances.
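To make the problem structure concrete, here is a toy greedy-and-repair heuristic for the multiple choice multidimensional knapsack problem. It is not the paper's algorithm, but it shows the one-item-per-class constraint and the multidimensional capacity that make the problem hard.

```python
# Toy heuristic: start from the best-value item in each class, then swap
# toward strictly lighter items until the capacity vector is respected
# or no repair move remains.
import numpy as np

def mcmkp_heuristic(values, weights, capacity):
    """values[c][i]: value of item i in class c; weights[c][i]: resource vector."""
    choice = [int(np.argmax(v)) for v in values]               # one item per class
    load = lambda: np.sum([weights[c][i] for c, i in enumerate(choice)], axis=0)
    improved = True
    while np.any(load() > capacity) and improved:
        improved = False
        for c, v in enumerate(values):
            cur = choice[c]
            for i in range(len(v)):
                if (i != cur and np.all(weights[c][i] <= weights[c][cur])
                        and np.any(weights[c][i] < weights[c][cur])):
                    choice[c], improved = i, True              # strictly lighter item
                    break
    feasible = bool(np.all(load() <= capacity))
    return choice, sum(values[c][i] for c, i in enumerate(choice)), feasible

values = [[10, 6], [8, 5], [9, 4]]
weights = [[np.array([5, 4]), np.array([2, 2])],
           [np.array([4, 5]), np.array([2, 1])],
           [np.array([5, 5]), np.array([1, 2])]]
print(mcmkp_heuristic(values, weights, capacity=np.array([8, 7])))
```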
A CWT-based methodology for piston slap experimental characterization
NASA Astrophysics Data System (ADS)
Buzzoni, M.; Mucchi, E.; Dalpiaz, G.
2017-03-01
Noise and vibration control in mechanical systems has become ever more significant for the automotive industry, where the comfort of the passenger compartment represents a challenging issue for car manufacturers. The reduction of piston slap noise is pivotal for a good design of IC engines. In this scenario, a methodology has been developed for the vibro-acoustic assessment of IC diesel engines by means of design changes in piston to cylinder bore clearance. Vibration signals have been analysed by means of advanced signal processing techniques taking advantage of cyclostationarity theory. The procedure starts from the analysis of the continuous wavelet transform (CWT) in order to identify a frequency band representative of the piston slap phenomenon. This frequency band is then used as input for further signal processing, which involves envelope analysis of the second-order cyclostationary component of the signal. The second-order harmonic component is used as the benchmark parameter for piston slap noise. An experimental vibrational benchmarking procedure is proposed and verified at different operating conditions in real IC engines actually fitted to cars. This study clearly underlines the crucial role of transducer positioning when differences among real piston-to-cylinder clearances are considered. In particular, the proposed methodology is effective for the sensors placed on the outer cylinder wall in all the tested conditions.
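The signal chain described above can be sketched with SciPy, under assumed sampling and band choices: a CWT energy map suggests the band, which is then band-pass filtered and envelope-analyzed; the toy signal and the 2.5-3.5 kHz band are illustrative only.

```python
# Sketch of the CWT-plus-envelope chain (assumed sample rate, toy signal).
import numpy as np
from scipy import signal

fs = 20_000                                        # sample rate in Hz, assumed
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 30 * t) * np.sin(2 * np.pi * 3000 * t)  # toy "slap" bursts

widths = np.arange(1, 64)
cwt_map = signal.cwt(x[:2048], signal.ricker, widths)          # time-scale energy map
peak_scale = widths[np.argmax((cwt_map ** 2).sum(axis=1))]     # guides the band choice

sos = signal.butter(4, [2500, 3500], btype="bandpass", fs=fs, output="sos")
xb = signal.sosfiltfilt(sos, x)                    # isolate the identified band
env = np.abs(signal.hilbert(xb))                   # envelope (2nd-order analysis)
spec = np.abs(np.fft.rfft(env - env.mean()))       # envelope spectrum
freqs = np.fft.rfftfreq(len(env), 1 / fs)          # peak near 60 Hz (2x the 30 Hz modulation)
```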
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Staffing benchmarks for histology laboratories.
Buesa, René J
2010-06-01
This article summarizes annual workloads for staff positions and work flow productivity (WFP) values from 247 human pathology, 31 veterinary, and 35 forensic histology laboratories (histolabs). There are single summaries for veterinary and forensic histolabs, but the data from human pathology are divided into 2 groups because of statistically significant differences between those from Spain and 6 Hispano American countries (SpHA) and the rest from the United States and 17 other countries. The differences reflect the way the work is organized, but the histotechnicians and histotechnologists (histotechs) from SpHA have the same task productivity levels as those from any other country (Buesa RJ. Productivity standards for histology laboratories. [YADPA 50,552]). The information is also segregated by groups of histolabs with increasing workloads; this aspect also showed statistical differences. The information from human pathology histolabs other than those from SpHA were used to calculate staffing annual benchmarks for pathologists (from 3700 to 6500 cases depending on the histolab annual workload), pathology assistants (20,000 cases), staff histotechs (9900 blocks), cutting histotechs (15,000 blocks), histotechs doing special procedures (9500 slides if done manually or 15,000 slides with autostainers), dieners (100 autopsies), laboratory aides and transcriptionists (15,000 cases each), and secretaries (20,000 cases). There are also recommendations about workload limits for supervisory staff (lead techs and supervisors) and when neither is required. Each benchmark was related with the productivity of the different tasks they include (Buesa RJ. Productivity standards for histology laboratories. [YADPA 50,552]) to calculate the hours per year required to complete them. The relationship between workload and benchmarks allows the director of pathology to determine the staff needed for the efficient operation of the histolab.
Siregar, S; Pouw, M E; Moons, K G M; Versteegh, M I M; Bots, M L; van der Graaf, Y; Kalkman, C J; van Herwerden, L A; Groenwold, R H H
2014-01-01
Objective To compare the accuracy of data from hospital administration databases and a national clinical cardiac surgery database and to compare the performance of the Dutch hospital standardised mortality ratio (HSMR) method and the logistic European System for Cardiac Operative Risk Evaluation, for the purpose of benchmarking of mortality across hospitals. Methods Information on all patients undergoing cardiac surgery between 1 January 2007 and 31 December 2010 in 10 centres was extracted from The Netherlands Association for Cardio-Thoracic Surgery database and the Hospital Discharge Registry. The number of cardiac surgery interventions was compared between both databases. The European System for Cardiac Operative Risk Evaluation and hospital standardised mortality ratio models were updated in the study population and compared using the C-statistic, calibration plots and the Brier-score. Results The number of cardiac surgery interventions performed could not be assessed using the administrative database as the intervention code was incorrect in 1.4–26.3%, depending on the type of intervention. In 7.3% no intervention code was registered. The updated administrative model was inferior to the updated clinical model with respect to discrimination (c-statistic of 0.77 vs 0.85, p<0.001) and calibration (Brier Score of 2.8% vs 2.6%, p<0.001, maximum score 3.0%). Two average performing hospitals according to the clinical model became outliers when benchmarking was performed using the administrative model. Conclusions In cardiac surgery, administrative data are less suitable than clinical data for the purpose of benchmarking. The use of either administrative or clinical risk-adjustment models can affect the outlier status of hospitals. Risk-adjustment models including procedure-specific clinical risk factors are recommended. PMID:24334377
The financial implications of endovascular aneurysm repair in the cost containment era.
Stone, David H; Horvath, Alexander J; Goodney, Philip P; Rzucidlo, Eva M; Nolan, Brian W; Walsh, Daniel B; Zwolak, Robert M; Powell, Richard J
2014-02-01
Endovascular aneurysm repair (EVAR) is associated with significant direct device costs. Such costs place EVAR at odds with efforts to constrain healthcare expenditures. This study examines the procedure-associated costs and operating margins associated with EVAR at a tertiary care academic medical center. All infrarenal EVARs performed from April 2011 to March 2012 were identified (n = 127). Among this cohort, 49 patients met standard commercial instruction for use guidelines, were treated using a single manufacturer device, and billed to Medicare diagnosis-related group (DRG) 238. Of these 49 patients, net technical operating margins (technical revenue minus technical cost) were calculated in conjunction with the hospital finance department. EVAR implant costs were determined for each procedure. DRG 238-associated costs and length of stay were benchmarked against other academic medical centers using University Health System Consortium 2012 data. Among the studied EVAR cohort (age 75, 82% male, mean length of stay, 1.7 days), mean technical costs totaled $31,672. Graft implants accounted for 52% of the allocated technical costs. Institutional overhead was 17% ($5495) of total technical costs. Net mean total technical EVAR-associated operating margins were -$4015 per procedure. Our institutional costs and length of stay, when benchmarked against comparable centers, remained in the lowest quartile nationally using University Health System Consortium costs for DRG 238. Stent graft price did not correlate with total EVAR market share. EVAR is currently associated with significant negative operating margins among Medicare beneficiaries. Currently, device costs account for over 50% of EVAR-associated technical costs and did not impact EVAR market share, reflecting an unawareness of cost differential among surgeons. These data indicate that EVAR must undergo dramatic care delivery redesign for this practice to remain sustainable. Copyright © 2014 Society for Vascular Surgery. Published by Mosby, Inc. All rights reserved.
State Variation in Medicaid Reimbursements for Orthopaedic Surgery.
Lalezari, Ramin M; Pozen, Alexis; Dy, Christopher J
2018-02-07
Medicaid reimbursements are determined by each state and are subject to variability. We sought to quantify this variation for commonly performed inpatient orthopaedic procedures. The 10 most commonly performed inpatient orthopaedic procedures, as ranked by the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample, were identified for study. Medicaid reimbursement amounts for those procedures were benchmarked to state Medicare reimbursement amounts in 3 ways: (1) ratio, (2) dollar difference, and (3) dollar difference divided by the relative value unit (RVU) amount. Variability was quantified by determining the range and coefficient of variation for those reimbursement amounts. The range of variability of Medicaid reimbursements among states exceeded $1,500 for all 10 procedures. The coefficients of variation ranged from 0.32 (hip hemiarthroplasty) to 0.57 (posterior or posterolateral lumbar interbody arthrodesis) (a higher coefficient indicates greater variability), compared with 0.07 for Medicare reimbursements for all 10 procedures. Adjusted as a dollar difference between Medicaid and Medicare per RVU, the median values ranged from -$8/RVU (total knee arthroplasty) to -$17/RVU (open reduction and internal fixation of the femur). Variability of Medicaid reimbursement for inpatient orthopaedic procedures among states is substantial. This variation becomes especially remarkable given recent policy shifts toward focusing reimbursements on value.
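For concreteness, the three benchmarking measures and the coefficient of variation reduce to a few lines of arithmetic; all dollar figures below are invented.

```python
# Worked toy example of the three benchmarks and the coefficient of variation.
import numpy as np

medicaid = np.array([900.0, 1400.0, 2100.0, 650.0])   # one (invented) value per state
medicare = np.array([1500.0, 1500.0, 1550.0, 1450.0])
rvu = 20.0                                            # total RVUs for the procedure, assumed

ratio = medicaid / medicare                           # benchmark (1): ratio
diff = medicaid - medicare                            # benchmark (2): dollar difference
diff_per_rvu = diff / rvu                             # benchmark (3): difference per RVU

cv = medicaid.std(ddof=1) / medicaid.mean()           # coefficient of variation
print(ratio.round(2), diff_per_rvu.round(1), round(float(cv), 2))
```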
Ambulatory Surgery Centers and Prices in Hospital Outpatient Departments.
Carey, Kathleen
2017-04-01
Specialty providers claim to offer a new competitive benchmark for efficient delivery of health care. This article explores this view by examining evidence for price competition between ambulatory surgery centers (ASCs) and hospital outpatient departments (HOPDs). I studied the impact of ASC market presence on actual prices paid to HOPDs during 2007-2010 for four common surgical procedures that were performed in both provider types. For the procedures examined, HOPDs received payments from commercial insurers in the range of 3.25% to 5.15% lower for each additional ASC per 100,000 persons in a market. HOPDs may have less negotiating leverage with commercial insurers on price in markets with high ASC market penetration, resulting in relatively lower prices.
Coupling of Multiple Coulomb Scattering with Energy Loss and Straggling in HZETRN
NASA Technical Reports Server (NTRS)
Mertens, Christopher J.; Wilson, John W.; Walker, Steven A.; Tweed, John
2007-01-01
The new version of the HZETRN deterministic transport code based on Green's function methods, and the incorporation of ground-based laboratory boundary conditions, has led to the development of analytical and numerical procedures to include off-axis dispersion of primary ion beams due to small-angle multiple Coulomb scattering. In this paper we present the theoretical formulation and computational procedures to compute ion beam broadening and a methodology towards achieving a self-consistent approach to coupling multiple scattering interactions with ionization energy loss and straggling. Our initial benchmark case is a 60 MeV proton beam on muscle tissue, for which we can compare various attributes of beam broadening with Monte Carlo simulations reported in the open literature.
Geographic variation in lumbar diskectomy: a protocol for evaluation.
Barron, M; Kazandjian, V A
1992-03-01
In 1989 the Maryland Hospital Association (MHA) began developing a protocol related to lumbar diskectomy, a procedure with widely reported geographic variation in its use. The MHA's Laminectomy Advisory Committee drafted three criteria for performance of lumbar diskectomy and also developed a data-collection instrument with which the eight hospitals participating in a pilot study could abstract the necessary data from medical records. Both individual hospital and aggregate results showed wide variation in compliance with the criteria. These findings suggest research and development activities such as refinement of the data-collection instrument, use of the protocol for benchmarking, further investigation of clinical and other determinants of rate variation, and study of the effect of new diagnostic technology on utilization rates for this procedure.
von Eiff, Wilfried
2015-01-01
Hospitals worldwide are facing the same opportunities and threats: the demographics of an aging population; steady increases in chronic diseases and severe illnesses; and a steadily increasing demand for medical services with more intensive treatment for multi-morbid patients. Additionally, patients are becoming more demanding. They expect high-quality medicine within a dignity-driven and painless healing environment. The severe financial pressures that these developments entail oblige care providers to pursue ever more cost containment and to apply process reengineering, as well as continuous performance improvement measures, so as to achieve future financial sustainability. At the same time, regulators are calling for improved patient outcomes. Benchmarking and best practice management are proven performance improvement tools for enabling hospitals to achieve a higher level of clinical output quality, enhanced patient satisfaction, and care delivery capability, while simultaneously containing and reducing costs. This chapter aims to clarify what benchmarking is and what it is not. Furthermore, it is stated that benchmarking is a powerful managerial tool for improving decision-making processes that can contribute to the above-mentioned improvement measures in health care delivery. The benchmarking approach described in this chapter is oriented toward the philosophy of an input-output model and is explained based on practical international examples from different industries in various countries. Benchmarking is not a project with a defined start and end point, but a continuous initiative of comparing key performance indicators, process structures, and best practices from best-in-class companies inside and outside industry. Benchmarking is an ongoing process of measuring and searching for best-in-class performance: Measure yourself with yourself over time against key performance indicators. Measure yourself against others. Identify best practices. Equal or exceed this best practice in your institution. Focus on simple and effective ways to implement solutions. Comparing only figures, such as average length of stay, costs of procedures, infection rates, or out-of-stock rates, can easily lead to wrong conclusions and poor decision making, often with disastrous consequences. Just looking at figures and ratios is not the basis for detecting potential excellence. It is necessary to look beyond the numbers to understand how processes work and contribute to best-in-class results. Best practices from even quite different industries can enable hospitals to leapfrog results in patient orientation, clinical excellence, and cost-effectiveness. In contrast to common benchmarking approaches, it is pointed out that a comparison without "looking behind the figures" (that is, without familiarity with the process structure, process dynamics and drivers, process institutions/rules, and process-related incentive components) is of very limited reliability and quality in its findings. In order to demonstrate the transferability of benchmarking results between different industries, practical examples from the health care, automotive, and hotel service industries have been selected. Additionally, it is depicted that international comparisons between hospitals providing medical services in different health care systems have great potential for achieving leapfrog results in medical quality, organization of service provision, effective work structures, purchasing and logistics processes, management, etc.
Open-source platform to benchmark fingerprints for ligand-based virtual screening
2013-01-01
Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
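To make the retrospective benchmarking procedure concrete, the following is a minimal sketch of one such run in Python, assuming RDKit and scikit-learn are available; the fingerprint choice (Morgan/ECFP-like), the toy SMILES strings, and the single-query setup are illustrative assumptions, not the platform's published data sets or code.

```python
# Minimal sketch: rank actives and decoys by Tanimoto similarity to a query
# and score the ranking with ROC AUC, one common evaluation method.
from rdkit import Chem
from rdkit.Chem import AllChem
from rdkit import DataStructs
from sklearn.metrics import roc_auc_score

def morgan_fp(smiles, radius=2, n_bits=2048):
    """Morgan (ECFP-like) bit-vector fingerprint for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)

def rank_by_similarity(query_smiles, actives, decoys):
    """Score actives (label 1) and decoys (label 0) by Tanimoto similarity
    to a single query and return the ROC AUC of the resulting ranking."""
    query_fp = morgan_fp(query_smiles)
    labels, scores = [], []
    for smi, label in [(s, 1) for s in actives] + [(s, 0) for s in decoys]:
        scores.append(DataStructs.TanimotoSimilarity(query_fp, morgan_fp(smi)))
        labels.append(label)
    return roc_auc_score(labels, scores)

# Toy example: two "actives" and two "decoys" against an aspirin query.
auc = rank_by_similarity(
    "CC(=O)Oc1ccccc1C(=O)O",                           # query
    actives=["CC(=O)Oc1ccccc1C(=O)OC", "OC(=O)c1ccccc1O"],
    decoys=["CCCCCC", "c1ccncc1"],
)
print(f"ROC AUC for this query: {auc:.2f}")
```

In the platform described above, such per-query scores would be averaged over many randomly selected queries and several evaluation methods before fingerprints are compared.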
NASA Astrophysics Data System (ADS)
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
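For reference, the underlying equation is presumably the standard axisymmetric Grad-Shafranov equation from the fusion literature (an assumption about notation; the paper may write it differently), with p(Ψ) the plasma pressure and F(Ψ) = rB_φ:

```latex
% Standard axisymmetric Grad-Shafranov equation (fusion-literature form).
\frac{\partial^{2}\Psi}{\partial r^{2}}
  - \frac{1}{r}\frac{\partial \Psi}{\partial r}
  + \frac{\partial^{2}\Psi}{\partial Z^{2}}
  = -\mu_{0}\, r^{2}\frac{dp}{d\Psi} - F\frac{dF}{d\Psi}
```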
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects
NASA Technical Reports Server (NTRS)
West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)
2000-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark-quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify the fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser-based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.
[Data supporting quality circle management of inpatient depression treatment].
Brand, S; Härter, M; Sitta, P; van Calker, D; Menke, R; Heindl, A; Herold, K; Kudling, R; Luckhaus, C; Rupprecht, U; Sanner, Dirk; Schmitz, D; Schramm, E; Berger, M; Gaebel, W; Schneider, F
2005-07-01
Several quality assurance initiatives in health care have been undertaken during the past years. The next step consists of systematically combining single initiatives in order to build up a strategic quality management. In a German multicenter study, the quality of inpatient depression treatment was measured in ten psychiatric hospitals. Half of the hospitals received comparative feedback on their individual results in comparison to the other hospitals (benchmarking). These benchmarks were used by each hospital as a statistical basis for in-house quality work to improve the quality of depression treatment. Depending on each hospital's differences in procedure and outcome, different goals were chosen. There were also differences with respect to structural characteristics, strategies, and outcome. The feedback from participants about data-based quality circles in general and the availability of benchmarking data was positive. The necessity of carefully choosing quality circle members and of professional moderation became obvious. Data-based quality circles including benchmarking have proven to be useful for quality management in inpatient depression care.
Lim, Wee Loon; Wibowo, Antoni; Desa, Mohammad Ishak; Haron, Habibollah
2016-01-01
The quadratic assignment problem (QAP) is an NP-hard combinatorial optimization problem with a wide variety of applications. Biogeography-based optimization (BBO), a relatively new optimization technique based on the biogeography concept, uses the idea of migration strategy of species to derive algorithm for solving optimization problems. It has been shown that BBO provides performance on a par with other optimization methods. A classical BBO algorithm employs the mutation operator as its diversification strategy. However, this process will often ruin the quality of solutions in QAP. In this paper, we propose a hybrid technique to overcome the weakness of classical BBO algorithm to solve QAP, by replacing the mutation operator with a tabu search procedure. Our experiments using the benchmark instances from QAPLIB show that the proposed hybrid method is able to find good solutions for them within reasonable computational times. Out of 61 benchmark instances tested, the proposed method is able to obtain the best known solutions for 57 of them. PMID:26819585
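The core idea, replacing BBO's mutation step with a short tabu search over pairwise swaps, can be sketched compactly. The following Python sketch is illustrative only: the population size, tabu tenure, migration repair rule, and tiny flow/distance matrices are assumptions, not the authors' implementation.

```python
# Illustrative BBO-with-tabu hybrid for the QAP. Solutions are permutations;
# cost = sum over (i, j) of flow[i][j] * dist[perm[i]][perm[j]].
import random

def qap_cost(perm, flow, dist):
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def tabu_swap_search(perm, flow, dist, iters=50, tenure=7):
    """Short tabu search over 2-swaps, used in place of BBO mutation."""
    best, best_cost = list(perm), qap_cost(perm, flow, dist)
    cur = list(best)
    tabu = {}
    for it in range(iters):
        n = len(cur)
        candidates = []
        for i in range(n):
            for j in range(i + 1, n):
                cur[i], cur[j] = cur[j], cur[i]
                c = qap_cost(cur, flow, dist)
                cur[i], cur[j] = cur[j], cur[i]
                # Aspiration: a tabu move is allowed if it beats the best.
                if tabu.get((i, j), -1) < it or c < best_cost:
                    candidates.append((c, i, j))
        if not candidates:
            break
        c, i, j = min(candidates)
        cur[i], cur[j] = cur[j], cur[i]
        tabu[(i, j)] = it + tenure
        if c < best_cost:
            best, best_cost = list(cur), c
    return best

def bbo_qap(flow, dist, pop_size=20, generations=100):
    n = len(flow)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: qap_cost(p, flow, dist))
        # Migration: worse habitats import assignments from better ones,
        # repaired so each solution stays a valid permutation.
        for k in range(pop_size // 2, pop_size):
            donor = pop[random.randrange(pop_size // 2)]
            child = donor[:n // 2]
            child += [x for x in pop[k] if x not in child]
            pop[k] = tabu_swap_search(child, flow, dist)
    return min(pop, key=lambda p: qap_cost(p, flow, dist))

flow = [[0, 3, 1], [3, 0, 2], [1, 2, 0]]
dist = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(bbo_qap(flow, dist, pop_size=8, generations=20))
```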
NASA Astrophysics Data System (ADS)
Havemann, Frank; Heinz, Michael; Struck, Alexander; Gläser, Jochen
2011-01-01
We propose a new local, deterministic and parameter-free algorithm that detects fuzzy and crisp overlapping communities in a weighted network and simultaneously reveals their hierarchy. Using a local fitness function, the algorithm greedily expands natural communities of seeds until the whole graph is covered. The hierarchy of communities is obtained analytically by calculating resolution levels at which communities grow rather than numerically by testing different resolution levels. This analytic procedure is not only more exact than its numerical alternatives such as LFM and GCE but also much faster. Critical resolution levels can be identified by searching for intervals in which large changes of the resolution do not lead to growth of communities. We tested our algorithm on benchmark graphs and on a network of 492 papers in information science. Combined with a specific post-processing, the algorithm gives much more precise results on LFR benchmarks with high overlap compared to other algorithms and performs very similarly to GCE.
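The greedy expansion step can be illustrated with an LFM-style local fitness function f(C) = k_in / (k_in + k_out)^α. The Python sketch below (using networkx) fixes α instead of computing the analytic resolution levels that distinguish the authors' algorithm, so it shows only the expansion mechanics.

```python
# Greedy expansion of a natural community around a seed node,
# using the LFM-style fitness f(C) = k_in / (k_in + k_out)^alpha.
import networkx as nx

def fitness(G, community, alpha=1.0):
    k_in = 2 * G.subgraph(community).number_of_edges()   # internal degree sum
    k_out = sum(1 for u in community for v in G.neighbors(u)
                if v not in community)                    # boundary edges
    return k_in / (k_in + k_out) ** alpha if (k_in + k_out) else 0.0

def expand_seed(G, seed, alpha=1.0):
    """Grow a community greedily: add the neighbor that increases fitness
    most; stop when no addition helps."""
    community = {seed}
    improved = True
    while improved:
        improved = False
        frontier = {v for u in community for v in G.neighbors(u)} - community
        base = fitness(G, community, alpha)
        best, best_gain = None, 0.0
        for v in frontier:
            gain = fitness(G, community | {v}, alpha) - base
            if gain > best_gain:
                best, best_gain = v, gain
        if best is not None:
            community.add(best)
            improved = True
    return community

G = nx.karate_club_graph()
print(sorted(expand_seed(G, seed=0, alpha=1.0)))
```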
Investigation of the transient fuel preburner manifold and combustor
NASA Technical Reports Server (NTRS)
Wang, Ten-See; Chen, Yen-Sen; Farmer, Richard C.
1989-01-01
A computational fluid dynamics (CFD) model with finite rate reactions, FDNS, was developed to study the start transient of the Space Shuttle Main Engine (SSME) fuel preburner (FPB). FDNS is a time-accurate, pressure-based CFD code. An upwind scheme was employed for spatial discretization. The upwind scheme was based on second- and fourth-order central differencing with adaptive artificial dissipation. A state-of-the-art two-equation k-epsilon (T) turbulence model was employed for the turbulence calculation. A Padé Rational Solution (PARASOL) chemistry algorithm was coupled with the point implicit procedure. FDNS was benchmarked with three well-documented experiments: a confined swirling coaxial jet, a non-reactive ramjet dump combustor, and a reactive ramjet dump combustor. Excellent comparisons were obtained for the benchmark cases. The code was then used to study the start transient of an axisymmetric SSME fuel preburner. Predicted transient operation of the preburner agrees well with experiment. Furthermore, it was also found that an appreciable amount of unburned oxygen entered the turbine stages.
Monitoring land subsidence in Sacramento Valley, California, using GPS
Blodgett, J.C.; Ikehara, M.E.; Williams, Gary E.
1990-01-01
Land subsidence measurement is usually based on a comparison of bench-mark elevations surveyed at different times. These bench marks, established for mapping or the national vertical control network, are not necessarily suitable for measuring land subsidence. Also, many bench marks have been destroyed or are unstable. Conventional releveling of the study area would be costly and would require several years to complete. Differences of as much as 3.9 ft between recent leveling and published bench-mark elevations have been documented at seven locations in the Sacramento Valley. Estimates of land subsidence less than about 0.3 ft are questionable because elevation data are based on leveling and adjustment procedures that occurred over many years. A new vertical control network based on the Global Positioning System (GPS) provides highly accurate vertical control data at relatively low cost, and the survey points can be placed where needed to obtain adequate areal coverage of the area affected by land subsidence.
Supply network configuration—A benchmarking problem
NASA Astrophysics Data System (ADS)
Brandenburg, Marcus
2018-03-01
Managing supply networks is a highly relevant task that strongly influences the competitiveness of firms from various industries. Designing supply networks is a strategic process that considerably affects the structure of the whole network. In contrast, supply networks for new products are configured without major adaptations of the existing structure, but the network has to be configured before the new product is actually launched in the marketplace. Due to dynamics and uncertainties, the resulting planning problem is highly complex. However, formal models and solution approaches that support supply network configuration decisions for new products are scant. The paper at hand aims at stimulating related model-based research. To formulate mathematical models and solution procedures, a benchmarking problem is introduced which is derived from a case study of a cosmetics manufacturer. Tasks, objectives, and constraints of the problem are described in great detail and numerical values and ranges of all problem parameters are given. In addition, several directions for future research are suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dourson, M.L.
The quantitative procedures associated with noncancer risk assessment include reference dose (RfD), benchmark dose, and severity modeling. The RfD, which is part of the EPA risk assessment guidelines, is an estimate of a level that is likely to be without any health risk to sensitive individuals. The RfD requires two major judgments: the first is the choice of the critical effect(s) and its No Observed Adverse Effect Level (NOAEL); the second is the choice of an uncertainty factor. This paper discusses major assumptions and limitations of the RfD model.
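The abstract implies, without writing out, the standard EPA formulation, in which the NOAEL of the critical effect is divided by the product of uncertainty factors (UF) and an optional modifying factor (MF):

```latex
% Standard EPA formulation (an assumption; the abstract does not state it).
\mathrm{RfD} = \frac{\mathrm{NOAEL}}{\mathrm{UF} \times \mathrm{MF}}
```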
ERIC Educational Resources Information Center
Tillema, Marion; van den Bergh, Huub; Rijlaarsdam, Gert; Sanders, Ted
2013-01-01
It is the consensus that, as a result of the extra constraints placed on working memory, texts written in a second language (L2) are usually of lower quality than texts written in the first language (L1) by the same writer. However, no method is currently available for quantifying the quality difference between L1 and L2 texts. In the present…
ERIC Educational Resources Information Center
Burkhart, Joyce
St. Petersburg Junior College (SPJC), Florida, identified critical issues in e-learning practices and posed six questions in order to formulate an evaluation process. SPJC considered one question per quarter for 18 months. The questions were reviewed using the following steps: (1) examine best e-learning practices related to that question, using…
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, an improved novel approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is executed alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
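The alternation scheme can be sketched as follows in Python using SciPy's Nelder-Mead implementation; the ICA update rules, parameter values, and the Rosenbrock test function are illustrative assumptions rather than the authors' exact procedure.

```python
# Bare-bones imperialist competitive algorithm (ICA) whose imperialists are
# periodically polished with SciPy's Nelder-Mead simplex method.
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):  # one of the widely used benchmark functions
    return sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def hybrid_ica(f, dim=5, n_countries=30, n_imperialists=3, generations=40):
    rng = np.random.default_rng(0)
    countries = rng.uniform(-2, 2, size=(n_countries, dim))
    for _ in range(generations):
        countries = countries[np.argsort([f(c) for c in countries])]
        imperialists = countries[:n_imperialists]
        colonies = countries[n_imperialists:]
        # Assimilation: each colony moves part way toward an imperialist.
        for i in range(len(colonies)):
            imp = imperialists[rng.integers(n_imperialists)]
            beta = rng.uniform(0, 2)
            colonies[i] += beta * (imp - colonies[i])
        # Alternate with Nelder-Mead: polish each imperialist locally.
        for j in range(n_imperialists):
            res = minimize(f, imperialists[j], method="Nelder-Mead",
                           options={"maxiter": 50})
            imperialists[j] = res.x
        countries = np.vstack([imperialists, colonies])
    return min(countries, key=f)

best = hybrid_ica(rosenbrock)
print(rosenbrock(best))
```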
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Benchmarking reference services: step by step.
Buchanan, H S; Marshall, J G
1996-01-01
This article is a companion to an introductory article on benchmarking published in an earlier issue of Medical Reference Services Quarterly. Librarians interested in benchmarking often ask the following questions: How do I determine what to benchmark; how do I form a benchmarking team; how do I identify benchmarking partners; what's the best way to collect and analyze benchmarking information; and what will I do with the data? Careful planning is a critical success factor of any benchmarking project, and these questions must be answered before embarking on a benchmarking study. This article summarizes the steps necessary to conduct benchmarking research. Relevant examples of each benchmarking step are provided.
Patient safety: the landscape of the global research output and gender distribution.
Schreiber, Moritz; Klingelhöfer, Doris; Groneberg, David A; Brüggmann, Doerthe
2016-02-12
Patient safety is a crucial issue in medicine. Its main objective is to reduce the number of deaths and health damages that are caused by preventable medical errors. To achieve this, better health systems are needed that make mistakes less likely and their effects less detrimental, without blaming health workers for failures. Until now, there has been no in-depth scientometric analysis of this issue encompassing the interval between 1963 and 2014. Therefore, the aim of this study is to sketch a landscape of past global research output on patient safety, including the gender distribution within the discipline, by interpreting scientometric parameters. Additionally, future trends are outlined. The Core Collection of the scientific database Web of Science was searched for publications with the search term 'Patient Safety' as a title word, focused on the corresponding medical discipline. The resulting data set was analysed using the methodology implemented by the platform NewQIS. To visualise the geographical landscape, state-of-the-art techniques including density-equalising map projections were applied. 4079 articles on patient safety were identified in the period from 1900 to 2014. Most articles were published in North America, the UK and Australia. In regard to the overall number of publications, the USA is the leading country, while Switzerland exhibited the best performance in output relative to population. With regard to the ratio of the number of publications to the Gross Domestic Product (GDP) per Capita, the USA remains the leading nation, but countries like India and China with a low GDP and high population numbers are also profiting. Though the topic is a global matter, the scientific output on patient safety is centred mainly in industrialised countries. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Siamaki, Saba; Geraei, Ehsan; Zare-Farashbandi, Firoozeh
2014-01-01
Background: Scientific collaboration is among the most important subjects in scientometrics, and many studies have investigated this concept to this day. The goal of the current study is the investigation of scientific collaboration and co-authorship patterns of researchers in the field of library and information science in Iran between years 2005 and 2009. Materials and Methods: The current study uses the scientometrics method. The statistical population consists of 942 documents published in Iranian library and information science journals between years 2005 and 2009. Collaboration coefficient, collaboration index (CI), and degree of collaboration (DC) were used for data analysis. Findings: The findings showed that among the 942 investigated documents, 506 documents (53.70%) were created by one individual researcher and 436 documents (46.30%) were the result of collaboration between two or more researchers. Also, the highest rank across the different authorship patterns belonged to the National Journal of Librarianship and Information Organization (code H). Conclusion: The average collaboration coefficient for the library and information science researchers in the investigated time frame was 0.23. The closer this coefficient is to 1, the higher the level of collaboration between authors, while a coefficient near zero shows a tendency to prefer individual articles. The highest collaboration index, with an average of 1.92 authors per paper, was seen in year 1388. The five-year collaboration index in library and information science in Iran was 1.58, and the average degree of collaboration between researchers in the investigated papers was 0.46, which shows that library and information science researchers have a tendency for co-authorship. Moreover, co-authorship has increased in recent years, reaching its highest level in year 1388. The researchers' collaboration coefficient also shows a relative increase between years 1384 and 1388. The National Journal of Librarianship and Information Organization has the highest rank among all the investigated journals based on collaboration coefficient, collaboration index (CI), and degree of collaboration (DC). PMID:25250365
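For reference, the three measures are conventionally defined as below (standard scientometric definitions assumed here, not quoted from the paper), where f_j is the number of papers with j authors, N the total number of papers, N_m the number of multi-authored papers, and k the maximum number of authors per paper:

```latex
% Collaboration index (CI), degree of collaboration (DC, Subramanyam 1983),
% and collaboration coefficient (CC, Ajiferuke et al. 1988).
\mathrm{CI} = \frac{\sum_{j=1}^{k} j\, f_j}{N}, \qquad
\mathrm{DC} = \frac{N_m}{N}, \qquad
\mathrm{CC} = 1 - \frac{1}{N}\sum_{j=1}^{k} \frac{f_j}{j}
```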
Effects of Print Publication Lag in Dual Format Journals on Scientometric Indicators
Heneberg, Petr
2013-01-01
Background: Publication lag between manuscript submission and final publication is considered an important factor affecting the decision to submit, the timeliness of presented data, and the scientometric measures of the particular journal. Dual-format peer-reviewed journals (publishing both print and online editions of their content) have adopted a broadly accepted strategy to shorten the publication lag: publishing accepted manuscripts online ahead of their print editions, which may follow days, but also years, later. The effects of this widespread practice on the calculation of the immediacy index (the average number of times an article is cited in the year it is published) have never been analyzed. Methodology/Principal Findings: The Scopus database (which contains nearly up-to-date documents in press, but does not reveal citations by these documents until they are finalized) was searched for the journals with the highest total counts of articles in press, or the highest counts of articles in press appearing online in 2010–2011. The number of citations received by articles in press available online was found to be nearly equal to the number received within the year in which the document was assigned to a journal issue. Thus, online publication of in-press articles severely affects the calculation of the immediacy index of their source titles, and disadvantages online-only and print-only journals when evaluating them according to the immediacy index and probably also according to the impact factor and similar measures. Conclusions/Significance: Caution should be taken when evaluating dual-format journals with a long publication lag. Further research should answer the question of whether the immediacy index should be replaced by an indicator based on the date of first publication (online or in print, whichever comes first) to eliminate the problems analyzed in this report. The information value of the immediacy index is further questioned by the very high ratio of authors' self-citations within the citation window used for its calculation. PMID:23573216
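A toy calculation makes the distortion concrete. The immediacy index of year Y is the number of citations received in Y by items published in Y, divided by the number of items published in Y; the hypothetical numbers below (not from the study) show how the print-date and online-first conventions diverge for a dual-format journal.

```python
# Hypothetical dual-format journal: one article appeared online in 2011 but
# "in print" in 2012, so its 2012 citations inflate the print-date immediacy.
articles = [
    # (online_year, print_year, citations_received_in_2012)
    (2011, 2012, 4),   # online ahead of print
    (2012, 2012, 1),
    (2012, 2012, 0),
]

def immediacy(articles, year, date=lambda a: a[1]):
    """Immediacy index for `year` under a chosen publication-date convention."""
    pub = [a for a in articles if date(a) == year]
    return sum(a[2] for a in pub) / len(pub)

print(immediacy(articles, 2012))                       # print-date: 5/3 = 1.67
print(immediacy(articles, 2012, date=lambda a: a[0]))  # online-first: 1/2 = 0.5
```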
Zarei, Mozhdeh; Bagheri-Saweh, Mohammad Iraj; Rasolabadi, Masoud; Vakili, Ronak; Seidi, Jamal; Kalhor, Marya Maryam; Etaee, Farshid; Gharib, Alireza
2017-01-01
Introduction: As a common type of malignancy, breast cancer is one of the major causes of death in women globally. The purpose of the current study was to analyze Iran's research performance on breast cancer in the context of national and international studies, as shown in the publications indexed in the Scopus database during 1991–2015. Methods: Data were retrieved from the Scopus citation database in this scientometric study. The following search string was employed: "breast cancer OR breast malignancy OR breast tumor OR mammary ductal carcinoma" in the main title, abstract and keywords, with Iran in the affiliation field. The terms were searched in Scopus using the tab specified for searching documents. The time span analyzed was 1991 to 2015 inclusive. The results were analyzed using the analysis tools of Scopus. Results: Iran's publication output on breast cancer research indexed in Scopus during 1991–2015 consists of 2,399 papers, an average of 95.96 papers per year, achieving an h-index of 48. Iranian breast cancer research articles received 15,574 citations during 1991–2015, and the average citation count per paper was 6.49. Iran ranked 27th among the top 30 nations, with a worldwide share of 0.67%. The 20 top publishing journals published 744 (31%) of the Iranian research articles on breast cancer; among them were 15 Iranian journals. Conclusion: The number of Iranian research papers on breast cancer, and the number of citations to them, is increasing. Although the quantity and quality of papers are increasing, given the prevalence of breast cancer in Iran and the ineffectiveness of screening programs in the early detection of cases, more effort should be made, and Iranian policy makers should consider more investment in breast cancer research. PMID:28465812
Influenza: a scientometric and density-equalizing analysis.
Fricke, Ralph; Uibel, Stefanie; Klingelhoefer, Doris; Groneberg, David A
2013-09-30
Novel influenza in 2009 caused by H1N1, as well as the seasonal influenza, still are a challenge for the public health sectors worldwide. An increasing number of publications referring to this infectious disease make it difficult to distinguish relevant research output. The current study used scientometric indices for a detailed investigation of influenza-related research activity and the method of density-equalizing mapping to make the differences in overall research worldwide apparent. The aim of the study was to compare scientific effort over time as well as geographical distribution, including cooperation at the national and international level. Publication data were retrieved from the Web of Science (WoS) of Thomson Scientific and subsequently analysed to show geographical distributions and the development of research output over time. The query retrieved 51,418 publications listed in WoS for the time interval from 1900 to 2009. There is a continuous increase in research output and general citation activity, especially since 1990. In all, the 51,418 identified publications were published by researchers from 151 different countries. Scientists from the USA participate in more than 37 percent of all publications, followed by researchers from the UK and Germany with more than five percent. In addition, the USA is the focus of international cooperation. In terms of the number of publications on influenza, the Journal of Virology ranks first, followed by Vaccine and Virology. The highest impact factor (IF 2009) in this selection can be established for The Lancet (30.75). Robert Webster appears to be the most prolific author, contributing the most publications in the field of influenza. This study reveals an increasing and wide research interest in influenza. Nevertheless, citation-based declarations of scientific quality should be considered critically due to distortion by self-citation and co-authorship.
Del Ponte, Emerson M; Pethybridge, Sarah J; Bock, Clive H; Michereff, Sami J; Machado, Franklin J; Spolti, Piérri
2017-10-01
Standard area diagrams (SAD) have long been used as a tool to aid the estimation of plant disease severity, an essential variable in phytopathometry. Formal validation of SAD was not considered prior to the early 1990s, when considerable effort began to be invested in developing SAD and assessing their value for improving accuracy of estimates of disease severity in many pathosystems. Peer-reviewed literature post-1990 was identified, selected, and cataloged in bibliographic software for further scrutiny and extraction of scientometric, pathosystem-related, and methodological-related data. In total, 105 studies (127 SAD) were found and authored by 327 researchers from 10 countries, mainly from Brazil. The six most prolific authors published at least seven studies. The scientific impact of a SAD article, based on annual citations after publication year, was affected by disease significance, the journal's impact factor, and methodological innovation. The reviewed SAD encompassed 48 crops and 103 unique diseases across a range of plant organs. Severity was quantified largely by image analysis software such as QUANT, APS-Assess, or a LI-COR leaf area meter. The most typical SAD comprised five to eight black-and-white drawings of leaf diagrams, with severity increasing nonlinearly. However, there was a trend toward using true-color photographs or stylized representations in a range of color combinations and more linear (equally spaced) increments of severity. A two-step SAD validation approach was used in 78 of 105 studies for which linear regression was the preferred method, but a trend toward using Lin's concordance correlation analysis and hypothesis tests to detect the effect of SAD on accuracy was apparent. Reliability measures, when obtained, mainly considered variation among rather than within raters. The implications of the findings and knowledge gaps are discussed. A list of best practices for designing and implementing SAD and a website called SADBank for hosting SAD research data are proposed.
NASA Astrophysics Data System (ADS)
Nagai, S.; Eto, S.; Tadokoro, K.; Watanabe, T.
2011-12-01
On-land geodetic observations are not sufficient to monitor crustal activity in and around subduction zones, so seafloor geodetic observations are required. However, the present accuracy of seafloor geodetic observation is of the order of 1 cm or larger, which makes it difficult to detect departures from plate motion over short time intervals, that is, the plate coupling rate and its spatio-temporal variation. Our group has developed an observation system and methodology for seafloor geodesy that combines kinematic GPS and ocean acoustic ranging. One important influencing factor is acoustic velocity change in the ocean, due to changes in temperature, ocean currents on different scales, and so on. A typical perturbation of acoustic velocity produces on the order of 1 ms difference in travel time, which corresponds to a 1 m difference in ray length. We have investigated this effect in seafloor geodesy using both observed and synthetic data, to reduce the estimation error of benchmarker (transponder) positions and to develop our strategy for observation and analysis. In this paper, we focus on forward modeling of travel times of acoustic ranging data and on recovery tests using synthetic data, compared with observed results [Eto et al., 2011; in this meeting]. The estimation procedure for benchmarker positions is similar to those used in earthquake location and seismic tomography, so we have applied methods from seismic studies, especially tomographic inversion. First, we use the one-dimensional velocity inversion with station corrections proposed by Kissling et al. [1994] to detect spatio-temporal changes in ocean acoustic velocity from data observed in the Suruga-Nankai Trough, Japan. These analyses clarified some important features of the travel time data [Eto et al., 2011]. Most of them can be explained by a small velocity anomaly at depths of 300 m or shallower, through forward modeling of travel time data using a simple velocity structure with a velocity anomaly. However, owing to the simple data acquisition procedure, we cannot precisely resolve velocity anomalies in space and time, that is, the size of an anomaly and its movement. As a next step, we demonstrate recovery of benchmarker positions in tomographic inversion using synthetic data that include anomalous travel times, in order to develop an approach for calculating benchmarker positions with high accuracy. In the tomographic inversion, we introduce constraints corresponding to realistic conditions. This step provides a newly developed system for detecting crustal deformation in seafloor geodesy and new findings for understanding deformation in and around plate boundaries.
Itri, Jason N; Jones, Lisa P; Kim, Woojin; Boonn, William W; Kolansky, Ana S; Hilton, Susan; Zafar, Hanna M
2014-04-01
Monitoring complications and diagnostic yield for image-guided procedures is an important component of maintaining high quality patient care promoted by professional societies in radiology and accreditation organizations such as the American College of Radiology (ACR) and Joint Commission. These outcome metrics can be used as part of a comprehensive quality assurance/quality improvement program to reduce variation in clinical practice, provide opportunities to engage in practice quality improvement, and contribute to developing national benchmarks and standards. The purpose of this article is to describe the development and successful implementation of an automated web-based software application to monitor procedural outcomes for US- and CT-guided procedures in an academic radiology department. The open source tools PHP: Hypertext Preprocessor (PHP) and MySQL were used to extract relevant procedural information from the Radiology Information System (RIS), auto-populate the procedure log database, and develop a user interface that generates real-time reports of complication rates and diagnostic yield by site and by operator. Utilizing structured radiology report templates resulted in significantly improved accuracy of information auto-populated from radiology reports, as well as greater compliance with manual data entry. An automated web-based procedure log database is an effective tool to reliably track complication rates and diagnostic yield for US- and CT-guided procedures performed in a radiology department.
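A minimal, self-contained sketch of the kind of report the application generates is given below, using Python and sqlite3 for portability (the actual system used PHP and MySQL); the table schema and column names are hypothetical.

```python
# Complication rate and diagnostic yield per operator from a procedure log.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE procedure_log (
    operator TEXT, site TEXT,
    complication INTEGER,   -- 1 if any complication occurred
    diagnostic INTEGER      -- 1 if the specimen was diagnostic
);
INSERT INTO procedure_log VALUES
    ('dr_a', 'liver', 0, 1), ('dr_a', 'liver', 1, 1),
    ('dr_b', 'lung',  0, 0), ('dr_b', 'lung',  0, 1);
""")

report = conn.execute("""
SELECT operator,
       COUNT(*)                  AS n_procedures,
       AVG(complication) * 100.0 AS complication_rate_pct,
       AVG(diagnostic)   * 100.0 AS diagnostic_yield_pct
FROM procedure_log
GROUP BY operator
""").fetchall()
for row in report:
    print(row)
```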
Amirghasemi, Mehrdad; Zamani, Reza
2014-01-01
This paper presents an effective procedure for solving the job shop problem. Synergistically combining small and large neighborhood schemes, the procedure consists of four components, namely (i) a construction method for generating semi-active schedules by a forward-backward mechanism, (ii) a local search for manipulating a small neighborhood structure guided by a tabu list, (iii) a feedback-based mechanism for perturbing the solutions generated, and (iv) a very large-neighborhood local search guided by a forward-backward shifting bottleneck method. The combination of shifting bottleneck mechanism and tabu list is used as a means of the manipulation of neighborhood structures, and the perturbation mechanism employed diversifies the search. A feedback mechanism, called repeat-check, detects consequent repeats and ignites a perturbation when the total number of consecutive repeats for two identical makespan values reaches a given threshold. The results of extensive computational experiments on the benchmark instances indicate that the combination of these four components is synergetic, in the sense that they collectively make the procedure fast and robust.
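Component (iii), the repeat-check feedback, can be sketched in a few lines of Python; the threshold value and the simplification of tracking repeats of a single makespan value are assumptions for illustration.

```python
# Watch the stream of makespan values from the local search and signal a
# perturbation once consecutive repeats reach a threshold.
def repeat_check(makespans, threshold=5):
    """Yield True (perturb now) or False for each incoming makespan."""
    last, repeats = None, 0
    for m in makespans:
        repeats = repeats + 1 if m == last else 0
        last = m
        if repeats >= threshold:
            repeats = 0
            yield True   # caller should perturb the current solution
        else:
            yield False

stream = [97, 95, 95, 94, 94, 94, 94, 94, 94, 93]
print(list(repeat_check(stream, threshold=4)))
```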
Limitations of Community College Benchmarking and Benchmarks
ERIC Educational Resources Information Center
Bers, Trudy H.
2006-01-01
This chapter distinguishes between benchmarks and benchmarking, describes a number of data and cultural limitations to benchmarking projects, and suggests that external demands for accountability are the dominant reason for growing interest in benchmarking among community colleges.
Elemental analysis by IBA and NAA — A critical comparison
NASA Astrophysics Data System (ADS)
Watterson, J. I. W.
1988-12-01
In this review, neutron activation analysis (NAA) and ion beam analysis (IBA) are compared in the context of the entire field of analytical science using the discipline of scientometrics, as developed by Braun and Lyon. This perspective on the relative achievements of the two methods is then refined by considering and comparing their particular attributes and characteristics, particularly in relation to their differing degrees of maturity. The assessment shows that NAA, as the more mature method, is the most widely applied nuclear technique, whereas the special capabilities of IBA give it the unique ability to provide information about surface composition and elemental distribution; IBA, however, is still relatively immature, and it is not yet possible to define its ultimate role with any confidence.
Joint estimation of motion and illumination change in a sequence of images
NASA Astrophysics Data System (ADS)
Koo, Ja-Keoung; Kim, Hyo-Hun; Hong, Byung-Woo
2015-09-01
We present an algorithm that simultaneously computes optical flow and estimates illumination change from an image sequence in a unified framework. We propose an energy functional consisting of the conventional optical flow energy based on the Horn-Schunck method and an additional constraint designed to compensate for illumination changes. Any undesirable illumination change that occurs during imaging while the optical flow is being computed is treated as a nuisance factor. In contrast to the conventional optical flow algorithm based on the Horn-Schunck functional, which assumes the brightness constancy constraint, our algorithm is shown to be robust with respect to temporal illumination changes in the computation of optical flow. An efficient conjugate gradient descent technique is used as the numerical scheme in the optimization procedure. Experimental results obtained on the Middlebury benchmark dataset demonstrate the robustness and effectiveness of our algorithm. In addition, a comparative analysis of our algorithm and the Horn-Schunck algorithm is performed on an additional test dataset, constructed by applying a variety of synthetic bias fields to the original image sequences in the Middlebury benchmark dataset, to demonstrate that our algorithm outperforms the Horn-Schunck algorithm. The superior performance of the proposed method is observed in both qualitative visualizations and quantitative accuracy measures when compared to the Horn-Schunck optical flow algorithm, which easily yields poor results in the presence of even small illumination changes that violate the brightness constancy constraint.
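One common way to write such a functional (an assumption about form; the paper's illumination term may differ) augments the linearized Horn-Schunck energy with an additive illumination field c and its own smoothness penalty:

```latex
% Horn-Schunck data and smoothness terms plus an additive illumination
% field c with smoothness weight beta (illustrative form).
E(u, v, c) = \int_{\Omega} \left( I_x u + I_y v + I_t - c \right)^{2}
  + \alpha^{2} \left( |\nabla u|^{2} + |\nabla v|^{2} \right)
  + \beta^{2} |\nabla c|^{2} \; dx\, dy
```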
Benchmarking Procedures for High-Throughput Context Specific Reconstruction Algorithms
Pacheco, Maria P.; Pfau, Thomas; Sauter, Thomas
2016-01-01
Recent progress in high-throughput data acquisition has shifted the focus from data generation to processing and understanding of how to integrate collected information. Context-specific reconstruction based on generic genome-scale models like ReconX or HMR has the potential to become a diagnostic and treatment tool tailored to the analysis of specific individuals. The respective computational algorithms require a high level of predictive power, robustness and sensitivity. Although multiple context-specific reconstruction algorithms were published in the last 10 years, only a fraction of them is suitable for model building based on human high-throughput data. Among other reasons, this might be due to problems arising from the limitation to only one metabolic target function or from arbitrary thresholding. This review describes and analyses common validation methods used for testing model-building algorithms. Two major methods can be distinguished: consistency testing and comparison-based testing. The first is concerned with robustness against noise, e.g., missing data due to the impossibility of distinguishing between the signal and the background of non-specific binding of probes in a microarray experiment, and with whether distinct sets of input expressed genes corresponding to, e.g., different tissues yield distinct models. The latter covers methods comparing sets of functionalities, comparison with existing networks, or comparison with additional databases. We test those methods on several available algorithms and deduce properties of these algorithms that can be compared with future developments. The set of tests performed can therefore serve as a benchmarking procedure for future algorithms. PMID:26834640
Kinoshita, Manabu; Taniguchi, Mai; Takagaki, Masatoshi; Seike, Nobuhisa; Hashimoto, Naoya; Yoshimine, Toshiki
2015-05-01
Neurosurgical patties are the most frequently used instruments during neurosurgical procedures, and high performance is required of them to ensure safe operations. They must offer cushioning, water absorption, water retention, and non-adherence to tissue. Here, the authors describe a revised neurosurgical patty that is superior in all respects to the conventional patty available in Japan. Patty characteristics were critically and scientifically evaluated using various in vitro assays. Moreover, a novel ex vivo evaluation system focusing on the adherence characteristics of the neurosurgical patty was developed. The proposed assay could provide benchmark data for comparing different neurosurgical patties, offering neurosurgeons objective data on the performance of patties. The newly developed patty was also evaluated in real neurosurgical settings and showed superb performance during various neurosurgical procedures.
Solving satisfiability problems using a novel microarray-based DNA computer.
Lin, Che-Hsin; Cheng, Hsiao-Ping; Yang, Chang-Biau; Yang, Chia-Ning
2007-01-01
An algorithm based on a modified sticker model, accompanied by an advanced MEMS-based microarray technology, is demonstrated to solve the SAT problem, which has long served as a benchmark in DNA computing. Unlike conventional DNA computing algorithms, which need an initial data pool covering correct and incorrect answers and then execute a series of separation procedures to destroy the unwanted ones, we build solutions in parts, satisfying one clause at each step, and eventually solve the entire Boolean formula step by step. No time-consuming sample preparation procedures or delicate sample-application equipment were required for the computing process. Moreover, experimental results show that the bound DNA sequences can withstand the chemical solutions used during the computing process, so that the proposed method should be useful in dealing with large-scale problems.
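The clause-by-clause strategy has a direct in-silico analogue, sketched below in Python: partial assignments are extended so that each step satisfies one more clause, instead of generating a full pool and destroying wrong answers. This is a re-implementation of the idea, not the DNA protocol.

```python
# CNF encoding: each clause is a list of signed ints (3 = x3, -3 = NOT x3).
def solve_sat(clauses, n_vars):
    pool = [{}]                      # partial assignments, initially empty
    for clause in clauses:
        new_pool = []
        for partial in pool:
            for lit in clause:
                var, val = abs(lit), lit > 0
                if partial.get(var, val) == val:   # consistent extension
                    ext = dict(partial)
                    ext[var] = val
                    new_pool.append(ext)
        pool = new_pool
        if not pool:
            return None              # unsatisfiable
    # Fill unconstrained variables arbitrarily.
    model = pool[0]
    return {v: model.get(v, False) for v in range(1, n_vars + 1)}

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
print(solve_sat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))
```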
Chaudhry-Waterman, Nadia; Coombs, Sandra; Porras, Diego; Holzer, Ralf; Bergersen, Lisa
2014-01-01
The broad range of relatively rare procedures performed in pediatric cardiac catheterization laboratories has made the standardization of care and risk assessment in the field statistically quite problematic. However, with the growing number of patients who undergo cardiac catheterization, it has become imperative that the cardiology community overcomes these challenges to study patient outcomes. The Congenital Cardiac Catheterization Project on Outcomes was able to develop benchmarks, tools for measurement, and risk adjustment methods while exploring procedural efficacy. Based on the success of these efforts, the collaborative is pursuing a follow-up project, the Congenital Cardiac Catheterization Project on Outcomes-Quality Improvement, aimed at improving the outcomes for all patients undergoing catheterization for congenital heart disease by reducing radiation exposure.
Higher-Order Compact Schemes for Numerical Simulation of Incompressible Flows
NASA Technical Reports Server (NTRS)
Wilson, Robert V.; Demuren, Ayodeji O.; Carpenter, Mark
1998-01-01
A higher order accurate numerical procedure has been developed for solving incompressible Navier-Stokes equations for 2D or 3D fluid flow problems. It is based on low-storage Runge-Kutta schemes for temporal discretization and fourth and sixth order compact finite-difference schemes for spatial discretization. The particular difficulty of satisfying the divergence-free velocity field required in incompressible fluid flow is resolved by solving a Poisson equation for pressure. It is demonstrated that for consistent global accuracy, it is necessary to employ the same order of accuracy in the discretization of the Poisson equation. Special care is also required to achieve the formal temporal accuracy of the Runge-Kutta schemes. The accuracy of the present procedure is demonstrated by application to several pertinent benchmark problems.
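As a representative example of the spatial discretization (the classical fourth-order Padé-type compact scheme for a first derivative on a uniform grid of spacing h; the paper also employs sixth-order variants):

```latex
% Fourth-order compact first-derivative scheme on a uniform grid.
\tfrac{1}{4} f'_{i-1} + f'_{i} + \tfrac{1}{4} f'_{i+1}
  = \frac{3}{4h}\left( f_{i+1} - f_{i-1} \right)
```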
Benchmarking specialty hospitals, a scoping review on theory and practice.
Wind, A; van Harten, W H
2017-04-04
Although benchmarking may improve hospital processes, research on this subject is limited. The aim of this study was to provide an overview of publications on benchmarking in specialty hospitals and a description of study characteristics. We searched PubMed and EMBASE for articles published in English in the last 10 years. Eligible articles described a project stating benchmarking as its objective and involving a specialty hospital or specific patient category, or dealt with the methodology or evaluation of benchmarking. Of 1,817 articles identified in total, 24 were included in the study. Articles were categorized into pathway benchmarking, institutional benchmarking, articles on benchmark methodology or evaluation, and benchmarking using a patient registry. There was a large degree of variability: (1) study designs were mostly descriptive and retrospective; (2) not all studies generated and showed data in sufficient detail; and (3) there was variety in whether a benchmarking model was merely described or whether quality improvement as a consequence of the benchmark was reported upon. Most of the studies that described a benchmark model reported the use of benchmarking partners from the same industry category, sometimes from all over the world. Benchmarking seems to be more developed in eye hospitals, emergency departments and oncology specialty hospitals. Some studies showed promising improvement effects. However, the majority of the articles lacked a structured design and did not report on benchmark outcomes. In order to evaluate the effectiveness of benchmarking for improving quality in specialty hospitals, robust and structured designs are needed, including follow-up to check whether the benchmark study has led to improvements.
Ellis, Judith
2006-07-01
The aim of this article is to review published descriptions of benchmarking activity and synthesize benchmarking principles, to encourage the acceptance and use of Essence of Care as a new benchmarking approach to continuous quality improvement, and to promote its acceptance as an integral and effective part of benchmarking activity in health services. Essence of Care was launched by the Department of Health in England in 2001 to provide a benchmarking tool kit to support continuous improvement in the quality of fundamental aspects of health care, for example, privacy and dignity, nutrition and hygiene. The tool kit is now being used effectively by some frontline staff. However, use is inconsistent, and the value of the tool kit, or the support that clinical practice benchmarking requires to be effective, is not always recognized or provided by National Health Service managers, who are absorbed in the use of quantitative benchmarking approaches and the measurability of comparative performance data. This review of published benchmarking literature was based on an ever-narrowing search strategy, commencing with benchmarking within the quality improvement literature, moving to benchmarking activity in health services, and including not only published examples of benchmarking approaches and models but also consideration of web-based benchmarking data. This supported identification of how benchmarking approaches have developed and been used, while remaining true to the basic benchmarking principles of continuous improvement through comparison and sharing (Camp 1989). Descriptions of models and exemplars of quantitative, and specifically performance, benchmarking activity in industry abound (Camp 1998), with far fewer examples of more qualitative and process benchmarking approaches in use in the public services and subsequently applied to the health service (Bullivant 1998). The literature is also, in the main, descriptive in its support of the effectiveness of benchmarking activity; although this does not seem to have restricted the popularity of quantitative approaches, reticence about the value of the more qualitative approaches, for example Essence of Care, needs to be overcome in order to improve the quality of patient care and experiences. The perceived immeasurability and subjectivity of Essence of Care and clinical practice benchmarks mean that these benchmarking approaches are not always accepted or supported by health service organizations as valid benchmarking activity. In conclusion, Essence of Care benchmarking is a sophisticated clinical practice benchmarking approach that needs to be accepted as an integral part of health service benchmarking activity to support improvement in the quality of patient care and experiences.
Global harmonization of quality assurance naming conventions in radiation therapy clinical trials.
Melidis, Christos; Bosch, Walther R; Izewska, Joanna; Fidarova, Elena; Zubizarreta, Eduardo; Ulin, Kenneth; Ishikura, Satoshi; Followill, David; Galvin, James; Haworth, Annette; Besuijen, Deidre; Clark, Catharine H; Miles, Elizabeth; Aird, Edwin; Weber, Damien C; Hurkmans, Coen W; Verellen, Dirk
2014-12-01
To review the various radiation therapy quality assurance (RTQA) procedures used by the Global Clinical Trials RTQA Harmonization Group (GHG) steering committee members and present harmonized RTQA naming conventions by amalgamating procedures with similar objectives. A survey of the GHG steering committee members' RTQA procedures, their goals, and naming conventions was conducted. The RTQA procedures were classified as baseline, preaccrual, and prospective/retrospective data capture and analysis. After all the procedures were accumulated and described, extensive discussions took place to arrive at harmonized RTQA procedures and names. The RTQA procedures implemented within a trial by the GHG steering committee members vary in quantity, timing, name, and compliance criteria. The procedures of each member are based on perceived chances of noncompliance, so that the quality of radiation therapy planning and treatment does not negatively influence the measured trial outcomes. A comparison of these procedures demonstrated similarities among the goals of the various methods, but the naming given to each differed. After thorough discussions, the GHG steering committee members amalgamated the 27 RTQA procedures into 10 harmonized ones with corresponding names: facility questionnaire, beam output audit, benchmark case, dummy run, complex treatment dosimetry check, virtual phantom, individual case review, review of patients' treatment records, and protocol compliance and dosimetry site visit. Harmonized RTQA naming conventions, which can be used in all future clinical trials involving radiation therapy, have been established. Harmonized procedures will facilitate future intergroup trial collaboration and help to ensure comparable RTQA between international trials, which enables meta-analyses and reduces the RTQA workload for intergroup studies. Copyright © 2014 Elsevier Inc. All rights reserved.
Automatic yield-line analysis of slabs using discontinuity layout optimization
Gilbert, Matthew; He, Linwei; Smith, Colin C.; Le, Canh V.
2014-01-01
The yield-line method of analysis is a long established and extremely effective means of estimating the maximum load sustainable by a slab or plate. However, although numerous attempts to automate the process of directly identifying the critical pattern of yield-lines have been made over the past few decades, to date none has proved capable of reliably analysing slabs of arbitrary geometry. Here, it is demonstrated that the discontinuity layout optimization (DLO) procedure can successfully be applied to such problems. The procedure involves discretization of the problem using nodes inter-connected by potential yield-line discontinuities, with the critical layout of these then identified using linear programming. The procedure is applied to various benchmark problems, demonstrating that highly accurate solutions can be obtained, and showing that DLO provides a truly systematic means of directly and reliably automatically identifying yield-line patterns. Finally, since the critical yield-line patterns for many problems are found to be quite complex in form, a means of automatically simplifying these is presented. PMID:25104905
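In the general kinematic form given in the DLO literature (stated here as background, not copied from this paper), d collects the relative displacement or rotation variables along all potential discontinuities, g their unit dissipation coefficients, B the compatibility matrix, and f_L the live-load vector; the critical pattern minimizes internal dissipation subject to compatibility and unit external work:

```latex
% General kinematic DLO linear-programming formulation (background form).
\min_{d} \; \lambda = g^{T} d
\quad \text{subject to} \quad B\, d = 0, \qquad f_{L}^{T} d = 1, \qquad d \geq 0
```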
Cohen, Mark E; Ko, Clifford Y; Bilimoria, Karl Y; Zhou, Lynn; Huffman, Kristopher; Wang, Xue; Liu, Yaoming; Kraemer, Kari; Meng, Xiangju; Merkow, Ryan; Chow, Warren; Matel, Brian; Richards, Karen; Hart, Amy J; Dimick, Justin B; Hall, Bruce L
2013-08-01
The American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) collects detailed clinical data from participating hospitals using standardized data definitions, analyzes these data, and provides participating hospitals with reports that permit risk-adjusted comparisons with a surgical quality standard. Since its inception, the ACS NSQIP has worked to refine surgical outcomes measurements and enhance statistical methods to improve the reliability and validity of this hospital profiling. From an original focus on controlling for between-hospital differences in patient risk factors with logistic regression, ACS NSQIP has added a variable to better adjust for the complexity and risk profile of surgical procedures (procedure mix adjustment) and stabilized estimates derived from small samples by using a hierarchical model with shrinkage adjustment. New models have been developed focusing on specific surgical procedures (eg, "Procedure Targeted" models), which provide opportunities to incorporate indication and other procedure-specific variables and outcomes to improve risk adjustment. In addition, comparative benchmark reports given to participating hospitals have been expanded considerably to allow more detailed evaluations of performance. Finally, procedures have been developed to estimate surgical risk for individual patients. This article describes the development of, and justification for, these new statistical methods and reporting strategies in ACS NSQIP. Copyright © 2013 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
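The shrinkage adjustment described above can be illustrated with a minimal empirical-Bayes style sketch in Python: small-sample hospital rates are pulled toward the overall mean, with a prior strength k controlling the pooling. This is a toy, not the ACS NSQIP hierarchical logistic model; the function, k, and all figures are illustrative assumptions.

    def shrunk_rate(events, cases, overall_rate, k=50.0):
        """Weighted average of a hospital's raw event rate and the overall rate.

        k acts like a prior sample size (an assumed tuning constant): hospitals
        with few cases are shrunk strongly toward the overall rate, while large
        hospitals keep estimates close to their own raw rate.
        """
        raw = events / cases
        w = cases / (cases + k)          # weight on the hospital's own data
        return w * raw + (1.0 - w) * overall_rate

    # A 20-case hospital and a 2000-case hospital, both with a raw rate of 0.20:
    print(shrunk_rate(4, 20, overall_rate=0.10))     # ~0.129, pulled toward 0.10
    print(shrunk_rate(400, 2000, overall_rate=0.10)) # ~0.198, stays near 0.20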
NASA Technical Reports Server (NTRS)
Bell, Michael A.
1999-01-01
Informal benchmarking using personal or professional networks has taken place for many years at the Kennedy Space Center (KSC). The National Aeronautics and Space Administration (NASA) recognized early on the need to formalize the benchmarking process for better utilization of resources and improved benchmarking performance. The need to compete in a faster, better, cheaper environment has been the catalyst for formalizing these efforts. A pioneering benchmarking consortium was chartered at KSC in January 1994. The consortium, known as the Kennedy Benchmarking Clearinghouse (KBC), is a collaborative effort of NASA and all major KSC contractors. The charter of this consortium is to facilitate effective benchmarking and leverage the resulting quality improvements across KSC. The KBC acts as a resource with experienced facilitators and a proven process. One of the initial actions of the KBC was to develop a holistic methodology for Center-wide benchmarking. This approach integrates the best features of proven benchmarking models (i.e., Camp, Spendolini, Watson, and Balm). This cost-effective alternative to conventional benchmarking approaches has provided a foundation for consistent benchmarking at KSC through the development of common terminology, tools, and techniques. Through these efforts, a foundation and infrastructure have been built which allow short-duration benchmarking studies yielding results, gleaned from world-class partners, that can be readily implemented. The KBC has been recognized with the Silver Medal Award (in the applied research category) from the International Benchmarking Clearinghouse.
Etard, Cécile; Bigand, Emeline; Salvat, Cécile; Vidal, Vincent; Beregi, Jean Paul; Hornbeck, Amaury; Greffier, Joël
2017-10-01
A national retrospective survey on patient doses was performed by the French Society of Medical Physicists to assess reference levels (RLs) in interventional radiology as required by the European Directive 2013/59/Euratom. Fifteen interventional procedures in neuroradiology, vascular radiology and osteoarticular radiology were analysed. Kerma area product (KAP), fluoroscopy time (FT), reference air kerma and number of images were recorded for 10 to 30 patients per procedure. RLs were calculated as the 3rd quartiles of the distributions. Results on 4600 procedures from 36 departments confirmed the large variability in patient dose for the same procedure. RLs were proposed for the four dosimetric estimators and the 15 procedures. RLs in terms of KAP and FT were 90 Gy.cm² and 11 min for cerebral angiography, 35 Gy.cm² and 16 min for biliary drainage, 75 Gy.cm² and 6 min for lower limbs arteriography and 70 Gy.cm² and 11 min for vertebroplasty. For these four procedures, RLs were defined according to the complexity of the procedure. For all the procedures, the results were lower than most of those already published. This study reports RLs in interventional radiology based on a national survey. Continual evolution of practices and technologies requires regular updates of RLs. • Delivered dose in interventional radiology depends on procedure, practice and patient. • National RLs are proposed for 15 interventional procedures. • Reference levels (RLs) are useful to benchmark practices and optimize protocols. • RLs are proposed for kerma area product, air kerma, fluoroscopy time and number of images. • RLs should be adapted to the procedure complexity and updated regularly.
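Because the RLs above are defined as 3rd quartiles of the observed distributions, the computation is a per-procedure percentile estimate. A minimal Python sketch follows; the KAP values are invented for illustration and are not data from the survey.

    import numpy as np

    # Illustrative KAP readings (Gy.cm^2) for one procedure at one department.
    kap_cerebral_angio = np.array([45.0, 60.0, 72.0, 88.0, 95.0, 110.0, 130.0])

    # RL = 3rd quartile (75th percentile) of the dose distribution.
    rl = np.percentile(kap_cerebral_angio, 75)
    print(f"proposed RL (KAP, cerebral angiography): {rl:.0f} Gy.cm^2")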
Renormalization group contraction of tensor networks in three dimensions
NASA Astrophysics Data System (ADS)
García-Sáez, Artur; Latorre, José I.
2013-02-01
We present a new strategy for contracting tensor networks in arbitrary geometries. This method is designed to follow as strictly as possible the renormalization group philosophy, by first contracting tensors in an exact way and, then, performing a controlled truncation of the resulting tensor. We benchmark this approximation procedure in two dimensions against an exact contraction. We then apply the same idea to a three-dimensional quantum system. The underlying rationale for emphasizing the exact coarse-graining renormalization group step prior to truncation is related to the monogamy of entanglement.
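The "contract exactly, then truncate in a controlled way" step can be sketched numerically with two random rank-3 tensors and an SVD truncation. The index convention and bond dimension chi below are illustrative assumptions, not the authors' actual three-dimensional scheme.

    import numpy as np

    chi = 4
    A = np.random.rand(chi, chi, chi)
    B = np.random.rand(chi, chi, chi)

    # Exact contraction over the shared bond 'a'; remaining indices are kept.
    T = np.einsum('iak,ajl->ijkl', A, B)        # shape (chi, chi, chi, chi)
    M = T.reshape(chi * chi, chi * chi)         # matricize for the truncation

    # Controlled truncation: keep only the chi dominant singular directions.
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    coarse = (U[:, :chi] * s[:chi]).reshape(chi, chi, chi)  # coarse-grained tensor
    print("relative truncation error:", np.linalg.norm(s[chi:]) / np.linalg.norm(s))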
Cost analysis helps evaluate contract profitability.
Sides, R W
2000-02-01
A cost-accounting analysis can help group practices assess their costs of doing business and determine the profitability of managed care contracts. Group practices also can use cost accounting to develop budgets and financial benchmarks. To begin a cost analysis, group practices need to determine their revenue and cost centers. Then they can allocate their costs to each center, using an appropriate allocation basis. The next step is to calculate costs per procedure. The results can be used to evaluate operational cost efficiency as well as help negotiate managed care contracts.
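The allocation arithmetic described above fits in a few lines. The cost centers, volumes, and the volume-based allocation basis below are invented for illustration; a practice would substitute its own basis (square footage, staff time, etc.).

    # Per revenue center: direct costs and annual procedure volumes (illustrative).
    direct_costs = {"imaging": 120_000.0, "lab": 80_000.0}
    volumes = {"imaging": 1_500, "lab": 4_000}
    overhead = 50_000.0                      # shared costs to be allocated

    total_volume = sum(volumes.values())
    for center, cost in direct_costs.items():
        # Allocation basis here: share of total procedure volume.
        allocated = cost + overhead * volumes[center] / total_volume
        print(center, "cost per procedure:", round(allocated / volumes[center], 2))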
[Controlling instruments in radiology].
Maurer, M
2013-10-01
Due to the rising costs and competitive pressures radiological clinics and practices are now facing, controlling instruments are gaining importance in the optimization of structures and processes of the various diagnostic examinations and interventional procedures. It will be shown how the use of selected controlling instruments can secure and improve the performance of radiological facilities. A definition of the concept of controlling will be provided. It will be shown which controlling instruments can be applied in radiological departments and practices. As an example, two of the controlling instruments, material cost analysis and benchmarking, will be illustrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Tsao, C.L.
1996-06-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. This report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. It also updates benchmark values where appropriate, adds new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
Incremental cost effectiveness evaluation in clinical research.
Krummenauer, Frank; Landwehr, I
2005-01-28
The health economic evaluation of therapeutic and diagnostic strategies is of increasing importance in clinical research, so clinical trialists increasingly have to address health economic aspects as well. However, whereas trialists are quite familiar with classical effect measures in clinical trials, the corresponding parameters in the health economic evaluation of therapeutic and diagnostic procedures are still far less familiar. The concepts of incremental cost effectiveness ratios (ICERs) and incremental net health benefit (INHB) will be illustrated and contrasted along the cost effectiveness evaluation of cataract surgery with monofocal and multifocal intraocular lenses. ICERs relate the costs of a treatment to its clinical benefit in terms of a ratio expression (indexed as Euro per clinical benefit unit). ICERs can therefore be directly compared to a pre-specified willingness-to-pay (WTP) benchmark, which represents the maximum costs health insurers would invest to achieve one clinical benefit unit. INHBs estimate a treatment's net clinical benefit after accounting for its cost increase versus an established therapeutic standard. Resource allocation rules can be formulated by means of both effect measures. Both the ICER and the INHB approach enable the definition of directional resource allocation rules. The allocation decisions arising from these rules are identical, as long as the willingness-to-pay benchmark is fixed in advance. Therefore both strategies crucially call for a priori determination of both the underlying clinical benefit endpoint (such as gain in vision lines after cataract surgery or gain in quality-adjusted life years) and the corresponding willingness-to-pay benchmark. The use of incremental cost effectiveness and net health benefit estimates provides a rationale for health economic allocation discussions and funding decisions. It implies the same requirements on trial protocols as already established for clinical trials, that is, the a priori definition of primary hypotheses (formulated as an allocation rule involving a pre-specified willingness-to-pay benchmark) and of the primary clinical benefit endpoint (as a rationale for effectiveness evaluation).
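A short worked example, using invented cost and benefit figures rather than the cataract-surgery data, shows how the ICER and INHB rules produce the same allocation decision once the WTP benchmark is fixed in advance.

    # Illustrative figures only (Euro, clinical benefit units).
    cost_std, effect_std = 1_000.0, 0.70   # established therapeutic standard
    cost_new, effect_new = 1_600.0, 0.82   # new strategy
    wtp = 2_500.0                          # pre-specified willingness-to-pay

    icer = (cost_new - cost_std) / (effect_new - effect_std)
    inhb = (effect_new - effect_std) - (cost_new - cost_std) / wtp

    print(f"ICER: {icer:.0f} Euro per benefit unit")  # 5000 > WTP: do not adopt
    print(f"INHB: {inhb:.3f} benefit units")          # -0.120 < 0: same decision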
Benchmarking in emergency health systems.
Kennedy, Marcus P; Allen, Jacqueline; Allen, Greg
2002-12-01
This paper discusses the role of benchmarking as a component of quality management. It describes the historical background of benchmarking, its competitive origin and the requirement in today's health environment for a more collaborative approach. The classical 'functional and generic' types of benchmarking are discussed with a suggestion to adopt a different terminology that describes the purpose and practicalities of benchmarking. Benchmarking is not without risks. The consequence of inappropriate focus and the need for a balanced overview of process is explored. The competition that is intrinsic to benchmarking is questioned and the negative impact it may have on improvement strategies in poorly performing organizations is recognized. The difficulty in achieving cross-organizational validity in benchmarking is emphasized, as is the need to scrutinize benchmarking measures. The cost effectiveness of benchmarking projects is questioned and the concept of 'best value, best practice' in an environment of fixed resources is examined.
NASA Technical Reports Server (NTRS)
Bailey, David (Editor); Barton, John (Editor); Lasinski, Thomas (Editor); Simon, Horst (Editor)
1993-01-01
A new set of benchmarks was developed for the performance evaluation of highly parallel supercomputers. These benchmarks consist of a set of kernels, the 'Parallel Kernels,' and a simulated application benchmark. Together they mimic the computation and data movement characteristics of large scale computational fluid dynamics (CFD) applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification - all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, B.C.J.; Sha, W.T.; Doria, M.L.
1980-11-01
The governing equations, i.e., conservation equations for mass, momentum, and energy, are solved as a boundary-value problem in space and an initial-value problem in time. The BODYFIT-1FE code uses the technique of boundary-fitted coordinate systems, where all the physical boundaries are transformed to be coincident with constant coordinate lines in the transformed space. By using this technique, one can prescribe boundary conditions accurately without interpolation. The transformed governing equations in terms of the boundary-fitted coordinates are then solved by using an implicit cell-by-cell procedure with a choice of either central or upwind convective derivatives. It is a true benchmark rod-bundle code without invoking any assumptions in the case of laminar flow. However, for turbulent flow, some empiricism must be employed due to the closure problem of turbulence modeling. The detailed velocity and temperature distributions calculated from the code can be used to benchmark and calibrate empirical coefficients employed in subchannel codes and porous-medium analyses.
An Approach for Assessing Delamination Propagation Capabilities in Commercial Finite Element Codes
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2007-01-01
An approach for assessing the delamination propagation capabilities in commercial finite element codes is presented and demonstrated for one code. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. Good agreement between the load-displacement relationship obtained from the propagation analysis results and the benchmark results could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as may be expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Cotton, Stephen J.; Miller, William H.
2016-10-14
Previous work has shown how a symmetrical quasi-classical (SQC) windowing procedure can be used to quantize the initial and final electronic degrees of freedom in the Meyer-Miller (MM) classical vibronic (i.e., nuclear + electronic) Hamiltonian, and that the approach provides a very good description of electronically non-adiabatic processes within a standard classical molecular dynamics framework for a number of benchmark problems. This study explores application of the SQC/MM approach to the case of very weak non-adiabatic coupling between the electronic states, showing (as anticipated) how the standard SQC/MM approach used to date fails in this limit, and then devises a new SQC windowing scheme to deal with it. Finally, application of this new SQC model to a variety of realistic benchmark systems shows that the new model not only treats the weak coupling case extremely well, but it is also seen to describe the "normal" regime (of electronic transition probabilities ≳ 0.1) even more accurately than the previous "standard" model.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2008-01-01
An approach for assessing the delamination propagation simulation capabilities in commercial finite element codes is presented and demonstrated. For this investigation, the Double Cantilever Beam (DCB) specimen and the Single Leg Bending (SLB) specimen were chosen for full three-dimensional finite element simulations. First, benchmark results were created for both specimens. Second, starting from an initially straight front, the delamination was allowed to propagate. The load-displacement relationship and the total strain energy obtained from the propagation analysis results and the benchmark results were compared and good agreements could be achieved by selecting the appropriate input parameters. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Qualitatively, the delamination front computed for the DCB specimen did not take the shape of a curved front as expected. However, the analysis of the SLB specimen yielded a curved front as was expected from the distribution of the energy release rate and the failure index across the width of the specimen. Overall, the results are encouraging but further assessment on a structural level is required.
Pricing American Asian options with higher moments in the underlying distribution
NASA Astrophysics Data System (ADS)
Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng
2009-01-01
We develop a modified Edgeworth binomial model with higher moment consideration for pricing American Asian options. With a lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] as the number of time steps increases. If the underlying distribution displays negative skewness and leptokurtosis, as often observed for stock index returns, our estimates can work better than those in Chalasani et al. and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31]. The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.
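One way an Edgeworth expansion can tilt binomial node probabilities toward a target skewness and excess kurtosis is sketched below, in the spirit of the modified Edgeworth binomial model; this is not the authors' exact algorithm, and the moment values are illustrative.

    import numpy as np
    from math import comb

    def edgeworth_weights(n, skew=-0.5, ex_kurt=1.0):
        j = np.arange(n + 1)
        p = np.array([comb(n, k) * 0.5**n for k in j])      # symmetric binomial
        z = (j - n / 2) / np.sqrt(n / 4)                    # standardized nodes
        he3 = z**3 - 3 * z                                  # probabilists' Hermite
        he4 = z**4 - 6 * z**2 + 3
        w = p * (1 + skew / 6 * he3 + ex_kurt / 24 * he4)   # Edgeworth tilt
        w = np.clip(w, 0.0, None)                           # guard against negatives
        return w / w.sum()                                  # renormalize

    w = edgeworth_weights(50)
    z = (np.arange(51) - 25) / np.sqrt(12.5)
    print("standardized mean of adjusted weights:", z @ w)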
Benchmarking and Performance Measurement.
ERIC Educational Resources Information Center
Town, J. Stephen
This paper defines benchmarking and its relationship to quality management, describes a project which applied the technique in a library context, and explores the relationship between performance measurement and benchmarking. Numerous benchmarking methods contain similar elements: deciding what to benchmark; identifying partners; gathering…
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paulson, Patrick R.; Purohit, Sumit; Rodriguez, Luke R.
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation
NASA Technical Reports Server (NTRS)
Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.
2012-01-01
Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
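The proper-orthogonal-decomposition step can be illustrated with a minimal SVD sketch; random low-rank snapshot data stand in for the actual structural responses, and the smooth orthogonal decomposition variants are not reproduced here.

    import numpy as np

    # Toy snapshot matrix: 5 dominant spatial patterns plus small noise
    # (200 time steps x 40 physical degrees of freedom).
    rng = np.random.default_rng(0)
    snapshots = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 40))
    snapshots += 0.01 * rng.standard_normal((200, 40))

    # POD modes are the right singular vectors of the centered snapshot matrix.
    U, s, Vt = np.linalg.svd(snapshots - snapshots.mean(axis=0), full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    n_modes = int(np.searchsorted(energy, 0.99) + 1)   # modes capturing 99% energy
    basis = Vt[:n_modes].T                             # reduced-order modal basis
    print(f"retained {n_modes} of {len(s)} candidate modes")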
Wang, Huilin; Wang, Mingjun; Tan, Hao; Li, Yuan; Zhang, Ziding; Song, Jiangning
2014-01-01
X-ray crystallography is the primary approach to solve the three-dimensional structure of a protein. However, a major bottleneck of this method is the failure of multi-step experimental procedures to yield diffraction-quality crystals, including sequence cloning, protein material production, purification, crystallization and ultimately, structural determination. Accordingly, prediction of the propensity of a protein to successfully undergo these experimental procedures based on the protein sequence may help narrow down laborious experimental efforts and facilitate target selection. A number of bioinformatics methods based on protein sequence information have been developed for this purpose. However, our knowledge on the important determinants of propensity for a protein sequence to produce high diffraction-quality crystals remains largely incomplete. In practice, most of the existing methods display poorer performance when evaluated on larger and updated datasets. To address this problem, we constructed an up-to-date dataset as the benchmark, and subsequently developed a new approach termed 'PredPPCrys' using the support vector machine (SVM). Using a comprehensive set of multifaceted sequence-derived features in combination with a novel multi-step feature selection strategy, we identified and characterized the relative importance and contribution of each feature type to the prediction performance of five individual experimental steps required for successful crystallization. The resulting optimal candidate features were used as inputs to build the first-level SVM predictor (PredPPCrys I). Next, prediction outputs of PredPPCrys I were used as the input to build second-level SVM classifiers (PredPPCrys II), which led to significantly enhanced prediction performance. Benchmarking experiments indicated that our PredPPCrys method outperforms most existing procedures on both up-to-date and previous datasets. In addition, the predicted crystallization targets of currently non-crystallizable proteins were provided as compendium data, which are anticipated to facilitate target selection and design for the worldwide structural genomics consortium. PredPPCrys is freely available at http://www.structbioinfor.org/PredPPCrys.
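The two-level arrangement is essentially a stacked classifier: level-one probability outputs become the features of the level-two model. The sketch below uses synthetic features in place of the sequence-derived feature sets, and a single first-level SVM where PredPPCrys uses several step-specific, cross-validated predictors.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for sequence-derived features and crystallization labels.
    X, y = make_classification(n_samples=400, n_features=30, random_state=0)
    X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)

    level1 = SVC(probability=True, random_state=0).fit(X1, y1)    # first-level SVM
    meta = level1.predict_proba(X2)                               # its outputs...
    level2 = SVC(probability=True, random_state=0).fit(meta, y2)  # ...feed level two

    print("level-two training accuracy:", level2.score(meta, y2))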
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W., II
1993-01-01
One of the initial stages in ecological risk assessment of hazardous waste sites is the screening of contaminants to determine which, if any, of them are worthy of further consideration; this process is termed contaminant screening. Screening is performed by comparing concentrations in ambient media to benchmark concentrations that are either indicative of a high likelihood of significant effects (upper screening benchmarks) or of a very low likelihood of significant effects (lower screening benchmarks). Exceedance of an upper screening benchmark indicates that the chemical in question is clearly of concern and remedial actions are likely to be needed. Exceedance of a lower screening benchmark indicates that a contaminant is of concern unless other information indicates that the data are unreliable or the comparison is inappropriate. Chemicals with concentrations below the lower benchmark are not of concern if the ambient data are judged to be adequate. This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids, the lowest EC20 for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate the benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility. This report supersedes a prior aquatic benchmarks report (Suter and Mabrey 1994). It adds two new types of benchmarks. It also updates the benchmark values where appropriate, adds some new benchmark values, replaces secondary sources with primary sources, and provides more complete documentation of the sources and derivation of all values.
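The screening logic above reduces to a three-way comparison per chemical. The sketch below uses invented benchmark and ambient values, not Oak Ridge data.

    # chemical: (lower screening benchmark, upper screening benchmark), ug/L
    benchmarks = {"cadmium": (0.25, 1.8), "zinc": (59.0, 120.0)}
    ambient = {"cadmium": 2.3, "zinc": 40.0}   # measured concentrations, ug/L

    for chem, conc in ambient.items():
        lower, upper = benchmarks[chem]
        if conc > upper:
            status = "clearly of concern (upper benchmark exceeded)"
        elif conc > lower:
            status = "of concern unless the data are judged unreliable"
        else:
            status = "not of concern if the ambient data are adequate"
        print(f"{chem}: {conc} ug/L -> {status}")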
The KMAT: Benchmarking Knowledge Management.
ERIC Educational Resources Information Center
de Jager, Martha
Provides an overview of knowledge management and benchmarking, including the benefits and methods of benchmarking (e.g., competitive, cooperative, collaborative, and internal benchmarking). Arthur Andersen's KMAT (Knowledge Management Assessment Tool) is described. The KMAT is a collaborative benchmarking tool, designed to help organizations make…
NASA Technical Reports Server (NTRS)
Bailey, D. H.; Barszcz, E.; Barton, J. T.; Carter, R. L.; Lasinski, T. A.; Browning, D. S.; Dagum, L.; Fatoohi, R. A.; Frederickson, P. O.; Schreiber, R. S.
1991-01-01
A new set of benchmarks has been developed for the performance evaluation of highly parallel supercomputers in the framework of the NASA Ames Numerical Aerodynamic Simulation (NAS) Program. These consist of five 'parallel kernel' benchmarks and three 'simulated application' benchmarks. Together they mimic the computation and data movement characteristics of large-scale computational fluid dynamics applications. The principal distinguishing feature of these benchmarks is their 'pencil and paper' specification-all details of these benchmarks are specified only algorithmically. In this way many of the difficulties associated with conventional benchmarking approaches on highly parallel systems are avoided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The Analysis of Search Results for the Clarification and Identification of Technology Emergence (AR-CITE) computer code examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercialization; along the way, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, world patents, news archives, and on-line mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the subject domain to be clarified and identified.
A Quantitative Analysis and Natural History of B. F. Skinner's Coauthoring Practices
McKerchar, Todd L; Morris, Edward K; Smith, Nathaniel G
2011-01-01
This paper describes and analyzes B. F. Skinner's coauthoring practices. After identifying his 35 coauthored publications and 27 coauthors, we analyze his coauthored works by their form (e.g., journal articles) and kind (e.g., empirical); identify the journals in which he published and their type (e.g., data-type); describe his overall and local rates of publishing with his coauthors (e.g., noting breaks in the latter); and compare his coauthoring practices with his single-authoring practices (e.g., form, kind, journal type) and with those in the scientometric literature (e.g., majority of coauthored publications are empirical). We address these findings in the context of describing the natural history of Skinner's coauthoring practices. Finally, we describe some limitations in our methods and offer suggestions for future research. PMID:22532732
Mousavi Jarahi, Alireza; Keihani, Porya; Vaziri, Esmaiel; Feizabadi, Mansoureh
2018-05-26
Today, research is seen as an investment to promote innovation and maintain sustainable social-economic development in all societies. The growth of scientific products and the expansion of knowledge in different scientific fields have entailed more attention to assessments and the impact evaluation of both the outcome and the process of research in all fields. In light of this need, policymakers in the medical field have paid more attention to evaluating the outcomes of research in terms of its impact on society using many different indicators. In this short communication, the performance of scholarly published scientific products is discussed, the indicators that measure such impacts are evaluated, and a recommendation is given to the APJCP's editorial board on how to align its activities toward achieving better impact and scientometric measures for the journal. Creative Commons Attribution License
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 4 2013-10-01 2013-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
42 CFR 440.335 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 440... and Benchmark-Equivalent Coverage § 440.335 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has an aggregate...
A benchmark testing ground for integrating homology modeling and protein docking.
Bohnuud, Tanggis; Luo, Lingqi; Wodak, Shoshana J; Bonvin, Alexandre M J J; Weng, Zhiping; Vajda, Sandor; Schueler-Furman, Ora; Kozakov, Dima
2017-01-01
Protein docking procedures carry out the task of predicting the structure of a protein-protein complex starting from the known structures of the individual protein components. More often than not, however, the structure of one or both components is not known, but can be derived by homology modeling on the basis of known structures of related proteins deposited in the Protein Data Bank (PDB). Thus, the problem is to develop methods that optimally integrate homology modeling and docking with the goal of predicting the structure of a complex directly from the amino acid sequences of its component proteins. One possibility is to use the best available homology modeling and docking methods. However, the models built for the individual subunits often differ to a significant degree from the bound conformation in the complex, often much more so than the differences observed between free and bound structures of the same protein, and therefore additional conformational adjustments, both at the backbone and side chain levels, need to be modeled to achieve an accurate docking prediction. In particular, even homology models of overall good accuracy frequently include localized errors that unfavorably impact docking results. The predicted reliability of the different regions in the model can also serve as a useful input for the docking calculations. Here we present a benchmark dataset that should help to explore and solve combined modeling and docking problems. This dataset comprises a subset of the experimentally solved 'target' complexes from the widely used Docking Benchmark from the Weng Lab (excluding antibody-antigen complexes). This subset is extended to include the structures from the PDB related to those of the individual components of each complex, which hence represent potential templates for investigating and benchmarking integrated homology modeling and docking approaches. Template sets can be dynamically customized by specifying ranges in sequence similarity and in PDB release dates, or using other filtering options, such as excluding sets of specific structures from the template list. Multiple sequence alignments, as well as structural alignments of the templates to their corresponding subunits in the target, are also provided. The resource is accessible online or can be downloaded at http://cluspro.org/benchmark, and is updated on a weekly basis in synchrony with new PDB releases. Proteins 2016; 85:10-16. © 2016 Wiley Periodicals, Inc.
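The dynamic customization described above is, in effect, a filter over template metadata. The sketch below assumes hypothetical column names (pdb_id, seq_id, release_date) and made-up rows; it does not reflect the actual schema of the cluspro.org benchmark files.

    import pandas as pd

    templates = pd.DataFrame({
        "pdb_id": ["1ABC", "2DEF", "3GHI"],                   # hypothetical entries
        "seq_id": [95.0, 42.0, 18.0],                         # % identity to target
        "release_date": pd.to_datetime(["1998-03-01", "2005-07-12", "2014-01-20"]),
    })

    # Keep remote homologs only, released before a cutoff date, and drop
    # specific structures from the template list.
    excluded = {"3GHI"}
    selected = templates[
        templates.seq_id.between(20.0, 60.0)
        & (templates.release_date < "2010-01-01")
        & ~templates.pdb_id.isin(excluded)
    ]
    print(selected)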
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.
The NAS Parallel Benchmarks (NPB) are a suite of parallel computer performance benchmarks. They were originally developed at the NASA Ames Research Center in 1991 to assess high-end parallel supercomputers. Although they are no longer used as widely as they once were for comparing high-end system performance, they continue to be studied and analyzed a great deal in the high-performance computing community. The acronym 'NAS' originally stood for the Numerical Aeronautical Simulation Program at NASA Ames. The name of this organization was subsequently changed to the Numerical Aerospace Simulation Program, and more recently to the NASA Advanced Supercomputing Center, although the acronym remains 'NAS.' The developers of the original NPB suite were David H. Bailey, Eric Barszcz, John Barton, David Browning, Russell Carter, Leo Dagum, Rod Fatoohi, Samuel Fineberg, Paul Frederickson, Thomas Lasinski, Rob Schreiber, Horst Simon, V. Venkatakrishnan and Sisira Weeratunga. The original NAS Parallel Benchmarks consisted of eight individual benchmark problems, each of which focused on some aspect of scientific computing. The principal focus was in computational aerophysics, although most of these benchmarks have much broader relevance, since in a much larger sense they are typical of many real-world scientific computing applications. The NPB suite grew out of the need for a more rational procedure to select new supercomputers for acquisition by NASA. The emergence of commercially available highly parallel computer systems in the late 1980s offered an attractive alternative to parallel vector supercomputers that had been the mainstay of high-end scientific computing. However, the introduction of highly parallel systems was accompanied by a regrettable level of hype, not only on the part of the commercial vendors but even, in some cases, by scientists using the systems. As a result, it was difficult to discern whether the new systems offered any fundamental performance advantage over vector supercomputers, and, if so, which of the parallel offerings would be most useful in real-world scientific computation. In part to draw attention to some of the performance reporting abuses prevalent at the time, the present author wrote a humorous essay 'Twelve Ways to Fool the Masses,' which described in a light-hearted way a number of the questionable ways in which both vendor marketing people and scientists were inflating and distorting their performance results. All of this underscored the need for an objective and scientifically defensible measure to compare performance on these systems.
Palter, Vanessa N; Graafland, Maurits; Schijven, Marlies P; Grantcharov, Teodor P
2012-03-01
Although task training on virtual reality (VR) simulators has been shown to transfer to the operating room, to date no VR curricula have been described for advanced laparoscopic procedures. The purpose of this study was to develop a proficiency-based VR technical skills curriculum for laparoscopic colorectal surgery. The Delphi method was used to determine expert consensus on which VR tasks (on the LapSim simulator) are relevant to teaching laparoscopic colorectal surgery. To accomplish this task, 19 international experts rated all the LapSim tasks on a Likert scale (1-5) with respect to the degree to which they thought that a particular task should be included in a final technical skills curriculum. Results of the survey were sent back to participants until consensus (Cronbach's α >0.8) was reached. A cross-sectional design was utilized to define the benchmark scores for the identified tasks. Nine expert surgeons completed all identified tasks on the "easy," "medium," and "hard" settings of the simulator. In the first round of the survey, Cronbach's α was 0.715; after the second round, consensus was reached at 0.865. Consensus was reached for 7 basic tasks and 1 advanced suturing task. Median expert time and economy of movement scores were defined as benchmarks for all curricular tasks. This study used Delphi consensus methodology to create a curriculum for an advanced laparoscopic procedure that is reflective of current clinical practice on an international level and conforms to current educational standards of proficiency-based training. Copyright © 2012 Mosby, Inc. All rights reserved.
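Since consensus in the Delphi rounds above is declared at Cronbach's α > 0.8, a small function for the statistic may be useful; the expert ratings in the example are made up.

    import numpy as np

    def cronbach_alpha(ratings):
        """Cronbach's alpha for a matrix of ratings (rows = raters, cols = items)."""
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]                           # number of rated tasks
        item_vars = ratings.var(axis=0, ddof=1).sum()  # sum of per-task variances
        total_var = ratings.sum(axis=1).var(ddof=1)    # variance of rater totals
        return k / (k - 1) * (1 - item_vars / total_var)

    ratings = [[4, 5, 4, 3], [5, 5, 4, 4], [4, 4, 3, 3], [5, 5, 5, 4]]
    print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")  # > 0.8: consensus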
Smith, David; Anderson, David; Degryse, Anne-Dominique; Bol, Carla; Criado, Ana; Ferrara, Alessia; Franco, Nuno Henrique; Gyertyan, Istvan; Orellana, Jose M; Ostergaard, Grete; Varga, Orsolya; Voipio, Hanna-Marja
2018-02-01
Directive 2010/63/EU introduced requirements for the classification of the severity of procedures to be applied during the project authorisation process to use animals in scientific procedures and also to report actual severity experienced by each animal used in such procedures. These requirements offer opportunities during the design, conduct and reporting of procedures to consider the adverse effects of procedures and how these can be reduced to minimize the welfare consequences for the animals. Better recording and reporting of adverse effects should also help in highlighting priorities for refinement of future similar procedures and benchmarking good practice. Reporting of actual severity should help inform the public of the relative severity of different areas of scientific research and, over time, may show trends regarding refinement. Consistency of assignment of severity categories across Member States is a key requirement, particularly if re-use is considered, or the safeguard clause is to be invoked. The examples of severity classification given in Annex VIII are limited in number, and have little descriptive power to aid assignment. Additionally, the examples given often relate to the procedure and do not attempt to assess the outcome, such as adverse effects that may occur. The aim of this report is to deliver guidance on the assignment of severity, both prospectively and at the end of a procedure. A number of animal models, in current use, have been used to illustrate the severity assessment process from inception of the project, through monitoring during the course of the procedure to the final assessment of actual severity at the end of the procedure (Appendix 1).
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 4 2012-10-01 2012-10-01 false Benchmark health benefits coverage. 440.330 Section 440.330 Public Health CENTERS FOR MEDICARE & MEDICAID SERVICES, DEPARTMENT OF HEALTH AND HUMAN... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...
A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation
NASA Technical Reports Server (NTRS)
Majumdar, Alok
1998-01-01
An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.
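The simultaneous Newton-Raphson component of the hybrid scheme can be shown on a toy two-equation residual; the system below stands in for, and is far simpler than, the paper's mass, momentum and entropy conservation equations.

    import numpy as np

    def residual(x):
        # Toy nonlinear "conservation" residuals in two unknowns.
        return np.array([x[0]**2 + x[1] - 3.0,
                         x[0] + x[1]**2 - 5.0])

    def jacobian(x):
        return np.array([[2.0 * x[0], 1.0],
                         [1.0, 2.0 * x[1]]])

    x = np.array([1.0, 1.0])                         # initial guess
    for _ in range(20):
        dx = np.linalg.solve(jacobian(x), -residual(x))
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:               # converged
            break
    print("solution:", x, "residual:", residual(x))  # converges to (1, 2)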
Talasila, Sreya; Evers-Meltzer, Rachel; Xu, Shuai
2018-06-05
Minimally invasive fat reduction procedures are rapidly growing in popularity. Evaluate online patient reviews to inform practice management. Data from RealSelf.com, a popular online aesthetics platform, were reviewed for all minimally invasive fat reduction procedures. Reviews were also aggregated based on the primary method of action (e.g., laser, radiofrequency, ultrasound, etc.) and compared with liposuction. A chi-square test was used to assess for differences with the Marascuilo procedure for pairwise comparisons. A total of 13 minimally invasive fat reduction procedures were identified encompassing 11,871 total reviews. Liposuction had 4,645 total reviews and a 66% patient satisfaction rate. Minimally invasive fat reduction procedures had 7,170 aggregate reviews and a global patient satisfaction of 58%. Liposuction had statistically significantly higher patient satisfaction than cryolipolysis (55% satisfied, n = 2,707 reviews), laser therapies (61% satisfied, n = 3,565 reviews), and injectables (49% satisfied, n = 319 reviews) (p < .05). Injectables and cryolipolysis had statistically significantly lower patient satisfaction than radiofrequency therapies (63% satisfied, n = 314 reviews) and laser therapies. Ultrasound therapies had 275 reviews and a 73% patient satisfaction rate. A large number of patient reviews suggest that minimally invasive fat reduction procedures have high patient satisfaction, although liposuction still had the highest total patient satisfaction score. However, there are significant pitfalls in interpreting patient reviews, as they do not provide important data such as a patient's medical history or physician experience and skill.
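The omnibus comparison reported above can be reproduced approximately from the counts quoted in the abstract (satisfied counts rounded to whole reviews); the Marascuilo pairwise procedure is not repeated here.

    import numpy as np
    from scipy.stats import chi2_contingency

    counts = {                        # (satisfied, unsatisfied), from the abstract
        "liposuction":   (3066, 1579),   # 66% of 4645 reviews
        "cryolipolysis": (1489, 1218),   # 55% of 2707 reviews
        "laser":         (2175, 1390),   # 61% of 3565 reviews
        "injectables":   (156, 163),     # 49% of 319 reviews
    }
    table = np.array(list(counts.values()))
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")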
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suter, G.W. II; Mabrey, J.B.
1994-07-01
This report presents potential screening benchmarks for protection of aquatic life from contaminants in water. Because there is no guidance for screening benchmarks, a set of alternative benchmarks is presented herein. The alternative benchmarks are based on different conceptual approaches to estimating concentrations causing significant effects. For the upper screening benchmark, there are the acute National Ambient Water Quality Criteria (NAWQC) and the Secondary Acute Values (SAV). The SAV concentrations are values estimated with 80% confidence not to exceed the unknown acute NAWQC for those chemicals with no NAWQC. The alternative chronic benchmarks are the chronic NAWQC, the Secondary Chronic Value (SCV), the lowest chronic values for fish and daphnids from chronic toxicity tests, the estimated EC20 for a sensitive species, and the concentration estimated to cause a 20% reduction in the recruit abundance of largemouth bass. It is recommended that ambient chemical concentrations be compared to all of these benchmarks. If NAWQC are exceeded, the chemicals must be contaminants of concern because the NAWQC are applicable or relevant and appropriate requirements (ARARs). If NAWQC are not exceeded, but other benchmarks are, contaminants should be selected on the basis of the number of benchmarks exceeded and the conservatism of the particular benchmark values, as discussed in the text. To the extent that toxicity data are available, this report presents the alternative benchmarks for chemicals that have been detected on the Oak Ridge Reservation. It also presents the data used to calculate benchmarks and the sources of the data. It compares the benchmarks and discusses their relative conservatism and utility.
Raising Quality and Achievement. A College Guide to Benchmarking.
ERIC Educational Resources Information Center
Owen, Jane
This booklet introduces the principles and practices of benchmarking as a way of raising quality and achievement at further education colleges in Britain. Section 1 defines the concept of benchmarking. Section 2 explains what benchmarking is not and the steps that should be taken before benchmarking is initiated. The following aspects and…
Benchmarking in Education: Tech Prep, a Case in Point. IEE Brief Number 8.
ERIC Educational Resources Information Center
Inger, Morton
Benchmarking is a process by which organizations compare their practices, processes, and outcomes to standards of excellence in a systematic way. The benchmarking process entails the following essential steps: determining what to benchmark and establishing internal baseline data; identifying the benchmark; determining how that standard has been…
Benchmarks: The Development of a New Approach to Student Evaluation.
ERIC Educational Resources Information Center
Larter, Sylvia
The Toronto Board of Education Benchmarks are libraries of reference materials that demonstrate student achievement at various levels. Each library contains video benchmarks, print benchmarks, a staff handbook, and summary and introductory documents. This book is about the development and the history of the benchmark program. It has taken over 3…
Benchmarking in Thoracic Surgery. Third Edition.
Freixinet Gilart, Jorge; Varela Simó, Gonzalo; Rodríguez Suárez, Pedro; Embún Flor, Raúl; Rivas de Andrés, Juan José; de la Torre Bravos, Mercedes; Molins López-Rodó, Laureano; Pac Ferrer, Joaquín; Izquierdo Elena, José Miguel; Baschwitz, Benno; López de Castro, Pedro E; Fibla Alfara, Juan José; Hernando Trancho, Florentino; Carvajal Carrasco, Ángel; Canalís Arrayás, Emili; Salvatierra Velázquez, Ángel; Canela Cardona, Mercedes; Torres Lanzas, Juan; Moreno Mata, Nicolás
2016-04-01
Benchmarking entails continuous comparison of efficacy and quality among products and activities, with the primary objective of achieving excellence. To analyze the results of benchmarking performed in 2013 on clinical practices undertaken in 2012 in 17 Spanish thoracic surgery units. Study data were obtained from the basic minimum data set for hospitalization, registered in 2012. Data from hospital discharge reports were submitted by the participating groups, but staff from the corresponding departments did not intervene in data collection. Study cases all involved hospital discharges recorded in the participating sites. Episodes included were respiratory surgery (Major Diagnostic Category 04, Surgery), and those of the thoracic surgery unit. Cases were labelled using codes from the International Classification of Diseases, 9th revision, Clinical Modification. The refined diagnosis-related groups classification was used to evaluate differences in severity and complexity of cases. General parameters (number of cases, mean stay, complications, readmissions, mortality, and activity) varied widely among the participating groups. Specific interventions (lobectomy, pneumonectomy, atypical resections, and treatment of pneumothorax) also varied widely. As in previous editions, practices among participating groups varied considerably. Some areas for improvement emerge: admission processes need to be standardized to avoid urgent admissions and to improve pre-operative care; hospital discharges should be streamlined and discharge reports improved by including all procedures and complications. Some units have parameters which deviate excessively from the norm, and these sites need to review their processes in depth. Coding of diagnoses and comorbidities is another area where improvement is needed. Copyright © 2015 SEPAR. Published by Elsevier España. All rights reserved.
Sacks, David; Black, Carl M; Cognard, Christophe; Connors, John J; Frei, Donald; Gupta, Rishi; Jovin, Tudor G; Kluck, Bryan; Meyers, Philip M; Murphy, Kieran J; Ramee, Stephen; Rüfenacht, Daniel A; Bernadette Stallmeyer, M J; Vorwerk, Dierk
2013-02-01
In this international multispecialty document, quality benchmarks for processes of care and clinical outcomes are defined. It is intended that these benchmarks be used in a quality assurance program to assess and improve processes and outcomes in acute stroke revascularization. Members of the writing group were appointed by the American Society of Neuroradiology, Canadian Interventional Radiology Association, Cardiovascular and Interventional Radiological Society of Europe, Society of Cardiac Angiography and Interventions, Society of Interventional Radiology, Society of NeuroInterventional Surgery, European Society of Minimally Invasive Neurological Therapy, and Society of Vascular and Interventional Neurology. The writing group reviewed the relevant literature from 1986 through February 2012 to create an evidence table summarizing processes and outcomes of care. Performance metrics and thresholds were then created by consensus. The guideline was approved by the sponsoring societies. It is intended that this guideline be fully updated in 3 years. In this international multispecialty document, quality benchmarks for processes of care and clinical outcomes are defined. These include process measures of time to imaging, arterial puncture, and revascularization and measures of clinical outcome up to 90 days. Quality improvement guidelines are provided for endovascular acute ischemic stroke revascularization procedures. Copyright © 2013 SIR. Published by Elsevier Inc. All rights reserved.
Internationalization of pediatric sleep apnea research.
Milkov, Mario
2012-02-01
The socio-medical importance of obstructive sleep apnea in infancy and childhood has recently been increasing worldwide. The present investigation aims at analyzing the dynamic internationalization of science in this narrow field as reflected in three databases, and at outlining the most significant scientists, institutions and primary information sources. A scientometric study of data from a retrospective problem-oriented search on pediatric sleep apnea in three databases (Web of Science, MEDLINE and Scopus) was carried out. A set of parameters of publication output and citations was followed up. Several scientometric distributions were created and enabled the identification of some essential peculiarities of international scientific communication. There was a steady increase in world publication output. In 1972-2010, 4192 publications from 874 journals were abstracted in MEDLINE. In 1985-2010, more than 8100 authors from 64 countries published 3213 papers in 626 journals and 256 conference proceedings abstracted in Web of Science. In 1973-2010, 152 authors published 687 papers in 144 journals in 19 languages abstracted in Scopus. USA authors dominated, followed by those from Australia and Canada. Sleep, Int. J. Pediatr. Otorhinolaryngol., Pediatr. Pulmonol. and Pediatrics were among the 'core' journals in Web of Science and MEDLINE, while Arch. Dis. Childh. and Eur. Respir. J. dominated in Scopus. Nine journals currently published in 5 countries contain the terms 'sleep' or 'sleeping' in their titles. David Gozal, Carole L. Marcus and Christian Guilleminault had the most publications and citations. W.H. Dietz's paper published in Pediatrics in 1998 received 764 citations. Eighty-four authors from 11 countries participated in 16 scientific events held in 12 countries that were devoted specifically to sleep research. Their 13 articles were cited 170 times in Web of Science. Authors from the University of Louisville, Stanford University, and University of Pennsylvania published the most papers on pediatric sleep apnea abstracted in these databases. The newly created database with the researchers' names, addresses and publications could be used by scientists from smaller countries for further improvement of their international collaboration. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Scientometric trends and knowledge maps of global health systems research.
Yao, Qiang; Chen, Kai; Yao, Lan; Lyu, Peng-hui; Yang, Tian-an; Luo, Fei; Chen, Shan-quan; He, Lu-yang; Liu, Zhi-yong
2014-06-05
In the last few decades, health systems research (HSR) has garnered much attention, with a rapid increase in the related literature. This study aims to review and evaluate the global progress in HSR and assess current quantitative trends. Based on data from the Web of Science database, scientometric methods and knowledge visualization techniques were applied to evaluate global scientific production and map trends in HSR from 1900 to 2012. HSR output has increased rapidly over the past 20 years. Currently, there are 28,787 research articles published in 3,674 journals that are listed in 140 Web of Science subject categories. The research in this field has mainly focused on public, environmental and occupational health (6,178; 21.46%), health care sciences and services (5,840; 20.29%), and general and internal medicine (3,783; 13.14%). The top 10 journals published 2,969 (10.31%) articles and received 5,229 local citations and 40,271 global citations. The top 20 authors together contributed 628 papers, a 2.18% share of cumulative worldwide publications. The most productive author was McKee, from the London School of Hygiene & Tropical Medicine, with 48 articles. In addition, the USA and American institutions ranked first in HSR productivity, with high citation counts, followed by the UK and Canada. HSR is an interdisciplinary area. Organisation for Economic Co-operation and Development (OECD) countries are the leading nations in HSR. Meanwhile, American and Canadian institutions and the World Health Organization play a dominant role in the production, collaboration, and citation of high-quality articles. Moreover, health policy and analysis research, health systems and sub-systems research, healthcare and services research, health, epidemiology and economics of communicable and non-communicable diseases, primary care research, health economics and health costs, and hospital pharmacy have been identified as the mainstream topics in HSR. These findings provide evidence of the current status and trends of HSR worldwide, as well as clues to the impact of this popular topic, thus helping researchers and policy makers understand the panorama of HSR and predict the dynamic directions of research.
HS06 Benchmark for an ARM Server
NASA Astrophysics Data System (ADS)
Kluth, Stefan
2014-06-01
We benchmarked an ARM Cortex-A9 based server system with a four-core CPU running at 1.1 GHz. The system ran Ubuntu 12.04 as the operating system, and the HEPSPEC 2006 (HS06) benchmarking suite was compiled natively with gcc-4.4 on the system. The benchmark was run for various settings of the relevant gcc compiler options. We did not find significant influence of the compiler options on the benchmark result. The final HS06 benchmark result is 10.4.
PMLB: a large benchmark suite for machine learning evaluation and comparison.
Olson, Randal S; La Cava, William; Orzechowski, Patryk; Urbanowicz, Ryan J; Moore, Jason H
2017-01-01
The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
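PMLB is distributed as a Python package; the sketch below shows the typical access pattern, assuming the pmlb package and scikit-learn are installed (the dataset name and baseline model are illustrative choices, not prescribed by the paper):

from pmlb import fetch_data, classification_dataset_names
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Pull one benchmark dataset as a feature matrix X and label vector y.
X, y = fetch_data('mushroom', return_X_y=True)

# Score a baseline learner on it; looping over classification_dataset_names
# reproduces the kind of suite-wide comparison described in the abstract.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())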
The General Concept of Benchmarking and Its Application in Higher Education in Europe
ERIC Educational Resources Information Center
Nazarko, Joanicjusz; Kuzmicz, Katarzyna Anna; Szubzda-Prutis, Elzbieta; Urban, Joanna
2009-01-01
The purposes of this paper are twofold: a presentation of the theoretical basis of benchmarking and a discussion on practical benchmarking applications. Benchmarking is also analyzed as a productivity accelerator. The authors study benchmarking usage in the private and public sectors with due consideration of the specificities of the two areas.…
Time-dependent density functional theory description of total photoabsorption cross sections
NASA Astrophysics Data System (ADS)
Tenorio, Bruno Nunes Cabral; Nascimento, Marco Antonio Chaer; Rocha, Alexandre Braga
2018-02-01
The time-dependent version of density functional theory (TDDFT) has been used to calculate the total photoabsorption cross section of a number of molecules, namely benzene, pyridine, furan, pyrrole, thiophene, phenol, naphthalene, and anthracene. The discrete electronic pseudo-spectra, obtained in an L2 basis set calculation, were used in an analytic continuation procedure to obtain the photoabsorption cross sections. The ammonia molecule was chosen as a model system to compare the results obtained with TDDFT to those obtained with the linear response coupled cluster approach, in order to make a link with our previous work and establish benchmarks.
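The connection between a discrete pseudo-spectrum and a continuous cross section rests on the standard dipole oscillator-strength relation; a minimal sketch in Gaussian units (the analytic continuation step replaces the naive broadening shown here):

\[
\sigma(E) \;=\; \frac{2\pi^{2} e^{2} \hbar}{m_{e} c}\,\frac{df}{dE},
\qquad
\frac{df}{dE} \;\approx\; \sum_{n} f_{n}\, g(E - E_{n}),
\]

where f_n and E_n are the oscillator strengths and excitation energies of the L2 pseudo-spectrum and g is a normalized line-shape profile.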
Transport methods and interactions for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Schimmerling, Walter S.; Khandelwal, Govind S.; Khan, Ferdous S.; Nealy, John E.; Cucinotta, Francis A.; Simonsen, Lisa C.; Shinn, Judy L.; Norbury, John W.
1991-01-01
A review of the program in space radiation protection at the Langley Research Center is given. The relevant Boltzmann equations are given with a discussion of approximation procedures for space applications. The interaction coefficients are related to the solution of the many-body Schroedinger equation with nuclear and electromagnetic forces. Various solution techniques are discussed to obtain relevant interaction cross sections, with extensive comparisons with experiments. Solution techniques for the Boltzmann equations are discussed in detail. Transport computer code validation is discussed through analytical benchmarking, comparison with other codes, comparison with laboratory experiments, and measurements in space. Applications to lunar and Mars missions are discussed.
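In the straight-ahead, continuous-slowing-down approximation commonly used in this kind of space radiation work, the coupled Boltzmann equations reduce to a one-dimensional system; a sketch of the standard form, with notation assumed rather than taken from the paper:

\[
\left[\frac{\partial}{\partial x}
\;-\; \frac{\partial}{\partial E}\,S_{j}(E)
\;+\; \sigma_{j}(E)\right] \phi_{j}(x,E)
\;=\; \sum_{k} \int_{E}^{\infty} \sigma_{jk}(E,E')\,\phi_{k}(x,E')\,dE',
\]

where \phi_j is the flux of particle type j at depth x and energy E, S_j its stopping power, \sigma_j the total macroscopic cross section, and \sigma_{jk}(E,E') the cross section for producing type j at energy E from collisions of type k at energy E'.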
Performance of a Lexical and POS Tagger for Sanskrit
NASA Astrophysics Data System (ADS)
Hellwig, Oliver
Due to the phonetic, morphological, and lexical complexity of Sanskrit, the automatic analysis of this language is a real challenge in the area of natural language processing. The paper describes a series of tests that were performed to assess the accuracy of the tagging program SanskritTagger. To our knowledge, it offers the first reliable benchmark data for evaluating the quality of taggers for Sanskrit using an unrestricted dictionary and texts from different domains. Based on a detailed analysis of the test results, the paper points out possible directions for future improvements of statistical tagging procedures for Sanskrit.
Algorithms for elasto-plastic-creep postbuckling
NASA Technical Reports Server (NTRS)
Padovan, J.; Tovichakchaikul, S.
1984-01-01
This paper considers the development of an improved constrained time-stepping scheme which can efficiently and stably handle the pre- and post-buckling behavior of general structures subject to high-temperature environments. Due to the generality of the scheme, the combined influence of elastic-plastic behavior can be handled in addition to time-dependent creep effects. This includes structural problems exhibiting indefinite tangent properties. To illustrate the capability of the procedure, several benchmark problems employing finite element analyses are presented. These demonstrate the numerical efficiency and stability of the scheme. Additionally, the potential influence of complex creep histories on the buckling characteristics is considered.
Benchmarking reference services: an introduction.
Marshall, J G; Buchanan, H S
1995-01-01
Benchmarking is based on the common sense idea that someone else, either inside or outside of libraries, has found a better way of doing certain things and that your own library's performance can be improved by finding out how others do things and adopting the best practices you find. Benchmarking is one of the tools used for achieving continuous improvement in Total Quality Management (TQM) programs. Although benchmarking can be done on an informal basis, TQM puts considerable emphasis on formal data collection and performance measurement. Used to its full potential, benchmarking can provide a common measuring stick to evaluate process performance. This article introduces the general concept of benchmarking, linking it whenever possible to reference services in health sciences libraries. Data collection instruments that have potential application in benchmarking studies are discussed and the need to develop common measurement tools to facilitate benchmarking is emphasized.
Predicting hydration free energies of amphetamine-type stimulants with a customized molecular model
NASA Astrophysics Data System (ADS)
Li, Jipeng; Fu, Jia; Huang, Xing; Lu, Diannan; Wu, Jianzhong
2016-09-01
Amphetamine-type stimulants (ATS) are a group of stimulant and psychedelic drugs affecting the central nervous system. Physicochemical data for these compounds are essential for understanding the stimulating mechanism, for assessing their environmental impacts, and for developing new drug detection methods. However, experimental data are scarce due to tight regulation of such illicit drugs, and conventional methods to estimate their properties are often unreliable. Here we introduce a tailor-made multiscale procedure for predicting the hydration free energies and the solvation structures of ATS molecules by a combination of first-principles calculations and the classical density functional theory. We demonstrate that the multiscale procedure performs well for a training set with similar molecular characteristics and yields good agreement with a testing set not used in the training. The theoretical predictions serve as a benchmark for the missing experimental data and, importantly, provide microscopic insights into manipulating the hydrophobicity of ATS compounds by chemical modifications.
Analysis of Wake VAS Benefits Using ACES Build 3.2.1: VAMS Type 1 Assessment
NASA Technical Reports Server (NTRS)
Smith, Jeremy C.
2005-01-01
The FAA and NASA are currently engaged in a Wake Turbulence Research Program to revise wake turbulence separation standards, procedures, and criteria to increase airport capacity while maintaining or increasing safety. The research program is divided into three phases: Phase I near term procedural enhancements; Phase II wind dependent Wake Vortex Advisory System (WakeVAS) Concepts of Operations (ConOps); and Phase III farther term ConOps based on wake prediction and sensing. The Phase III Wake VAS ConOps is one element of the Virtual Airspace Modelling and Simulation (VAMS) program blended concepts for enhancing the total system wide capacity of the National Airspace System (NAS). This report contains a VAMS Program Type 1 (stand-alone) assessment of the expected capacity benefits of Wake VAS at the 35 FAA Benchmark Airports and determines the consequent reduction in delay using the Airspace Concepts Evaluation System (ACES) Build 3.2.1 simulator.
An Integrated Method Based on PSO and EDA for the Max-Cut Problem.
Lin, Geng; Guan, Jian
2016-01-01
The max-cut problem is an NP-hard combinatorial optimization problem with many real-world applications. In this paper, we propose an integrated method based on particle swarm optimization and estimation of distribution algorithms (PSO-EDA) for solving the max-cut problem. The integrated algorithm overcomes the shortcomings of particle swarm optimization and the estimation of distribution algorithm. To enhance the performance of PSO-EDA, a fast local search procedure is applied. In addition, a path relinking procedure is developed to intensify the search. To evaluate the performance of PSO-EDA, extensive experiments were carried out on two sets of benchmark instances with 800 to 20,000 vertices from the literature. Computational results and comparisons show that PSO-EDA significantly outperforms the existing PSO-based and EDA-based algorithms for the max-cut problem. Compared with other best-performing algorithms, PSO-EDA is able to find very competitive results in terms of solution quality.
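The max-cut objective and the kind of fast 1-flip local search such methods rely on are compact to state; a minimal sketch under assumed data structures (the PSO-EDA sampling machinery itself is not reproduced here):

import numpy as np

def cut_value(W, x):
    # W: symmetric weight matrix with zero diagonal; x: +/-1 partition vector.
    # Cut weight = sum_{i<j} W[i,j] * (1 - x[i]*x[j]) / 2.
    return 0.25 * (W.sum() - x @ W @ x)

def one_flip_local_search(W, x):
    # Greedily flip single vertices while any flip increases the cut.
    # The gain of flipping vertex i works out to x[i] * (W @ x)[i].
    while True:
        gains = x * (W @ x)
        i = int(np.argmax(gains))
        if gains[i] <= 0:
            return x
        x[i] = -x[i]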
Efficient fractal-based mutation in evolutionary algorithms from iterated function systems
NASA Astrophysics Data System (ADS)
Salcedo-Sanz, S.; Aybar-Ruíz, A.; Camacho-Gómez, C.; Pereira, E.
2018-03-01
In this paper we present a new mutation procedure for Evolutionary Programming (EP) approaches, based on Iterated Function Systems (IFSs). The proposed mutation procedure consists of considering a set of IFSs which are able to generate fractal structures in a two-dimensional phase space, and using them to modify a current individual of the EP algorithm, instead of using random numbers drawn from different probability density functions. We test this new proposal on a set of benchmark functions for continuous optimization problems, comparing the proposed mutation against classical Evolutionary Programming approaches with mutations based on Gaussian, Cauchy and chaotic maps. We also include a discussion of the IFS-based mutation in a real application of Tuned Mass Damper (TMD) location and optimization for vibration cancellation in buildings. In both practical cases, the proposed EP with the IFS-based mutation obtained extremely competitive results compared to alternative classical mutation operators.
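A minimal sketch of the core idea, using a Sierpinski-triangle IFS purely for illustration (the paper's specific IFSs and EP integration are not reproduced; all names here are assumptions):

import numpy as np

rng = np.random.default_rng(0)

# An IFS is a set of contractive affine maps z -> A @ z + b in the plane;
# iterating randomly chosen maps (the "chaos game") samples a fractal set.
MAPS = [
    (0.5 * np.eye(2), np.array([0.0, 0.0])),
    (0.5 * np.eye(2), np.array([0.5, 0.0])),
    (0.5 * np.eye(2), np.array([0.25, 0.5])),
]

def ifs_offset(n_iter=20, scale=1.0):
    # Draw one fractal-distributed 2D point, roughly centred, to serve as
    # a mutation offset instead of a Gaussian or Cauchy perturbation.
    z = rng.random(2)
    for _ in range(n_iter):
        A, b = MAPS[rng.integers(len(MAPS))]
        z = A @ z + b
    return scale * (z - 0.5)

def mutate(parent, scale=1.0):
    # Perturb a randomly chosen pair of coordinates with one IFS sample.
    child = np.array(parent, dtype=float)
    i, j = rng.choice(len(child), size=2, replace=False)
    child[[i, j]] += ifs_offset(scale=scale)
    return child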
A Multi-Start Evolutionary Local Search for the Two-Echelon Location Routing Problem
NASA Astrophysics Data System (ADS)
Nguyen, Viet-Phuong; Prins, Christian; Prodhon, Caroline
This paper presents a new hybrid metaheuristic combining a greedy randomized adaptive search procedure (GRASP) and an evolutionary/iterated local search (ELS/ILS), using a tabu list, to solve the two-echelon location routing problem (LRP-2E). The GRASP uses three constructive heuristics in turn, followed by local search, to generate the initial solutions. From a GRASP solution, an intensification strategy is carried out by dynamic alternation between ELS and ILS. In this phase, each child is obtained by mutation and evaluated through a splitting procedure on a giant tour, followed by a local search. The tabu list, defined by two characteristics of a solution (total cost and number of trips), is used to avoid re-exploring parts of the search space. The results show that our metaheuristic clearly outperforms all previously published methods on LRP-2E benchmark instances. Furthermore, it is competitive with the best metaheuristics published for the single-echelon LRP.
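A generic skeleton of the GRASP construction loop referred to above (the callback names and the alpha-based restricted candidate list rule are illustrative assumptions, not the paper's exact heuristics):

import random

def grasp(candidates, incr_cost, extend, is_complete, local_search,
          objective, iters=100, alpha=0.3, seed=0):
    # Generic GRASP: greedy randomized construction followed by local search.
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        sol = []
        while not is_complete(sol):
            cands = candidates(sol)
            costs = {c: incr_cost(sol, c) for c in cands}
            lo, hi = min(costs.values()), max(costs.values())
            # Restricted candidate list: moves within alpha of the greedy one.
            rcl = [c for c in cands if costs[c] <= lo + alpha * (hi - lo)]
            sol = extend(sol, rng.choice(rcl))
        sol = local_search(sol)
        val = objective(sol)
        if val < best_cost:
            best, best_cost = sol, val
    return best, best_cost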
Development of a Hybrid RANS/LES Method for Compressible Mixing Layer Simulations
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Alexander, J. Iwan D.; Reshotko, Eli
2001-01-01
A hybrid method has been developed for simulations of compressible turbulent mixing layers. Such mixing layers dominate the flows in exhaust systems of modern-day aircraft and also those of hypersonic vehicles currently under development. The hybrid method uses a Reynolds-averaged Navier-Stokes (RANS) procedure to calculate wall-bounded regions entering a mixing section, and a Large Eddy Simulation (LES) procedure to calculate the mixing-dominated regions. A numerical technique was developed to enable the use of the hybrid RANS/LES method on stretched, non-Cartesian grids. The hybrid RANS/LES method is applied to a benchmark compressible mixing layer experiment. Preliminary two-dimensional calculations are used to investigate the effects of axial grid density and boundary conditions. Actual LES calculations, performed in three spatial directions, indicated an initial vortex shedding followed by rapid transition to turbulence, which is in agreement with experimental observations.
Probabilistic seismic loss estimation via endurance time method
NASA Astrophysics Data System (ADS)
Tafakori, Ehsan; Pourzeynali, Saeid; Estekanchi, Homayoon E.
2017-01-01
Probabilistic seismic loss estimation is a methodology used as a quantitative and explicit expression of the performance of buildings, using terms that address the interests of both owners and insurance companies. Applying the ATC 58 approach for seismic loss assessment of buildings requires Incremental Dynamic Analysis (IDA), which needs hundreds of time-consuming analyses and thus hinders wide application. The Endurance Time Method (ETM) is proposed herein as part of a demand propagation prediction procedure and is shown to be an economical alternative to IDA. Various scenarios were considered to achieve this purpose and their appropriateness was evaluated using statistical methods. The most precise and efficient scenario was validated through comparison against IDA-driven response predictions of 34 code-conforming benchmark structures and was proven to be sufficiently precise while offering a great deal of efficiency. The loss values were estimated by replacing IDA with the proposed ETM-based procedure in the ATC 58 methodology, and it was found that these values suffer from varying inaccuracies, which were attributed to the discretized nature of the damage and loss prediction functions provided by ATC 58.
Building thermography as a tool in energy audits and building commissioning procedure
NASA Astrophysics Data System (ADS)
Kauppinen, Timo
2007-04-01
A building commissioning project (ToVa) was launched in Finland in 2003. A comprehensive commissioning procedure, covering the building process and the operation stage, was developed in the project. This procedure confirms the precise documentation of the client's goals, the definition of planning goals, and the performance of the building. It is rather common that, within 1-2 years of occupancy, users complain about defects or performance malfunctions of the building. Thermography is one important manual tool for verifying the thermal performance of the building envelope. In this paper the results from one pilot building (a school) are presented. In surveying the condition and energy efficiency of buildings, various auxiliary means are needed. By benchmarking, we can compare the consumption data of the target building with that of other buildings of the same type. An energy audit helps to localize and quantify the energy-saving potential. The most general and most effective auxiliary tool for monitoring the thermal performance of building envelopes is an infrared camera. Some examples of the use of thermography in energy audits are presented.
Taking the Battle Upstream: Towards a Benchmarking Role for NATO
2012-09-01
ERIC Educational Resources Information Center
Kent State Univ., OH. Ohio Literacy Resource Center.
This document is intended to show the relationship between Ohio's Standards and Competencies, Equipped for the Future's (EFF's) Standards and Components of Performance, and Ohio's Revised Benchmarks. The document is divided into three parts, with Part 1 covering mathematics instruction, Part 2 covering reading instruction, and Part 3 covering…
How do I know if my forecasts are better? Using benchmarks in hydrological ensemble prediction
NASA Astrophysics Data System (ADS)
Pappenberger, F.; Ramos, M. H.; Cloke, H. L.; Wetterhall, F.; Alfieri, L.; Bogner, K.; Mueller, A.; Salamon, P.
2015-03-01
The skill of a forecast can be assessed by comparing the relative proximity of both the forecast and a benchmark to the observations. Example benchmarks include climatology or a naïve forecast. Hydrological ensemble prediction systems (HEPS) are currently transforming the hydrological forecasting environment, but in this new field there is little information to guide researchers and operational forecasters on how benchmarks can best be used to evaluate their probabilistic forecasts. This study identifies that calculated forecast skill can vary depending on the benchmark selected, and that the selection of a benchmark for determining forecasting system skill is sensitive to a number of hydrological and system factors. A benchmark intercomparison experiment is then undertaken using the continuous ranked probability score (CRPS), a reference forecasting system and a suite of 23 different methods to derive benchmarks. The benchmarks are assessed within the operational set-up of the European Flood Awareness System (EFAS) to determine those that are 'toughest to beat' and so give the most robust discrimination of forecast skill, particularly for the spatial average fields that EFAS relies upon. Evaluating against an observed discharge proxy, the benchmark found to have the most utility for EFAS, and to avoid the most naïve skill across different hydrological situations, is meteorological persistency. This benchmark uses the latest meteorological observations of precipitation and temperature to drive the hydrological model. Hydrological long-term average benchmarks, which are currently used in EFAS, are very easily beaten by the forecasting system, and their use produces much naïve skill. When decomposed into seasons, the advanced meteorological benchmarks, which make use of meteorological observations from the past 20 years at the same calendar date, have the most skill discrimination. They are also good at discriminating skill in low flows and for all catchment sizes. Simpler meteorological benchmarks are particularly useful for high flows. Recommendations for EFAS are to move to routine use of meteorological persistency, an advanced meteorological benchmark and a simple meteorological benchmark in order to provide a robust evaluation of forecast skill. This work provides the first comprehensive evidence on how benchmarks can be used in the evaluation of skill in probabilistic hydrological forecasts and which benchmarks are most useful for skill discrimination and avoidance of naïve skill in a large-scale HEPS. It is recommended that all HEPS use the evidence and methodology provided here to evaluate which benchmarks to employ, so that forecasters can trust their skill evaluation and be confident that their forecasts are indeed better.
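Skill against a benchmark is conventionally summarized as a skill score; a minimal sketch of a sample-based ensemble CRPS and the corresponding skill computation (a generic illustration, not the EFAS implementation):

import numpy as np

def crps_ensemble(obs, ens):
    # Sample-based CRPS estimator for one forecast:
    # mean |x - obs| - 0.5 * mean |x - x'| over ensemble members x, x'.
    ens = np.asarray(ens, dtype=float)
    term1 = np.mean(np.abs(ens - obs))
    term2 = 0.5 * np.mean(np.abs(ens[:, None] - ens[None, :]))
    return term1 - term2

def crpss(obs_series, fc_ens, bench_ens):
    # CRPS skill score: 1 - CRPS_forecast / CRPS_benchmark
    # (1 = perfect, 0 = no better than the benchmark, < 0 = worse).
    crps_f = np.mean([crps_ensemble(o, e) for o, e in zip(obs_series, fc_ens)])
    crps_b = np.mean([crps_ensemble(o, e) for o, e in zip(obs_series, bench_ens)])
    return 1.0 - crps_f / crps_b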
Bess, John D.; Fujimoto, Nozomu
2014-10-09
Benchmark models were developed to evaluate six cold-critical and two warm-critical, zero-power measurements of the HTTR. Additional measurements of a fully-loaded subcritical configuration, core excess reactivity, shutdown margins, six isothermal temperature coefficients, and axial reaction-rate distributions were also evaluated as acceptable benchmark experiments. Insufficient information is publicly available to develop finely-detailed models of the HTTR as much of the design information is still proprietary. However, the uncertainties in the benchmark models are judged to be of sufficient magnitude to encompass any biases and bias uncertainties incurred through the simplification process used to develop the benchmark models. Dominant uncertainties in the experimental keff for all core configurations come from uncertainties in the impurity content of the various graphite blocks that comprise the HTTR. Monte Carlo calculations of keff are between approximately 0.9% and 2.7% greater than the benchmark values. Reevaluation of the HTTR models as additional information becomes available could improve the quality of this benchmark and possibly reduce the computational biases. High-quality characterization of graphite impurities would significantly improve the quality of the HTTR benchmark assessment. Simulations of the other reactor physics measurements are in good agreement with the benchmark experiment values. The complete benchmark evaluation details are available in the 2014 edition of the International Handbook of Evaluated Reactor Physics Benchmark Experiments.
Bertzbach, F; Franz, T; Möller, K
2012-01-01
This paper presents the performance improvements achieved in benchmarking projects in the wastewater industry in Germany over the last 15 years. A large number of changes in operational practice, and considerable annual savings, can be shown, induced in particular by benchmarking at process level. Investigation of this question produces some general findings for the inclusion of performance improvement in a benchmarking project and for the communication of its results. Thus, we elaborate on the concept of benchmarking at both utility and process level, which remains a necessary distinction for the integration of performance improvement into our benchmarking approach. To achieve performance improvement via benchmarking it should be made quite clear that this outcome depends, on the one hand, on a well-conducted benchmarking programme and, on the other, on the individual situation within each participating utility.
Benchmarking clinical photography services in the NHS.
Arbon, Giles
2015-01-01
Benchmarking is used in services across the National Health Service (NHS) through various benchmarking programs. Clinical photography services do not have such a program in place and have to rely on ad hoc surveys of other services. A trial benchmarking exercise was undertaken with 13 services in NHS Trusts. This highlighted valuable data and comparisons that can be used to benchmark and improve services throughout the profession.
A Seafloor Benchmark for 3-dimensional Geodesy
NASA Astrophysics Data System (ADS)
Chadwell, C. D.; Webb, S. C.; Nooner, S. L.
2014-12-01
We have developed an inexpensive, permanent seafloor benchmark to increase the longevity of seafloor geodetic measurements. The benchmark provides a physical tie to the sea floor lasting for decades (perhaps longer) on which geodetic sensors can be repeatedly placed and removed with millimeter resolution. Global coordinates estimated with seafloor geodetic techniques will remain attached to the benchmark allowing for the interchange of sensors as they fail or become obsolete, or for the sensors to be removed and used elsewhere, all the while maintaining a coherent series of positions referenced to the benchmark. The benchmark has been designed to free fall from the sea surface with transponders attached. The transponder can be recalled via an acoustic command sent from the surface to release from the benchmark and freely float to the sea surface for recovery. The duration of the sensor attachment to the benchmark will last from a few days to a few years depending on the specific needs of the experiment. The recovered sensors are then available to be reused at other locations, or again at the same site in the future. Three pins on the sensor frame mate precisely and unambiguously with three grooves on the benchmark. To reoccupy a benchmark a Remotely Operated Vehicle (ROV) uses its manipulator arm to place the sensor pins into the benchmark grooves. In June 2014 we deployed four benchmarks offshore central Oregon. We used the ROV Jason to successfully demonstrate the removal and replacement of packages onto the benchmark. We will show the benchmark design and its operational capabilities. Presently models of megathrust slip within the Cascadia Subduction Zone (CSZ) are mostly constrained by the sub-aerial GPS vectors from the Plate Boundary Observatory, a part of Earthscope. More long-lived seafloor geodetic measures are needed to better understand the earthquake and tsunami risk associated with a large rupture of the thrust fault within the Cascadia subduction zone. Using a ROV to place and remove sensors on the benchmarks will significantly reduce the number of sensors required by the community to monitor offshore strain in subduction zones.
Hedman, C.W.; Grace, S.L.; King, S.E.
2000-01-01
Longleaf pine (Pinus palustris) ecosystems are characterized by a diverse community of native groundcover species. Critics of plantation forestry claim that loblolly (Pinus taeda) and slash pine (Pinus elliottii) forests are devoid of native groundcover due to associated management practices. As a result of these practices, some believe that ecosystem functions characteristic of longleaf pine are lost under loblolly and slash pine plantation management. Our objective was to quantify and compare vegetation composition and structure of longleaf, loblolly, and slash pine forests of differing ages, management strategies, and land-use histories. Information from this study will further our understanding and lead to inferences about functional differences among pine cover types. Vegetation and environmental data were collected in 49 overstory plots across Southlands Experiment Forest in Bainbridge, GA. Nested plots, i.e. midstory, understory, and herbaceous, were replicated four times within each overstory plot. Over 400 species were identified. Herbaceous species richness was variable for all three pine cover types. Herbaceous richness for longleaf, slash, and loblolly pine averaged 15, 13, and 12 species per m2, respectively. Longleaf pine plots had significantly more (p < 0.029) herbaceous species and greater herbaceous cover (p < 0.001) than loblolly or slash pine plots. Longleaf and slash pine plots were otherwise similar in species richness and stand structure, both having lower overstory density, midstory density, and midstory cover than loblolly pine plots. Multivariate analyses provided additional perspectives on vegetation patterns. Ordination and classification procedures consistently placed herbaceous plots into two groups which we refer to as longleaf pine benchmark (34 plots) and non-benchmark (15 plots). Benchmark plots typically contained numerous herbaceous species characteristic of relic longleaf pine/wiregrass communities found in the area. Conversely, non-benchmark plots contained fewer species characteristic of relic longleaf pine/wiregrass communities and more ruderal species common to highly disturbed sites. The benchmark group included 12 naturally regenerated longleaf plots and 22 loblolly, slash, and longleaf pine plantation plots encompassing a broad range of silvicultural disturbances. Non-benchmark plots included eight afforested old-field plantation plots and seven cutover plantation plots. Regardless of overstory species, all afforested old fields were low either in native species richness or in abundance. Varying degrees of this groundcover condition were also found in some cutover plantation plots that were classified as non-benchmark. Environmental variables strongly influencing vegetation patterns included agricultural history and fire frequency. Results suggest that land-use history, particularly related to agriculture, has a greater influence on groundcover composition and structure in southern pine forests than more recent forest management activities or pine cover type. Additional research is needed to identify the potential for afforested old fields to recover native herbaceous species. In the interim, high-yield plantation management should initially target old-field sites which already support reduced numbers of groundcover species. Sites which have not been farmed in the past 50-60 years should be considered for longleaf pine restoration and multiple-use objectives, since they have the greatest potential for supporting diverse native vegetation. 
© 2000 Elsevier Science B.V.
Contemporary results of open aortic arch surgery.
Thomas, Mathew; Li, Zhuo; Cook, David J; Greason, Kevin L; Sundt, Thoralf M
2012-10-01
The success of endovascular therapies for descending thoracic aortic disease has turned attention toward stent graft options for repair of aortic arch aneurysms. Defining the role of such techniques demands understanding of contemporary results of open surgery. The outcomes of open arch procedures performed on a single surgical service from July 1, 2001 to August 30, 2010, were examined as defined per The Society of Thoracic Surgeons national database. During the study period, 209 patients (median age, 65 years; range, 26-88) underwent arch operations, of which 159 were elective procedures. In 65 the entire arch was replaced, 22 of whom had portions of the descending thoracic aorta simultaneously replaced via bilateral thoracosternotomy. Antegrade cerebral perfusion was used in 78 patients and retrograde cerebral perfusion in 1. Operative mortality was 2.5% in elective circumstances and 10% in emergency cases (P = .04). The stroke rate was 5.0% when procedures were performed electively and 11.8% when on an emergency basis (P = .11). Procedure-specific mortality rates were 5.5% for elective and 10% for emergency procedures with total arch replacement, and 1.0% for elective and 10% for emergency procedures with hemiarch replacement. Stratified by extent, neurologic event rates were 5.5% for elective and 10% for emergency procedures with total arch and 4.8% for elective and 12.5% for emergency procedures with hemiarch replacement. Open aortic arch replacement can be performed with low operative mortality and stroke rates, especially in elective circumstances, by a team with particular focus on the procedure. The results of novel endovascular therapies should be benchmarked against contemporary open series performed in such a setting. Copyright © 2012 The American Association for Thoracic Surgery. Published by Mosby, Inc. All rights reserved.
[Regional anaesthesia as advantage in competition between hospitals. Strategic market analysis].
Heller, A R; Bauer, K R; Eberlein-Gonska, M; Albrecht, D M; Koch, T
2009-05-01
Section 12 of the German Social Code, Book V (SGB V), is aimed towards competition, efficiency and quality in healthcare. Because surgical departments bill standard diagnosis-related group (DRG) case costs to health insurance companies, they demand best value for money from internal services. Thus, anaesthesia concepts are being closely scrutinized. The present analysis was performed to gain economic arguments for the strategic positioning of regional anaesthesia procedures in clinical pathways. Surgical procedures were chosen which had a relevant caseload in Germany in 2005 and in which regional anaesthesia procedures (alone or in combination with general anaesthesia) could routinely be used. The structure of costs and earnings for hospital services, split by cost types and cost centres as well as by underlying procedures, is contained in the annually updated, publicly accessible dataset (DRG browser) of the German Hospital Reimbursement Institute (InEK). For the year 2005, besides our own data, national anaesthesia staffing costs were available from the German Society of Anaesthesiology (DGAI). The curve of earnings per DRG can be calculated from the 2005 InEK browser. This curve is intersected by the cost curve at the national mean length of stay. The cost curve was calculated by process-oriented distribution of cost centres over the length of stay and allows benchmarking within the national competitive environment. For the comparison of process times, data from our local database were used. Because the InEK browser lacks process times, the cost positions 5.1-5.3 (anaesthesia staffing costs) and the national structure-adjusted anaesthesia staffing costs for 2005, as published by the DGAI, were used to calculate nationwide mean available anaesthesia times, which were compared with our own process times. In the portfolio diagram of length of stay and process times for each DRG, most procedures are located in the economically favourable lower left quadrant, in particular those with a high case mix (length of stay and anaesthesia times below the reimbursement-relevant national mean). The driver of increased earnings is a shortened length of stay. Our use of regional anaesthesia is 5- to 10-fold higher than national benchmarks and may contribute to our advantageous position in national competition. The annual increases in profit per DRG range between EUR 1,706 and EUR 467,359 and far outweigh the investment in regional anaesthesia-based pain management, besides the advantages of increased patient satisfaction and avoidance of complications. Regional anaesthesia is a considerable value driver in clinical pathways through shortening length of stay. The present analysis further demonstrates that the time needed for regional block performance is covered by the anaesthesia reimbursement within the DRG costing schedule.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abercrombie, Robert K; Udoeyop, Akaninyene W; Schlicher, Bob G
This work examines a scientometric model that tracks the emergence of an identified technology from initial discovery (via original scientific and conference literature), through critical discoveries (via original scientific, conference literature and patents), transitioning through Technology Readiness Levels (TRLs) and ultimately on to commercial application. During the period of innovation and technology transfer, the impact of scholarly works, patents and on-line web news sources is identified. As trends develop, currency of citations, collaboration indicators, and on-line news patterns are identified. The combinations of four distinct and separate searchable on-line networked sources (i.e., scholarly publications and citations, patents, news archives, and online mapping networks) are assembled to become one collective network (a dataset for analysis of relations). This established network becomes the basis from which to quickly analyze the temporal flow of activity (searchable events) for the example subject domain we investigated.
The Model Averaging for Dichotomous Response Benchmark Dose (MADr-BMD) Tool
The tool provides quantal response models, which are also used in the U.S. EPA benchmark dose software suite, and generates a model-averaged dose-response model to produce benchmark dose (BMD) and benchmark dose lower bound (BMDL) estimates.
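Model averaging of dose-response fits is commonly done with information-criterion weights; a minimal sketch under that assumption (the tool's exact weighting scheme is not specified here, and the two fitted models are hypothetical):

import numpy as np
from scipy.stats import norm

def aic_weights(aics):
    # Akaike weights: w_i proportional to exp(-0.5 * (AIC_i - AIC_min)).
    aics = np.asarray(aics, dtype=float)
    rel = np.exp(-0.5 * (aics - aics.min()))
    return rel / rel.sum()

def averaged_response(dose, fitted_models, aics):
    # Model-averaged probability of response at a given dose: the
    # weighted mean of the individual quantal model predictions.
    w = aic_weights(aics)
    preds = np.array([m(dose) for m in fitted_models])
    return float(w @ preds)

# Example with two hypothetical fitted quantal models (logistic, probit).
models = [lambda d: 1.0 / (1.0 + np.exp(-(-2.0 + 0.8 * d))),
          lambda d: norm.cdf(-1.8 + 0.7 * d)]
print(averaged_response(2.5, models, aics=[103.2, 101.7]))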
Benchmarking--Measuring and Comparing for Continuous Improvement.
ERIC Educational Resources Information Center
Henczel, Sue
2002-01-01
Discussion of benchmarking focuses on the use of internal and external benchmarking by special librarians. Highlights include defining types of benchmarking; historical development; benefits, including efficiency, improved performance, increased competitiveness, and better decision making; problems, including inappropriate adaptation; developing a…
Franco, José G; Petersen, Claudia G; Mauri, Ana L; Vagnini, Laura D; Renzi, Adriana; Petersen, Bruna; Mattila, M C; Comar, Vanessa A; Ricci, Juliana; Dieamant, Felipe; Oliveira, João Batista A; Baruffi, Ricardo L R
2017-06-01
KPIs have been employed for internal quality control (IQC) in ART. However, clinical KPIs (C-KPIs), such as age, AMH and the number of oocytes collected, are never combined with laboratory KPIs (L-KPIs), such as the fertilization rate and the morphological quality of the embryos, for analysis, even though the final endpoint is the evaluation of clinical pregnancy rates. This paper analyzed whether a KPIs-score strategy with clinical and laboratory parameters could be used to establish benchmarks for IQC in ART cycles. In this prospective cohort study, 280 patients (36.4±4.3 years) underwent ART. The total KPIs-score was obtained by the analysis of age, AMH (AMH Gen II ELISA/pre-mixing modified, Beckman Coulter Inc.), number of metaphase-II oocytes, fertilization rates and the morphological quality of the embryonic lot. The total KPIs-score (C-KPIs + L-KPIs) was correlated with the presence or absence of clinical pregnancy. The relationship between the C-KPIs and L-KPIs scores was analyzed to establish quality standards and to increase the performance of clinical and laboratory processes in ART. The logistic regression model (LRM), with respect to pregnancy and total KPIs-score (280 patients/102 clinical pregnancies), yielded an odds ratio of 1.24 (95%CI = 1.16-1.32). There was also a significant difference (p<0.0001) in the mean total KPIs-score between the group of patients with clinical pregnancies (total KPIs-score = 20.4±3.7) and the group without clinical pregnancies (total KPIs-score = 15.9±5). Clinical pregnancy probabilities (CPP) can be obtained using the LRM (prediction key) with the total KPIs-score as the predictor variable. The mean C-KPIs and L-KPIs scores obtained in the pregnancy group were 11.9±2.9 and 8.5±1.7, respectively. Routinely, in all cases where the C-KPIs score was ≥9 but the L-KPIs score obtained after the procedure was ≤6, a revision of the laboratory procedure was performed to assess quality standards. This total KPIs-score could establish benchmarks for clinical pregnancy, and IQC can use the C-KPIs and L-KPIs scores to detect problems at the clinical-laboratory interface.
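The reported odds ratio translates into a logistic prediction key; a minimal sketch, where the intercept BETA0 is a hypothetical placeholder (only the per-point odds ratio of 1.24 and the group means come from the abstract):

import math

OR_PER_POINT = 1.24          # reported odds ratio per unit of total KPIs-score
BETA1 = math.log(OR_PER_POINT)
BETA0 = -5.0                 # hypothetical intercept; not reported in the abstract

def clinical_pregnancy_probability(total_kpis_score):
    # Logistic prediction key: p = 1 / (1 + exp(-(b0 + b1 * score))).
    z = BETA0 + BETA1 * total_kpis_score
    return 1.0 / (1.0 + math.exp(-z))

# Group means from the abstract: 20.4 (pregnant) vs 15.9 (not pregnant).
for score in (15.9, 20.4):
    print(score, round(clinical_pregnancy_probability(score), 2))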
Developing Benchmarks for Solar Radio Bursts
NASA Astrophysics Data System (ADS)
Biesecker, D. A.; White, S. M.; Gopalswamy, N.; Black, C.; Domm, P.; Love, J. J.; Pierson, J.
2016-12-01
Solar radio bursts can interfere with radar, communication, and tracking signals. In severe cases, radio bursts can inhibit the successful use of radio communications and disrupt a wide range of systems that are reliant on Position, Navigation, and Timing services on timescales ranging from minutes to hours across wide areas on the dayside of Earth. The White House's Space Weather Action Plan has asked for solar radio burst intensity benchmarks for an event occurrence frequency of 1 in 100 years and also a theoretical maximum intensity benchmark. The solar radio benchmark team was also asked to define the wavelength/frequency bands of interest. The benchmark team developed preliminary (phase 1) benchmarks for the VHF (30-300 MHz), UHF (300-3000 MHz), GPS (1176-1602 MHz), F10.7 (2800 MHz), and microwave (4000-20,000 MHz) bands. The preliminary benchmarks were derived from previously published work. Limitations in the published work will be addressed in phase 2 of the benchmark process. In addition, deriving theoretical maxima requires additional work, where doing so is even possible, in order to meet the Action Plan objectives. In this presentation, we will present the phase 1 benchmarks and the basis used to derive them. We will also present the work that needs to be done in order to complete the final, phase 2 benchmarks.
Benchmarking in national health service procurement in Scotland.
Walker, Scott; Masson, Ron; Telford, Ronnie; White, David
2007-11-01
The paper reports the results of a study on benchmarking activities undertaken by the procurement organization within the National Health Service (NHS) in Scotland, namely National Procurement (previously Scottish Healthcare Supplies Contracts Branch). NHS performance is of course politically important, and benchmarking is increasingly seen as a means to improve performance, so the study was carried out to determine if the current benchmarking approaches could be enhanced. A review of the benchmarking activities used by the private sector, local government and NHS organizations was carried out to establish a framework of the motivations, benefits, problems and costs associated with benchmarking. This framework was used to carry out the research through case studies and a questionnaire survey of NHS procurement organizations both in Scotland and other parts of the UK. Nine of the 16 Scottish Health Boards surveyed reported carrying out benchmarking during the last three years. The findings of the research were that there were similarities in approaches between local government and NHS Scotland Health, but differences between NHS Scotland and other UK NHS procurement organizations. Benefits were seen as significant and it was recommended that National Procurement should pursue the formation of a benchmarking group with members drawn from NHS Scotland and external benchmarking bodies to establish measures to be used in benchmarking across the whole of NHS Scotland.
Blecher, Evan
2010-08-01
To investigate the appropriateness of tax incidence (the percentage of the retail price occupied by taxes) benchmarking in low-income and middle-income countries (LMICs) with rapidly growing economies, and to explore the viability of an alternative tax policy rule based on the affordability of cigarettes. The paper outlines criticisms of tax incidence benchmarking, particularly in the context of LMICs. It then considers an affordability-based benchmark using the relative income price (RIP) as a measure of affordability. The RIP measures the percentage of annual per capita GDP required to purchase 100 packs of cigarettes. Using South Africa as a case study of an LMIC, future consumption is simulated using both tax incidence benchmarks and affordability benchmarks. I show that a tax incidence benchmark is not an optimal policy tool in South Africa and that an affordability benchmark could be a more effective means of reducing tobacco consumption in the future. Although a tax incidence benchmark was successful in increasing prices and reducing tobacco consumption in South Africa in the past, this approach has drawbacks, particularly in the context of a rapidly growing LMIC economy. An affordability benchmark represents an appropriate alternative that would be more effective in reducing future cigarette consumption.
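The RIP affordability measure is straightforward to compute; a minimal sketch with illustrative, hypothetical numbers (not the paper's data):

def relative_income_price(price_per_pack, gdp_per_capita):
    # RIP: percentage of annual per capita GDP needed to buy 100 packs.
    return 100.0 * price_per_pack / gdp_per_capita * 100.0

# Hypothetical example: packs at 3.50 in local currency, GDP per capita 60,000.
print(relative_income_price(3.50, 60_000))  # -> about 0.58 percent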
Liu, Guang-Hui; Shen, Hong-Bin; Yu, Dong-Jun
2016-04-01
Accurately predicting protein-protein interaction sites (PPIs) is currently a hot topic because it has been demonstrated to be very useful for understanding disease mechanisms and designing drugs. Machine-learning-based computational approaches have been broadly utilized and demonstrated to be useful for PPI prediction. However, directly applying traditional machine learning algorithms, which often assume that samples in different classes are balanced, often leads to poor performance because of the severe class imbalance that exists in the PPI prediction problem. In this study, we propose a novel method for improving PPI prediction performance by relieving the severity of class imbalance using a data-cleaning procedure and reducing predicted false positives with a post-filtering procedure: First, a machine-learning-based data-cleaning procedure is applied to remove those marginal targets, which may potentially have a negative effect on training a model with a clear classification boundary, from the majority samples to relieve the severity of class imbalance in the original training dataset; then, a prediction model is trained on the cleaned dataset; finally, an effective post-filtering procedure is further used to reduce potential false positive predictions. Stringent cross-validation and independent validation tests on benchmark datasets demonstrated the efficacy of the proposed method, which exhibits highly competitive performance compared with existing state-of-the-art sequence-based PPIs predictors and should supplement existing PPI prediction methods.
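One common way to realize such a data-cleaning step is to drop majority-class training samples that a preliminary model places near the decision boundary; a minimal sketch of that idea (an illustration of the general strategy, not the authors' exact procedure):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def clean_majority(X, y, majority_label=0, band=0.15, seed=0):
    # Fit a preliminary model, then remove majority-class samples whose
    # predicted probability of being positive is within `band` of 0.5,
    # i.e. the marginal targets that blur the classification boundary.
    # (In practice, out-of-fold probabilities would be preferable.)
    pre = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X, y)
    p = pre.predict_proba(X)[:, 1]
    marginal = (y == majority_label) & (np.abs(p - 0.5) < band)
    keep = ~marginal
    return X[keep], y[keep]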
Predicting drug-target interactions by dual-network integrated logistic matrix factorization
NASA Astrophysics Data System (ADS)
Hao, Ming; Bryant, Stephen H.; Wang, Yanli
2017-01-01
In this work, we propose a dual-network integrated logistic matrix factorization (DNILMF) algorithm to predict potential drug-target interactions (DTI). The prediction procedure consists of four steps: (1) inferring new drug/target profiles and constructing profile kernel matrix; (2) diffusing drug profile kernel matrix with drug structure kernel matrix; (3) diffusing target profile kernel matrix with target sequence kernel matrix; and (4) building DNILMF model and smoothing new drug/target predictions based on their neighbors. We compare our algorithm with the state-of-the-art method based on the benchmark dataset. Results indicate that the DNILMF algorithm outperforms the previously reported approaches in terms of AUPR (area under precision-recall curve) and AUC (area under curve of receiver operating characteristic) based on the 5 trials of 10-fold cross-validation. We conclude that the performance improvement depends on not only the proposed objective function, but also the used nonlinear diffusion technique which is important but under studied in the DTI prediction field. In addition, we also compile a new DTI dataset for increasing the diversity of currently available benchmark datasets. The top prediction results for the new dataset are confirmed by experimental studies or supported by other computational research.
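At the core of such methods is a logistic matrix factorization; a minimal gradient-ascent sketch of that component alone (the dual-network kernel diffusion and neighbor-smoothing steps are omitted, and all names are illustrative):

import numpy as np

def logistic_mf(Y, rank=10, lr=0.01, reg=0.1, iters=500, seed=0):
    # Y: binary drug-target interaction matrix (n_drugs x n_targets).
    # Model: P(interaction) = sigmoid(U @ V.T); maximize the penalized
    # Bernoulli log-likelihood by simple full-batch gradient ascent.
    rng = np.random.default_rng(seed)
    n, m = Y.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    for _ in range(iters):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))
        G = Y - P                        # gradient of log-likelihood wrt scores
        U += lr * (G @ V - reg * U)
        V += lr * (G.T @ U - reg * V)
    return 1.0 / (1.0 + np.exp(-(U @ V.T)))  # predicted interaction scores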
Tracking the emergence of synthetic biology.
Shapira, Philip; Kwon, Seokbeom; Youtie, Jan
2017-01-01
Synthetic biology is an emerging domain that combines biological and engineering concepts and which has seen rapid growth in research, innovation, and policy interest in recent years. This paper contributes to efforts to delineate this emerging domain by presenting a newly constructed bibliometric definition of synthetic biology. Our approach builds outward from a core set of papers in synthetic biology, using procedures to obtain benchmark synthetic biology publication records, extract keywords from these benchmark records, and refine the keywords, supplemented with articles published in dedicated synthetic biology journals. We compare our search strategy with other recent bibliometric approaches to defining synthetic biology, using a common source of publication data for the period from 2000 to 2015. The paper details the rapid growth and international spread of research in synthetic biology in recent years, demonstrates that diverse research disciplines are contributing to the multidisciplinary development of synthetic biology research, and visualizes this by profiling synthetic biology research on the map of science. We further show the roles of a relatively concentrated set of research sponsors in funding the growth and trajectories of synthetic biology. In addition to discussing these analyses, the paper notes limitations and suggests lines for further work.
Hogan, Bridget; Keating, Matthew; Chambers, Neil A; von Ungern-Sternberg, Britta
2016-05-01
There are no internationally accepted guidelines about what constitutes adequate clinical exposure during pediatric anesthetic training. In Australia, no data have been published on the level of experience obtained by anesthetic trainees in pediatric anesthesia. There is, however, a new ANZCA (Australian and New Zealand College of Anaesthetists) curriculum that quantifies new training requirements. To quantify our trainees' exposure to clinical work in order to assess compliance with new curriculum and to provide other institutions with a benchmark for pediatric anesthetic training. We performed a prospective audit to estimate and quantify our anesthetic registrars' exposure to pediatric anesthesia during their 6-month rotation at our institution, a tertiary pediatric hospital in Perth, Western Australia. Our data suggest that trainees at our institution will achieve the new ANZCA training standards comfortably, in terms of the required volume and breadth of exposure. Experience, however, of some advanced pediatric anesthetic procedures appears limited. Experience gained at our hospital easily meets the new College requirements. Experience of fiber-optic intubation and regional blocks would appear insufficient to develop sufficient skills or confidence. The study provides other institutions with information to benchmark against their own trainee experience. © 2016 John Wiley & Sons Ltd.
Omori, Satoshi; Kitao, Akio
2013-06-01
We propose a fast clustering and reranking method, CyClus, for protein-protein docking decoys. This method enables comprehensive clustering of whole decoy sets generated by rigid-body docking, using a cylindrical approximation of the protein-protein interface and hierarchical clustering procedures. We demonstrate the clustering and reranking of 54,000 decoy structures generated by ZDOCK for each complex within a few minutes. After parameter tuning on the test set in ZDOCK benchmark 2.0 with the ZDOCK and ZRANK scoring functions, blind tests on the incremental data in ZDOCK benchmarks 3.0 and 4.0 were conducted. CyClus successfully generated smaller subsets of decoys containing near-native decoys. For example, the number of decoys required to create subsets containing near-native decoys with 80% probability was reduced to between 22% and 50% of the number required by the original ZDOCK. Although specific ZDOCK and ZRANK results were demonstrated, the CyClus algorithm was designed to be more general and can be applied to a wide range of decoys and scoring functions by adjusting just two parameters, p and T. CyClus results were also compared to those from ClusPro. Copyright © 2013 Wiley Periodicals, Inc.
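The hierarchical clustering and reranking steps can be sketched with standard tooling; the sketch below assumes each decoy has been reduced to a fixed-length interface descriptor vector (the cylindrical approximation itself is not reproduced, and the function names are illustrative):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_decoys(descriptors, threshold):
    # descriptors: (n_decoys x d) array of per-decoy interface descriptors.
    # Average-linkage hierarchical clustering, cut at a distance threshold.
    Z = linkage(descriptors, method='average', metric='euclidean')
    return fcluster(Z, t=threshold, criterion='distance')

def rerank_by_cluster(scores, labels):
    # Keep one best-scoring (lowest-score) representative per cluster and
    # order the representatives by score, shrinking the candidate subset.
    best = {}
    for i, (s, c) in enumerate(zip(scores, labels)):
        if c not in best or s < scores[best[c]]:
            best[c] = i
    return sorted(best.values(), key=lambda i: scores[i])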
NASA Astrophysics Data System (ADS)
Chen, Feng; Xu, Ai-Guo; Zhang, Guang-Cai; Gan, Yan-Biao; Cheng, Tao; Li, Ying-Jun
2009-10-01
We present a highly efficient lattice Boltzmann model for simulating compressible flows. This model is based on the combination of an appropriate finite difference scheme, a 16-discrete-velocity model [Kataoka and Tsutahara, Phys. Rev. E 69 (2004) 035701(R)] and reasonable dispersion and dissipation terms. The dispersion term effectively reduces the oscillation at discontinuities and enhances numerical precision. The dissipation term makes it easier for the new model to satisfy the von Neumann stability condition. The model works for both high-speed and low-speed flows with arbitrary specific-heat ratio. With the new model, simulation results for well-known benchmark problems agree closely with analytic or experimental ones. The benchmark tests used include (i) shock tubes, such as the Sod, Lax, Sjogreen and Colella explosion-wave problems and the collision of two strong shocks; (ii) regular and Mach shock reflections; and (iii) shock wave interaction with a cylindrical bubble. With a more realistic equation of state or free-energy functional, the new model has the potential to study the complex process of shock wave interaction with porous materials.
[Benchmarking in ambulatory care practices--The European Practice Assessment (EPA)].
Szecsenyi, Joachim; Broge, Björn; Willms, Sara; Brodowski, Marc; Götz, Katja
2011-01-01
The European Practice Assessment (EPA) is a comprehensive quality management instrument consisting of 220 indicators covering 5 domains (infrastructure, people, information, finance, and quality and safety). The aim of the project presented here was to evaluate EPA as an instrument for benchmarking in ambulatory care practices. A before-and-after design with a comparison group was chosen. One hundred and two practices conducted EPA at baseline (t1) and at the 3-year follow-up (t2). A further 209 practices began EPA at t2 (comparison group). Since the two practice groups differed in several variables (age of GP, location and size of practice), a matched-pair design based on propensity scores was applied, yielding a subgroup of 102 comparable practices (out of the 209). Data analysis was carried out using Z scores of the EPA domains. The results showed significant improvements in all domains between t1 and t2 as well as between the comparison group and t2. Furthermore, the results demonstrate that the implementation of total quality management and the re-assessment of the EPA procedure can lead to significant improvements in almost all domains. Copyright © 2011. Published by Elsevier GmbH.
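A sketch of the matched-pair construction described above, with invented covariates and a greedy nearest-propensity match (the study's exact matching algorithm is not specified in the abstract):

```python
# Propensity-score matching sketch: pair each EPA practice with the
# comparison practice whose estimated propensity score is closest.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(311, 3))                  # covariates: GP age, size, location
group = np.r_[np.ones(102), np.zeros(209)]     # 1 = EPA cohort, 0 = comparison pool

ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]
epa, pool = np.where(group == 1)[0], np.where(group == 0)[0]
# greedy match with replacement, a simplification of real matched-pair designs
pairs = [(i, pool[np.argmin(np.abs(ps[pool] - ps[i]))]) for i in epa]
print(pairs[:3])   # matched subgroup used for the before-and-after comparison
```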
Costentin, Cyrille; Savéant, Jean-Michel
2017-06-21
Modern energy challenges trigger intense interest in the catalysis of redox reactions, both electrochemical and photochemical, particularly those involving small molecules such as water, hydrogen, oxygen, protons and carbon dioxide. A continuously increasing number of molecular catalysts of these reactions, mostly transition metal complexes, have been proposed, making procedures for their rational benchmarking necessary and fueling the quest for leading principles that could inspire the design of improved catalysts. The search for "volcano plots" correlating catalysis kinetics with the stability of the key intermediate is a popular approach to this question in catalysis by surface-active sites, the foremost example being the electrochemical reduction of aqueous protons on metal surfaces. We discuss here, for the first time on theoretical and experimental grounds, the pertinence of such an approach in the field of molecular catalysis. This is also an occasion to insist on the virtue of careful mechanism assignment. Particular emphasis is put on the interest of expressing catalysts' intrinsic kinetic properties by means of catalytic Tafel plots, which relate kinetics and overpotential. We also underscore that the principles and strategies put forward for the catalytic activation of the above-mentioned small molecules are general, as illustrated by catalytic applications outside this particular field.
Protocol for a national blood transfusion data warehouse from donor to recipient
van Hoeven, Loan R; Hooftman, Babette H; Janssen, Mart P; de Bruijne, Martine C; de Vooght, Karen M K; Kemper, Peter; Koopman, Maria M W
2016-01-01
Introduction: Blood transfusion has health-related, economical and safety implications. In order to optimise the transfusion chain, comprehensive research data are needed. The Dutch Transfusion Data warehouse (DTD) project aims to establish a data warehouse where data from donors and transfusion recipients are linked. This paper describes the design of the data warehouse, challenges and illustrative applications. Study design and methods: Quantitative data on blood donors (eg, age, blood group, antibodies) and products (type of product, processing, storage time) are obtained from the national blood bank. These are linked to data on the transfusion recipients (eg, transfusions administered, patient diagnosis, surgical procedures, laboratory parameters), which are extracted from hospital electronic health records. Applications: Expected scientific contributions are illustrated for 4 applications: determine risk factors, predict blood use, benchmark blood use and optimise process efficiency. For each application, examples of research questions are given and analyses planned. Conclusions: The DTD project aims to build a national, continuously updated transfusion data warehouse. These data have a wide range of applications, on the donor/production side, recipient studies on blood usage and benchmarking and donor–recipient studies, which ultimately can contribute to the efficiency and safety of blood transfusion. PMID:27491665
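The donor-to-product-to-recipient linkage at the heart of the warehouse can be pictured as two key joins. A schematic with hypothetical column names (the actual DTD schema is not given in the abstract):

```python
# Linking donor, product, and recipient tables into one transfusion chain.
import pandas as pd

donors = pd.DataFrame({"donor_id": [1, 2], "blood_group": ["O+", "A-"]})
products = pd.DataFrame({"unit_id": [10, 11], "donor_id": [1, 2],
                         "product_type": ["RBC", "FFP"], "storage_days": [12, 30]})
transfusions = pd.DataFrame({"unit_id": [10, 11], "patient_id": ["p7", "p9"],
                             "diagnosis": ["anaemia", "trauma"]})

chain = (products.merge(donors, on="donor_id")        # donor -> product
                 .merge(transfusions, on="unit_id"))  # product -> recipient
print(chain)   # one row per transfused unit, donor and recipient attributes linked
```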
Benchmarking: applications to transfusion medicine.
Apelseth, Torunn Oveland; Molnar, Laura; Arnold, Emmy; Heddle, Nancy M
2012-10-01
Benchmarking is a structured, continuous, collaborative process in which comparisons of selected indicators are used to identify factors that, when implemented, will improve transfusion practices. This study aimed to identify transfusion medicine studies reporting on benchmarking, summarize the benchmarking approaches used, and identify important considerations to move the concept of benchmarking forward in the field of transfusion medicine. A systematic review of published literature was performed to identify transfusion medicine-related studies that compared at least 2 separate institutions or regions with the intention of benchmarking, focusing on 4 areas: blood utilization, safety, operational aspects, and blood donation. Forty-five studies were included: blood utilization (n = 35), safety (n = 5), operational aspects of transfusion medicine (n = 5), and blood donation (n = 0). Based on predefined criteria, 7 publications were classified as benchmarking, 2 as trending, and 36 as single-event studies. Three models of benchmarking are described: (1) a regional benchmarking program that collects and links relevant data from existing electronic sources, (2) a sentinel site model where data from a limited number of sites are collected, and (3) an institution-initiated model where a site identifies indicators of interest and approaches other institutions. Benchmarking approaches are needed in the field of transfusion medicine. Major challenges include defining best practices and developing cost-effective methods of data collection. For those interested in initiating a benchmarking program, the sentinel site model may be most effective and sustainable as a starting point, although the regional model would be the ideal goal. Copyright © 2012 Elsevier Inc. All rights reserved.
42 CFR 440.330 - Benchmark health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Benchmark-Equivalent Coverage § 440.330 Benchmark health benefits coverage. Benchmark coverage is health...) Federal Employees Health Benefit Plan Equivalent Coverage (FEHBP—Equivalent Health Insurance Coverage). A benefit plan equivalent to the standard Blue Cross/Blue Shield preferred provider option service benefit...
On geodynamo integrations conserving momentum flux
NASA Astrophysics Data System (ADS)
Wu, C.; Roberts, P. H.
2012-12-01
The equations governing the geodynamo are most often integrated by representing the magnetic field and fluid velocity by toroidal and poloidal scalars (for example, the MAG code [1]). This procedure does not automatically conserve the momentum flux. The results can, particularly for flows with large shear, contain significant errors unless the viscosity is artificially increased. We describe a method that evades this difficulty by solving the momentum equation directly while properly conserving momentum. It finds the pressure by FFT and cyclic reduction, and integrates the governing equations on overlapping grids, thus avoiding the pole problem. The number of operations per time step is proportional to N³, where N is proportional to the number of grid points in each direction. This contrasts with the order-N⁴ operation count of standard spectral transform methods. The method is easily parallelized. It can also be easily adapted to schemes such as the Weighted Essentially Non-Oscillatory (WENO) method [2], a flux-based upwinding procedure that is numerically stable even for zero explicit viscosity. The method has been successfully used to investigate the generation of magnetic fields by flows confined to spheroidal containers and driven by precessional and librational forcing [3, 4]. For spherical systems it satisfies the dynamo benchmarks [5]. [1] MAG, http://www.geodynamics.org/cig/software/mag [2] Liu, XD, Osher, S and Chan, T, Weighted Essentially Nonoscillatory Schemes, J. Computational Physics, 115, 200-212, 1994. [3] Wu, CC and Roberts, PH, On a dynamo driven by topographic precession, Geophysical & Astrophysical Fluid Dynamics, 103, 467-501 (DOI: 10.1080/03091920903311788), 2009. [4] Wu, CC and Roberts, PH, On a dynamo driven topographically by longitudinal libration, Geophysical & Astrophysical Fluid Dynamics, DOI: 10.1080/03091929.2012.682990, 2012. [5] Christensen, U, et al., A numerical dynamo benchmark, Phys. Earth Planet. Int., 128, 25-34, 2001.
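As a toy illustration of the FFT pressure solve mentioned above (the production code also uses cyclic reduction and overlapping grids, which are not shown), a periodic 1-D Poisson problem can be inverted spectrally:

```python
# Spectral solve of p'' = f on a periodic domain: the Laplacian becomes
# multiplication by -k^2 in Fourier space, so each mode is inverted directly.
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.sin(3 * x)                         # forcing with known solution -sin(3x)/9

k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers
fh = np.fft.fft(f)
ph = np.zeros_like(fh)
nz = k != 0                               # zero mode fixed by the gauge choice
ph[nz] = -fh[nz] / k[nz] ** 2             # invert the Laplacian mode by mode
p = np.real(np.fft.ifft(ph))
print(np.max(np.abs(p + np.sin(3 * x) / 9)))   # error ~ machine precision
```

The direct per-mode inversion is what keeps the pressure step cheap, consistent with the O(N³) per-timestep cost quoted above.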
Inaba, Masanori; Quinson, Jonathan; Bucher, Jan Rudolf; Arenz, Matthias
2018-03-16
We present a step-by-step tutorial to prepare proton exchange membrane fuel cell (PEMFC) catalysts, consisting of Pt nanoparticles (NPs) supported on a high surface area carbon, and to test their performance in thin film rotating disk electrode (TF-RDE) measurements. The TF-RDE methodology is widely used for catalyst screening; nevertheless, the measured performance sometimes considerably differs among research groups. These uncertainties impede the advancement of new catalyst materials and, consequently, several authors discussed possible best practice methods and the importance of benchmarking. The visual tutorial highlights possible pitfalls in the TF-RDE testing of Pt/C catalysts. A synthesis and testing protocol to assess standard Pt/C catalysts is introduced that can be used together with polycrystalline Pt disks as benchmark catalysts. In particular, this study highlights how the properties of the catalyst film on the glassy carbon (GC) electrode influence the measured performance in TF-RDE testing. To obtain thin, homogeneous catalyst films, not only the catalyst preparation, but also the ink deposition and drying procedures are essential. It is demonstrated that an adjustment of the ink's pH might be necessary, and how simple control measurements can be used to check film quality. Once reproducible TF-RDE measurements are obtained, determining the Pt loading on the catalyst support (expressed as Pt wt%) and the electrochemical surface area is necessary to normalize the determined reaction rates to either surface area or Pt mass. For the surface area determination, so-called CO stripping, or the determination of the hydrogen underpotential deposition (Hupd) charge, are standard. For the determination of the Pt loading, a straightforward and cheap procedure using digestion in aqua regia with subsequent conversion of Pt(IV) to Pt(II) and UV-vis measurements is introduced.
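The normalization step at the end, converting a CO-stripping charge into an electrochemical surface area and then into specific and mass activities, can be illustrated with worked numbers. The monolayer charge of roughly 420 µC cm⁻² is the commonly assumed conversion factor for CO stripping (about 210 µC cm⁻² for Hupd); all input values below are invented examples:

```python
# Worked normalization arithmetic for a TF-RDE measurement (example values).
Q_co = 8.4e-4           # C, charge under the CO-stripping peak
q_ml = 420e-6           # C per cm^2 Pt, assumed CO monolayer conversion factor
m_pt = 5.0e-6           # g Pt on the glassy carbon disk (from wt% and UV-vis)
i_k = 1.0e-3            # A, kinetic current at a chosen potential (example)

ecsa_cm2 = Q_co / q_ml                   # electrochemical surface area: 2.0 cm^2
sa = i_k / ecsa_cm2 * 1e3                # specific activity, mA per cm^2 Pt
ma = i_k / m_pt                          # mass activity, A per g Pt
print(f"ECSA = {ecsa_cm2:.1f} cm2, SA = {sa:.2f} mA/cm2, MA = {ma:.0f} A/g")
```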
Ó Conchúir, Shane; Barlow, Kyle A; Pache, Roland A; Ollikainen, Noah; Kundert, Kale; O'Meara, Matthew J; Smith, Colin A; Kortemme, Tanja
2015-01-01
The development and validation of computational macromolecular modeling and design methods depend on suitable benchmark datasets and informative metrics for comparing protocols. In addition, if a method is intended to be adopted broadly in diverse biological applications, there needs to be information on appropriate parameters for each protocol, as well as metrics describing the expected accuracy compared to experimental data. In certain disciplines, there exist established benchmarks and public resources where experts in a particular methodology are encouraged to supply their most efficient implementation of each particular benchmark. We aim to provide such a resource for protocols in macromolecular modeling and design. We present a freely accessible web resource (https://kortemmelab.ucsf.edu/benchmarks) to guide the development of protocols for protein modeling and design. The site provides benchmark datasets and metrics to compare the performance of a variety of modeling protocols using different computational sampling methods and energy functions, providing a "best practice" set of parameters for each method. Each benchmark has an associated downloadable benchmark capture archive containing the input files, analysis scripts, and tutorials for running the benchmark. The captures may be run with any suitable modeling method; we supply command lines for running the benchmarks using the Rosetta software suite. We have compiled initial benchmarks for the resource spanning three key areas: prediction of energetic effects of mutations, protein design, and protein structure prediction, each with associated state-of-the-art modeling protocols. With the help of the wider macromolecular modeling community, we hope to expand the variety of benchmarks included on the website and continue to evaluate new iterations of current methods as they become available.
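A sketch of the kind of accuracy metric such a resource reports for, say, the mutation-ddG benchmark, comparing a protocol's predictions against experimental values (all numbers invented for illustration):

```python
# Two standard comparison metrics: Pearson correlation and mean absolute error.
import numpy as np

ddg_exp = np.array([1.2, -0.3, 2.5, 0.8, -1.1])   # kcal/mol, experimental
ddg_pred = np.array([0.9, 0.1, 2.9, 0.5, -0.7])   # kcal/mol, a protocol's output

r = np.corrcoef(ddg_exp, ddg_pred)[0, 1]           # Pearson correlation
mae = np.mean(np.abs(ddg_exp - ddg_pred))          # mean absolute error
print(f"Pearson r = {r:.2f}, MAE = {mae:.2f} kcal/mol")
```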
Edwards, Roger A; Dee, Deborah; Umer, Amna; Perrine, Cria G; Shealy, Katherine R; Grummer-Strawn, Laurence M
2014-02-01
A substantial proportion of US maternity care facilities engage in practices that are not evidence-based and that interfere with breastfeeding. The CDC Survey of Maternity Practices in Infant Nutrition and Care (mPINC) showed significant variation in maternity practices among US states. The purpose of this article is to use benchmarking techniques to identify states within relevant peer groups that were top performers on mPINC survey indicators related to breastfeeding support. We used 11 indicators of breastfeeding-related maternity care from the 2011 mPINC survey and benchmarking techniques to organize and compare hospital-based maternity practices across the 50 states and Washington, DC. We created peer categories for benchmarking, first by region (grouping states by West, Midwest, South, and Northeast) and then by size (dividing each region into approximately equal halves based on the number of maternity facilities). Thirty-four states had scores high enough to serve as benchmarks, and 32 states had scores low enough to be furthest from the benchmark on at least 1 indicator. No state served as the benchmark on more than 5 indicators, and no state was furthest from the benchmark on more than 7 indicators. The small peer group benchmarks in the South, West, and Midwest were better than the large peer group benchmarks on 91%, 82%, and 36% of the indicators, respectively. In the West large, Midwest large, Midwest small, and South large peer groups, 4-6 benchmarks showed that less than 50% of hospitals in any state followed ideal practice. The evaluation presents benchmarks for peer-group state comparisons that provide potential and feasible targets for improvement.
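The peer-group benchmarking arithmetic, top performer per region-by-size group and each state's gap to it, reduces to a grouped maximum. An illustrative sketch with made-up scores (not mPINC data):

```python
# Grouped benchmark: best score within each peer group, and each state's gap.
import pandas as pd

df = pd.DataFrame({
    "state": ["CA", "WA", "OR", "TX", "FL", "GA"],
    "peer_group": ["West-large", "West-large", "West-small",
                   "South-large", "South-large", "South-small"],
    "indicator_score": [82, 91, 77, 68, 74, 80],
})
df["benchmark"] = df.groupby("peer_group")["indicator_score"].transform("max")
df["gap"] = df["benchmark"] - df["indicator_score"]
print(df.sort_values(["peer_group", "gap"]))   # gap = 0 marks the benchmark state
```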
Hospital benchmarking: are U.S. eye hospitals ready?
de Korne, Dirk F; van Wijngaarden, Jeroen D H; Sol, Kees J C A; Betz, Robert; Thomas, Richard C; Schein, Oliver D; Klazinga, Niek S
2012-01-01
Benchmarking is increasingly considered a useful management instrument to improve quality in health care, but little is known about its applicability in hospital settings. The aims of this study were to assess the applicability of a benchmarking project in U.S. eye hospitals and compare the results with an international initiative. We evaluated multiple cases by applying an evaluation frame abstracted from the literature to five U.S. eye hospitals that used a set of 10 indicators for efficiency benchmarking. Qualitative analysis entailed 46 semistructured face-to-face interviews with stakeholders, document analyses, and questionnaires. The case studies only partially met the conditions of the evaluation frame. Although learning and quality improvement were stated as overall purposes, the benchmarking initiative was at first focused on efficiency only. No ophthalmic outcomes were included, and clinicians were skeptical about their reporting relevance and disclosure. However, in contrast with earlier findings in international eye hospitals, all U.S. hospitals worked with internal indicators that were integrated in their performance management systems and supported benchmarking. Benchmarking can support performance management in individual hospitals. Having a certain number of comparable institutes provide similar services in a noncompetitive milieu seems to lay fertile ground for benchmarking. International benchmarking is useful only when these conditions are not met nationally. Although the literature focuses on static conditions for effective benchmarking, our case studies show that it is a highly iterative and learning process. The journey of benchmarking seems to be more important than the destination. Improving patient value (health outcomes per unit of cost) requires, however, an integrative perspective where clinicians and administrators closely cooperate on both quality and efficiency issues. If these worlds do not share such a relationship, the added "public" value of benchmarking in health care is questionable.
Development and application of freshwater sediment-toxicity benchmarks for currently used pesticides
Nowell, Lisa H.; Norman, Julia E.; Ingersoll, Christopher G.; Moran, Patrick W.
2016-01-01
Sediment-toxicity benchmarks are needed to interpret the biological significance of currently used pesticides detected in whole sediments. Two types of freshwater sediment benchmarks for pesticides were developed using spiked-sediment bioassay (SSB) data from the literature. These benchmarks can be used to interpret sediment-toxicity data or to assess the potential toxicity of pesticides in whole sediment. The Likely Effect Benchmark (LEB) defines a pesticide concentration in whole sediment above which there is a high probability of adverse effects on benthic invertebrates, and the Threshold Effect Benchmark (TEB) defines a concentration below which adverse effects are unlikely. For compounds without available SSBs, benchmarks were estimated using equilibrium partitioning (EqP). When a sediment sample contains a pesticide mixture, benchmark quotients can be summed for all detected pesticides to produce an indicator of potential toxicity for that mixture. Benchmarks were developed for 48 pesticide compounds using SSB data and 81 compounds using the EqP approach. In an example application, data for pesticides measured in sediment from 197 streams across the United States were evaluated using these benchmarks, and compared to measured toxicity from whole-sediment toxicity tests conducted with the amphipod Hyalella azteca (28-d exposures) and the midge Chironomus dilutus (10-d exposures). Amphipod survival, weight, and biomass were significantly and inversely related to summed benchmark quotients, whereas midge survival, weight, and biomass showed no relationship to benchmarks. Samples with LEB exceedances were rare (n = 3), but all were toxic to amphipods (i.e., significantly different from control). Significant toxicity to amphipods was observed for 72% of samples exceeding one or more TEBs, compared to 18% of samples below all TEBs. Factors affecting toxicity below TEBs may include the presence of contaminants other than pesticides, physical/chemical characteristics of sediment, and uncertainty in TEB values. Additional evaluations of benchmarks in relation to sediment chemistry and toxicity are ongoing.
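The mixture indicator described above, summed benchmark quotients, is simple arithmetic; a sketch with placeholder (not published) benchmark values:

```python
# Sum of concentration/benchmark quotients across detected pesticides.
concs = {"bifenthrin": 4.0, "chlorpyrifos": 2.0}   # measured concentrations
teb = {"bifenthrin": 3.0, "chlorpyrifos": 5.0}     # hypothetical TEB values

sum_quotient = sum(concs[p] / teb[p] for p in concs)
print(f"sum TEB quotient = {sum_quotient:.2f}")    # larger values = greater
                                                   # potential for toxicity
```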
40 CFR 141.172 - Disinfection profiling and benchmarking.
Code of Federal Regulations, 2011 CFR
2011-07-01
... benchmarking. 141.172 Section 141.172 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Disinfection-Systems Serving 10,000 or More People § 141.172 Disinfection profiling and benchmarking. (a... sanitary surveys conducted by the State. (c) Disinfection benchmarking. (1) Any system required to develop...
42 CFR 440.390 - Assurance of transportation.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-Equivalent Coverage § 440.390 Assurance of transportation. If a benchmark or benchmark-equivalent plan does... nevertheless assure that emergency and non-emergency transportation is covered for beneficiaries enrolled in the benchmark or benchmark-equivalent plan, as required under § 431.53 of this chapter. ...
The Zoo, Benchmarks & You: How To Reach the Oregon State Benchmarks with Zoo Resources.
ERIC Educational Resources Information Center
2002
This document aligns Oregon state educational benchmarks and standards with Oregon Zoo resources. Benchmark areas examined include English, mathematics, science, social studies, and career and life roles. Brief descriptions of the programs offered by the zoo are presented. (SOE)
The Isprs Benchmark on Indoor Modelling
NASA Astrophysics Data System (ADS)
Khoshelham, K.; Díaz Vilariño, L.; Peter, M.; Kang, Z.; Acharya, D.
2017-09-01
Automated generation of 3D indoor models from point cloud data has been a topic of intensive research in recent years. While results on various datasets have been reported in the literature, comparing the performance of different methods has not been possible due to the lack of benchmark datasets and a common evaluation framework. The ISPRS benchmark on indoor modelling aims to address this issue by providing a public benchmark dataset and an evaluation framework for performance comparison of indoor modelling methods. In this paper, we present the benchmark dataset, comprising several point clouds of indoor environments captured by different sensors. We also discuss the evaluation and comparison of indoor modelling methods based on manually created reference models and appropriate quality evaluation criteria. The benchmark dataset is available for download at: http://www2.isprs.org/commissions/comm4/wg5/benchmark-on-indoor-modelling.html.
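Two quality criteria commonly used in such evaluation frameworks, completeness and correctness against a reference model, can be sketched as set overlap; matching reconstructed elements to reference elements by ID is a strong simplification of the real geometric matching:

```python
# Completeness: share of reference elements recovered.
# Correctness: share of reconstructed elements that exist in the reference.
reference = {"wall1", "wall2", "wall3", "door1", "door2"}
reconstructed = {"wall1", "wall2", "door1", "door3"}

tp = reference & reconstructed                     # correctly recovered elements
completeness = len(tp) / len(reference)
correctness = len(tp) / len(reconstructed)
print(f"completeness = {completeness:.2f}, correctness = {correctness:.2f}")
```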
Combining Phase Identification and Statistic Modeling for Automated Parallel Benchmark Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Ye; Ma, Xiaosong; Liu, Qing Gary
2015-01-01
Parallel application benchmarks are indispensable for evaluating and optimizing HPC software and hardware. However, it is very challenging and costly to obtain high-fidelity benchmarks reflecting the scale and complexity of state-of-the-art parallel applications. Hand-extracted synthetic benchmarks are time- and labor-intensive to create. Real applications themselves, while offering the most accurate performance evaluation, are expensive to compile, port, and reconfigure, and are often plainly inaccessible due to security or ownership concerns. This work contributes APPRIME, a novel tool for trace-based automatic parallel benchmark generation. Taking as input standard communication-I/O traces of an application's execution, it couples accurate automatic phase identification with statistical regeneration of event parameters to create compact, portable, and to some degree reconfigurable parallel application benchmarks. Experiments with four NAS Parallel Benchmarks (NPB) and three real scientific simulation codes confirm the fidelity of APPRIME benchmarks. They retain the original applications' performance characteristics, in particular the relative performance across platforms.
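A heavily simplified sketch of the two-step idea, phase identification followed by statistical regeneration, under the assumption that phases can be found by clustering per-event trace features and that per-phase parameters are adequately described by simple moments (the real APPRIME analysis is far more sophisticated):

```python
# Step 1: cluster trace events into phases. Step 2: fit per-phase statistics
# and regenerate synthetic events with the same distributional character.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# hypothetical trace: per-event (message size in bytes, duration in ms) records
trace = np.vstack([rng.normal([1e4, 2.0], [1e3, 0.2], (200, 2)),
                   rng.normal([1e6, 9.0], [1e5, 0.5], (200, 2))])

phases = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(trace)
synthetic = []
for p in np.unique(phases):
    seg = trace[phases == p]
    mu, sd = seg.mean(axis=0), seg.std(axis=0)
    synthetic.append(rng.normal(mu, sd, seg.shape))   # regenerated events
synthetic = np.vstack(synthetic)
print(synthetic.shape)   # a compact statistical stand-in for the real trace
```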
AlZhrani, Gmaan; Alotaibi, Fahad; Azarnoush, Hamed; Winkler-Schwartz, Alexander; Sabbagh, Abdulrahman; Bajunaid, Khalid; Lajoie, Susanne P; Del Maestro, Rolando F
2015-01-01
Assessment of neurosurgical technical skills involved in the resection of cerebral tumors in operative environments is complex. Educators emphasize the need to develop and use objective and meaningful assessment tools that are reliable and valid for assessing trainees' progress in acquiring surgical skills. The purpose of this study was to develop proficiency performance benchmarks for a newly proposed set of objective measures (metrics) of neurosurgical technical skills performance during simulated brain tumor resection using a new virtual reality simulator (NeuroTouch). Each participant performed the resection of 18 simulated brain tumors of different complexity using the NeuroTouch platform. Surgical performance was computed using Tier 1 and Tier 2 metrics derived from NeuroTouch simulator data consisting of (1) safety metrics, including (a) volume of surrounding simulated normal brain tissue removed, (b) sum of forces utilized, and (c) maximum force applied during tumor resection; (2) quality of operation metric, which involved the percentage of tumor removed; and (3) efficiency metrics, including (a) instrument total tip path lengths and (b) frequency of pedal activation. All studies were conducted in the Neurosurgical Simulation Research Centre, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada. A total of 33 participants were recruited, including 17 experts (board-certified neurosurgeons) and 16 novices (7 senior and 9 junior neurosurgery residents). The results demonstrated that "expert" neurosurgeons resected less surrounding simulated normal brain tissue and less tumor tissue than residents. These data are consistent with the concept that "experts" focused more on safety of the surgical procedure compared with novices. By analyzing experts' neurosurgical technical skills performance on these different metrics, we were able to establish benchmarks for goal proficiency performance training of neurosurgery residents. This study furthers our understanding of expert neurosurgical performance during the resection of simulated virtual reality tumors and provides neurosurgical trainees with predefined proficiency performance benchmarks designed to maximize the learning of specific surgical technical skills. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
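One way to turn expert performance into a proficiency benchmark for a single safety metric is a mean-plus-one-SD band over the expert group; this convention and all numbers below are assumptions, not the study's published definition:

```python
# Deriving a proficiency target for one safety metric from expert data.
import numpy as np

expert_tissue_removed = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.0])  # cm^3, made up
mu = expert_tissue_removed.mean()
sd = expert_tissue_removed.std(ddof=1)          # sample standard deviation
benchmark_upper = mu + sd                       # assumed tolerance band
print(f"target: <= {benchmark_upper:.2f} cm^3 of normal tissue removed")
```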
Peeters, Dominique; Sekeris, Elke; Verschaffel, Lieven; Luwel, Koen
2017-01-01
Some authors argue that age-related improvements in number line estimation (NLE) performance result from changes in strategy use. More specifically, children’s strategy use develops from only using the origin of the number line, to using the origin and the endpoint, to eventually also relying on the midpoint of the number line. Recently, Peeters et al. (unpublished) investigated whether the provision of additional unlabeled benchmarks at 25, 50, and 75% of the number line, positively affects third and fifth graders’ NLE performance and benchmark-based strategy use. It was found that only the older children benefitted from the presence of these benchmarks at the quartiles of the number line (i.e., 25 and 75%), as they made more use of these benchmarks, leading to more accurate estimates. A possible explanation for this lack of improvement in third graders might be their inability to correctly link the presented benchmarks with their corresponding numerical values. In the present study, we investigated whether labeling these benchmarks with their corresponding numerical values, would have a positive effect on younger children’s NLE performance and quartile-based strategy use as well. Third and sixth graders were assigned to one of three conditions: (a) a control condition with an empty number line bounded by 0 at the origin and 1,000 at the endpoint, (b) an unlabeled condition with three additional external benchmarks without numerical labels at 25, 50, and 75% of the number line, and (c) a labeled condition in which these benchmarks were labeled with 250, 500, and 750, respectively. Results indicated that labeling the benchmarks has a positive effect on third graders’ NLE performance and quartile-based strategy use, whereas sixth graders already benefited from the mere provision of unlabeled benchmarks. These findings imply that children’s benchmark-based strategy use can be stimulated by adding additional externally provided benchmarks on the number line, but that, depending on children’s age and familiarity with the number range, these additional external benchmarks might need to be labeled. PMID:28713302
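Studies of this kind typically score estimates with percent absolute error relative to the line's range; a sketch with invented responses on the 0-1,000 line used here:

```python
# Percent absolute error (PAE): |estimate - target| / scale * 100.
targets = [150, 490, 762]
estimates = [180, 500, 700]
scale = 1000                                 # length of the number line

pae = [abs(e - t) / scale * 100 for t, e in zip(targets, estimates)]
print([f"{x:.1f}%" for x in pae], f"mean = {sum(pae)/len(pae):.1f}%")
```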
Johnston, Lindsay C; Auerbach, Marc; Kappus, Liana; Emerson, Beth; Zigmont, Jason; Sudikoff, Stephanie N
2014-01-01
The GlideScope (GS) is used in pediatric endotracheal intubation (ETI) but requires a different technique from direct laryngoscopy (DL). This study evaluated the efficacy of exploration-based learning on procedural performance using the GS for ETI of simulated pediatric airways, and established baseline success rates and procedural durations using DL in airway trainers among pediatric providers at various levels. Fifty-five pediatric residents, fellows, and faculty from Pediatric Critical Care, the NICU, and Pediatric Emergency Medicine were enrolled. Nine physicians from Pediatric Anesthesia benchmarked expert performance. Participants completed a demographic survey and viewed a video by the GS manufacturer. Subjects spent 15 minutes exploring the GS equipment and practicing the intubation procedure. Participants then intubated neonatal, infant, child, and adult airway simulators, using GS and DL, in random order. Time to ETI was recorded. Procedural performance after exploration-based learning, measured as time to successful ETI, was shorter for DL than for GS for neonatal and child airways at the .05 significance level. Time to ETI in the adult airway using DL was correlated with experience level (p = .01). Failure rates did not differ among subgroups. A brief video and a period of exploration-based learning are insufficient for implementing a new technology. Pediatricians at various levels of training intubated simulated airways faster using DL than GS.
British Society of Interventional Radiology Iliac Artery Angioplasty-Stent Registry III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uberoi, Raman, E-mail: raman.uberoi@orh.nhs.uk; Milburn, Simon; Moss, Jon
2009-09-15
The objective of this study was to audit current practice in iliac artery intervention in the United Kingdom. In 2001 the British Society of Interventional Radiology Iliac Artery Angioplasty-Stent (BIAS) III registry provided the first national database for iliac intervention. It recommended that data collection continue in order to facilitate the dissemination of comparative data to individual units. BIAS III was designed to continue this work and has a simplified data set with an online submission form. Interventionalists were invited to complete a 3-page tick sheet for all iliac angioplasties and stents. Questions covered risk factors, procedural data, and outcome. Data for 2233 patients were submitted from 37 institutions over a 43-month period. Consultants performed 80% of the procedures, 62% of which were for claudication. Fifty-four percent of lesions were treated with stents and 25% of patients underwent bilateral intervention, resulting in a residual stenosis of <50% in 98% of cases. Ninety-seven percent of procedures had no limb complication, and there was a 98% inpatient survival rate. In conclusion, these figures provide an essential benchmark for both audit and patient information. National databases need to be expanded across the range of interventional procedures, and their collection made simple and, preferably, online.
Medical school benchmarking - from tools to programmes.
Wilkinson, Tim J; Hudson, Judith N; Mccoll, Geoffrey J; Hu, Wendy C Y; Jolly, Brian C; Schuwirth, Lambert W T
2015-02-01
Benchmarking among medical schools is essential but may have unwanted effects. We apply a conceptual framework to selected benchmarking activities of medical schools, presenting an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. The analogy generated a conceptual framework posing five questions to be considered in relation to benchmarking: What is the purpose? What are the attributes of value? What are the best tools to assess the attributes of value? What happens to the results? And what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.
42 CFR 457.430 - Benchmark-equivalent health benefits coverage.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 4 2011-10-01 2011-10-01 false Benchmark-equivalent health benefits coverage. 457... STATES State Plan Requirements: Coverage and Benefits § 457.430 Benchmark-equivalent health benefits coverage. (a) Aggregate actuarial value. Benchmark-equivalent coverage is health benefits coverage that has...